Abstract
Loop acceleration can be used to prove safety, reachability, runtime bounds, and (non)termination of programs. To this end, a variety of acceleration techniques have been proposed. However, so far all of them have been monolithic, i.e., a single loop could not be accelerated using a combination of several different acceleration techniques. In contrast, we present a calculus that allows for combining acceleration techniques in a modular way and we show how to integrate many existing acceleration techniques into our calculus. Moreover, we propose two novel acceleration techniques that can be incorporated into our calculus seamlessly. Some of these acceleration techniques apply only to nonterminating loops. Thus, combining them with our novel calculus results in a new, modular approach for proving nontermination. An empirical evaluation demonstrates the applicability of our approach, both for loop acceleration and for proving nontermination.
1 Introduction
In recent years, loop acceleration techniques have successfully been used to build static analyses for programs operating on integers [3, 10, 22, 27, 29, 44]. Essentially, such techniques extract a quantifier-free first-order formula \(\psi \) from a single-path loop \(\mathcal {T}\), i.e., a loop without branching in its body, such that \(\psi \) under-approximates (or is equivalent to) \(\mathcal {T}\). More specifically, each model of the resulting formula \(\psi \) corresponds to an execution of \(\mathcal {T}\) (and vice versa). By integrating such techniques into a suitable program analysis framework [4, 22, 27, 29, 38], whole programs can be transformed into first-order formulas which can then be analyzed by off-the-shelf solvers. Applications include proving safety [38] or reachability [38, 44], deducing bounds on the runtime complexity [27], and proving (non)termination [10, 22].
However, existing acceleration techniques apply only if certain prerequisites are in place. So the power of static analyses built upon loop acceleration depends on the applicability of the underlying acceleration technique.
In this paper, we introduce a calculus which allows for combining several acceleration techniques modularly in order to accelerate a single loop. This not only allows for modular combinations of standalone techniques, but it also enables interactions between different acceleration techniques, allowing them to obtain better results together. Consequently, our calculus can handle classes of loops where all standalone techniques fail. Moreover, we present two novel acceleration techniques and integrate them into our calculus.
One important application of loop acceleration is proving nontermination. As already observed in [22], certain properties of loops—in particular monotonicity of (parts of) the loop condition w.r.t. the loop body—are crucial for both loop acceleration and proving nontermination. In [22], this observation has been exploited to develop a technique for deducing invariants that are helpful to deal with nonterminating as well as terminating loops: For the former, they help to prove nontermination, and for the latter, they help to accelerate the loop.
In this paper, we take the next step by also unifying the actual techniques that are used for loop acceleration and for proving nontermination. To this end, we identify loop acceleration techniques that, if applied in isolation, give rise to nontermination proofs. Furthermore, we show that the combination of such nontermination techniques via our novel calculus for loop acceleration gives rise to nontermination proofs, too. In this way, we obtain a modular framework for combining several different nontermination techniques in order to prove nontermination of a single loop.
In the following, we introduce preliminaries in Sect. 2. Then, we discuss existing acceleration techniques in Sect. 3. In Sect. 4, we present our calculus to combine acceleration techniques and show how existing acceleration techniques can be integrated into our framework. Section 5 lifts existing acceleration techniques to conditional acceleration techniques, which provides additional power in the context of our framework by enabling interactions between different acceleration techniques. Next, we present two novel acceleration techniques and incorporate them into our calculus in Sect. 6. Then, we adapt our calculus and certain acceleration techniques for proving nontermination in Sect. 7. After discussing related work in Sect. 8, we demonstrate the applicability of our approach via an empirical evaluation in Sect. 9 and conclude in Sect. 10.
A conference version of this paper was published in [23]. The present paper provides the following additional contributions:

We present formal correctness proofs for all of our results, which were omitted in [23] for reasons of space.

We present an improved version of the loop acceleration technique from [27, Thm. 3.8] and [28, Thm. 7] that yields simpler formulas.

We prove an informal statement from [23] on using arbitrary existing acceleration techniques in our setting, resulting in the novel Lemma 1.

The adaptation of our calculus and of certain acceleration techniques for proving nontermination (Sect. 7) is completely new.

We extend the empirical evaluation from [23] with extensive experiments comparing the adaptation of our calculus for proving nontermination with other state-of-the-art tools for proving nontermination (Sect. 9.2).
2 Preliminaries
We use the notation \(\vec {x}\), \(\vec {y}\), \(\vec {z}\), ... for vectors. Let \(\mathscr {C}(\vec {z})\) be the set of closed-form expressions over the variables \(\vec {z}\). So \(\mathscr {C}(\vec {z})\) may, for example, be defined to be the smallest set containing all expressions built from \(\vec {z}\), integer constants, and binary function symbols \(\{{+},{-},{\cdot },{/},{\exp }\}\) for addition, subtraction, multiplication, division, and exponentiation. However, there is no widely accepted definition of “closed forms”, and the results of the current paper are independent of the precise definition of \(\mathscr {C}(\vec {z})\) (which may use other function symbols). Thus, we leave its definition open to avoid restricting our results unnecessarily. We consider loops of the form
where \(\vec {x}\) is a vector of d pairwise different variables that range over the integers, the loop condition \(\varphi \in FO_{QF} (\mathscr {C}(\vec {x}))\) (which we also call guard) is a finite quantifier-free first-order formula over the atoms \(\{p>0 \mid p \in \mathscr {C}(\vec {x})\}\), and \(\vec {a} \in \mathscr {C}(\vec {x})^d\) such that the function^{Footnote 1}\(\vec {x} \mapsto \vec {a}\) maps integers to integers. \( Loop \) denotes the set of all such loops.
We identify \(\mathcal {T}_{loop}\) and the pair \(\langle \varphi , \vec {a} \rangle \). Moreover, we identify \(\vec {a}\) and the function \(\vec {x} \mapsto \vec {a}\), where we sometimes write \(\vec {a}(\vec {x})\) to make the variables \(\vec {x}\) explicit. We use the same convention for other (vectors of) expressions. Similarly, we identify the formula \(\varphi (\vec {x})\) (or just \(\varphi \)) with the predicate \(\vec {x} \mapsto \varphi \). We use the standard integer–arithmetic semantics for the symbols occurring in formulas.
Throughout this paper, let n be a designated variable ranging over \(\mathbb {N}= \{0,1,2,\ldots \}\) and let:
Intuitively, the variable n represents the number of loop iterations and \(\vec {x}'\) corresponds to the values of the program variables \(\vec {x}\) after n iterations.
\(\mathcal {T}_{loop}\) induces a relation \({\longrightarrow _{\mathcal {T}_{loop}}}\) on \(\mathbb {Z}^d\):
3 Existing acceleration techniques
In the following (up to and including Sect. 6), our goal is to accelerate \(\mathcal {T}_{loop}\), i.e., to find a formula \(\psi \in FO_{QF} (\mathscr {C}(\vec {y}))\) such that
To see why we use \(\mathscr {C}(\vec {y})\) instead of, e.g., polynomials, consider the loop
Here an acceleration technique synthesizes, e.g., the formula
where \(\left( {\begin{matrix} x_1-n\\ 2^n \cdot x_2 \end{matrix}}\right) \) is equivalent to the value of \(\left( {\begin{matrix} x_1\\ x_2 \end{matrix}}\right) \) after n iterations, and the inequation \(x_1 - n + 1 > 0\) ensures that \(\mathcal {T}_{exp}\) can be executed at least n times. Clearly, the growth of \(x_2\) cannot be captured by a polynomial, i.e., even the behavior of quite simple loops is beyond the expressiveness of polynomial arithmetic.
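Since \(\mathcal {T}_{exp}\) and its accelerated formula are fully concrete, the claimed equivalence can be sanity-checked by brute force. The following Python sketch (our illustration; the function names are ours, not part of the formal development) compares the formula against actually unrolling the loop on a small grid of inputs:

```python
# Loop T_exp: while x1 > 0 do (x1, x2) := (x1 - 1, 2 * x2)
def step(x1, x2):
    return x1 - 1, 2 * x2

def runs_n_times(x1, x2, n):
    """True iff the guard x1 > 0 holds before each of the first n iterations."""
    for _ in range(n):
        if not x1 > 0:
            return False
        x1, x2 = step(x1, x2)
    return True

def psi_exp(x1, x2, x1p, x2p, n):
    """Accelerated formula: the closed form plus the bound x1 - n + 1 > 0."""
    return x1p == x1 - n and x2p == 2 ** n * x2 and x1 - n + 1 > 0

def check(bound=6):
    for x1 in range(-2, bound):
        for x2 in range(-2, bound):
            for n in range(1, bound):
                # the formula holds for the closed-form successor state
                # exactly when the loop really admits n iterations
                assert psi_exp(x1, x2, x1 - n, 2 ** n * x2, n) \
                    == runs_n_times(x1, x2, n)
    return True
```

Such an exhaustive check on a finite grid is of course no substitute for a formal proof, but it makes the correspondence between models of \(\psi \) and executions of the loop tangible.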
In practice, one can restrict our approach to weaker classes of expressions to ease automation, but the presented results are independent of such considerations.
Some acceleration techniques cannot guarantee (equiv), but the resulting formula is an under-approximation of \(\mathcal {T}_{loop}\), i.e., we have
If (equiv) holds, then \(\psi \) is equivalent to \(\mathcal {T}_{loop}\). Similarly, if (approx) holds, then \(\psi \) approximates \(\mathcal {T}_{loop}\).^{Footnote 2}
Definition 1
(Acceleration Technique) An acceleration technique is a partial function
It is sound if the formula \( accel (\mathcal {T}\,)\) approximates \(\mathcal {T}\) for all \(\mathcal {T}\in {{\,\mathrm{dom}\,}}( accel )\). It is exact if the formula \( accel (\mathcal {T}\,)\) is equivalent to \(\mathcal {T}\) for all \(\mathcal {T}\in {{\,\mathrm{dom}\,}}( accel )\).
We now recall several existing acceleration techniques. In Sect. 4, we will see how these techniques can be combined in a modular way. All of them first compute a closed form \(\vec {c} \in \mathscr {C}(\vec {x},n)^d\) for the values of the program variables after n iterations.
Definition 2
(Closed Form) We call \(\vec {c} \in \mathscr {C}(\vec {x},n)^d\) a closed form of \(\mathcal {T}_{loop}\) if
Here, \(\vec {a}^n\) is the nfold application of \(\vec {a}\), i.e., \(\vec {a}^0(\vec {x}) = \vec {x}\) and \(\vec {a}^{n+1}(\vec {x}) = \vec {a}(\vec {a}^n(\vec {x}))\).
To find closed forms, one tries to solve the system of recurrence equations \(\vec {x}^{(n)} = \vec {a}(\vec {x}^{(n-1)})\) with the initial condition \(\vec {x}^{(0)} = \vec {x}\). In the sequel, we assume that we can represent \(\vec {a}^n(\vec {x})\) in closed form. Note that one can always do so if \(\vec {a}(\vec {x}) = A\vec {x} + \vec {b}\) with \(A \in \mathbb {Z}^{d \times d}\) and \(\vec {b} \in \mathbb {Z}^d\), i.e., if \(\vec {a}\) is linear. To this end, one considers the matrix \(B :=\left( {\begin{matrix} A &{} \vec {b} \\ \vec {0}^T &{} 1 \end{matrix}}\right) \) and computes its Jordan normal form \(J = T^{-1} \cdot B \cdot T\), where T is an invertible matrix and J is a block diagonal matrix (which has complex entries if B has complex eigenvalues). Then the closed form for \(J^n\) can be given directly (see, e.g., [48]), and \(\vec {a}^n(\vec {x})\) is equal to the first d components of \(T \cdot J^n \cdot T^{-1} \cdot \left( {\begin{matrix} \vec {x} \\ 1 \end{matrix}}\right) \). Moreover, one can compute a closed form if \(\vec {a} = \left( {\begin{matrix} c_1 \cdot x_1 + p_1\\ \ldots \\ c_d \cdot x_d + p_d \end{matrix}}\right) \) where \(c_i \in \mathbb {N}\) and each \(p_i\) is a polynomial over \(x_1,\ldots ,x_{i-1}\) [26, 37].
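The identity underlying this construction, namely that \(\vec {a}^n(\vec {x})\) equals the first d components of \(B^n \cdot (\vec {x}, 1)\), can be illustrated without computing a Jordan normal form at all, since integer matrix powers are exact. The following sketch (ours; a numeric illustration rather than a symbolic closed-form computation) compares both sides for an affine update:

```python
# For a linear update a(x) = A*x + b, the n-fold application a^n(x) equals
# the first d components of B^n * (x, 1), with B as defined in the text.
def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def mat_pow(M, n):
    # binary exponentiation over integer matrices (exact arithmetic)
    R = [[int(i == j) for j in range(len(M))] for i in range(len(M))]
    while n:
        if n & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        n >>= 1
    return R

def a_n_via_B(A, b, x, n):
    d = len(x)
    B = [A[i] + [b[i]] for i in range(d)] + [[0] * d + [1]]
    Bn = mat_pow(B, n)
    v = x + [1]
    return [sum(Bn[i][j] * v[j] for j in range(d + 1)) for i in range(d)]

def a_n_iterated(A, b, x, n):
    for _ in range(n):
        x = [sum(A[i][j] * x[j] for j in range(len(x))) + b[i]
             for i in range(len(x))]
    return x
```

In an implementation one would instead compute \(J^n\) symbolically to obtain a closed form in n; the sketch above only demonstrates that B faithfully encodes the affine update.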
3.1 Acceleration via decrease or increase
The first acceleration technique discussed in this section exploits the following observation: If \(\varphi (\vec {a}(\vec {x}))\) implies \(\varphi (\vec {x})\) and if \(\varphi (\vec {a}^{n-1}(\vec {x}))\) holds, then the loop condition \(\varphi \) of \(\mathcal {T}_{loop}\) is satisfied throughout (at least) n loop iterations. In other words, this technique requires that the indicator function (or characteristic function) \(I_{\varphi }: \mathbb {Z}^d \rightarrow \{0,1\}\) of \(\varphi \) with \(I_{\varphi }(\vec {x}) = 1 \iff \varphi (\vec {x})\) is monotonically decreasing w.r.t. \(\vec {a}\), i.e., \(I_{\varphi }(\vec {x}) \ge I_{\varphi }(\vec {a}(\vec {x}))\).
Theorem 1
(Acceleration via Monotonic Decrease [44]) If
then the following acceleration technique is exact:
We will prove the more general Theorem 7 in Sect. 5.
So, for example, Theorem 1 accelerates \(\mathcal {T}_{exp}\) to \(\psi _{exp}\). However, the requirement \(\varphi (\vec {a}(\vec {x})) \implies \varphi (\vec {x})\) is often violated in practice. To see this, consider the loop
It cannot be accelerated with Theorem 1 as
A dual acceleration technique to Theorem 1 is obtained by “reversing” the implication in the prerequisites of the theorem, i.e., by requiring
So the resulting dual acceleration technique applies iff \(\varphi \) is a loop invariant of \(\mathcal {T}_{loop}\).^{Footnote 3} Then \(\{\vec {x} \in \mathbb {Z}^d \mid \varphi (\vec {x})\}\) is a recurrent set [36] (see also Sect. 8.2) of \(\mathcal {T}_{loop}\). In other words, this acceleration technique applies if \(I_{\varphi }\) is monotonically increasing w.r.t. \(\vec {a}\).
Theorem 2
(Acceleration via Monotonic Increase) If
then the following acceleration technique is exact:
We will prove the more general Theorem 8 in Sect. 5.
Example 1
As a minimal example, Theorem 2 accelerates
to \(x' = x + n \wedge x > 0\).
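Example 1 can likewise be validated empirically. In this sketch (ours), the invariance of \(x > 0\) makes the accelerated formula exact:

```python
# Minimal loop from Example 1: while x > 0 do x := x + 1
def runs_n_times(x, n):
    for _ in range(n):
        if not x > 0:
            return False
        x = x + 1
    return True

def psi(x, xp, n):
    # accelerated formula: x' = x + n  /\  x > 0
    # (x > 0 is an invariant, so requiring it initially suffices)
    return xp == x + n and x > 0

def check(bound=8):
    for x in range(-3, bound):
        for n in range(1, bound):
            assert psi(x, x + n, n) == runs_n_times(x, n)
    return True
```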
3.2 Acceleration via decrease and increase
Both acceleration techniques presented so far have been generalized in [22].
Theorem 3
(Acceleration via Monotonicity [22]) If
then the following acceleration technique is exact:
Proof
Immediate consequence of Theorem 5 and Remark 1, which are proved in Sects. 4 and 5. \(\square \)
Here, \(\varphi _1\) and \(\varphi _3\) are again invariants of the loop. Thus, as in Theorem 2 it suffices to require that they hold before entering the loop. On the other hand, \(\varphi _2\) needs to satisfy a similar condition as in Theorem 1, and thus, it suffices to require that \(\varphi _2\) holds before the last iteration. In such cases, i.e., if
is valid, we also say that \(\varphi _2\) is a converse invariant (w.r.t. \(\varphi _1\)). It is easy to see that Theorem 3 is equivalent to Theorem 1 if \(\varphi _1 \equiv \varphi _3 \equiv \top \) (where \(\top \) denotes logical truth) and it is equivalent to Theorem 2 if \(\varphi _2 \equiv \varphi _3 \equiv \top \).
Example 2
With Theorem 3, \(\mathcal {T}_{non\text {-}dec}\) can be accelerated to
by choosing \(\varphi _1 :=x_2 > 0\), \(\varphi _2 :=x_1 > 0\), and \(\varphi _3 :=\top \).
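Concretely, \(\mathcal {T}_{non\text {-}dec}\) has the guard \(x_1>0 \wedge x_2>0\) and the update \((x_1,x_2) := (x_1-1,\, x_2+1)\) (cf. Example 5). The following sketch (ours) confirms the chosen invariant and converse invariant by brute force:

```python
# T_non-dec: while x1 > 0 and x2 > 0 do (x1, x2) := (x1 - 1, x2 + 1)
def step(x1, x2):
    return x1 - 1, x2 + 1

def runs_n_times(x1, x2, n):
    for _ in range(n):
        if not (x1 > 0 and x2 > 0):
            return False
        x1, x2 = step(x1, x2)
    return True

def accelerated_guard(x1, x2, n):
    # phi_1 = x2 > 0 is an invariant, so it is only required initially;
    # phi_2 = x1 > 0 is a converse invariant, so it is required before
    # the last of the n iterations: x1 - n + 1 > 0
    return x2 > 0 and x1 - n + 1 > 0

def check(bound=7):
    for x1 in range(-2, bound):
        for x2 in range(-2, bound):
            for n in range(1, bound):
                assert accelerated_guard(x1, x2, n) == runs_n_times(x1, x2, n)
    return True
```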
Theorem 3 naturally raises the question: Why do we need two invariants? To see this, consider a restriction of Theorem 3 where \(\varphi _3 :=\top \). It would fail for a loop like
which can easily be handled by Theorem 3 (by choosing \(\varphi _1 :=\top \), \(\varphi _2 :=x_2 > 0\), and \(\varphi _3 :=x_1 > 0\)). The problem is that the converse invariant \(x_2 > 0\) is needed to prove invariance of \(x_1 > 0\). Similarly, a restriction of Theorem 3 where \(\varphi _1 :=\top \) would fail for the following variant of \(\mathcal {T}_{2\text {-}invs}\):
Here, the problem is that the invariant \(x_2 > 0\) is needed to prove converse invariance of \(x_1 > 0\).
3.3 Acceleration via metering functions
Another approach for loop acceleration uses metering functions, a variation of classical ranking functions from termination and complexity analysis [28]. While ranking functions give rise to upper bounds on the runtime of loops, metering functions provide lower runtime bounds, i.e., the definition of a metering function \( mf : \mathbb {Z}^d \rightarrow \mathbb {Q}\) ensures that for each \(\vec {x} \in \mathbb {Z}^d\), the loop under consideration can be applied at least \(\lceil mf (\vec {x}) \rceil \) times.
Definition 3
(Metering Function [28]) We call a function \( mf : \mathbb {Z}^d \rightarrow \mathbb {Q}\) a metering function if the following holds:
We can use metering functions to accelerate loops as follows:
Theorem 4
(Acceleration via Metering Functions [27, 28]) Let \( mf \) be a metering function for \(\mathcal {T}_{loop}\). Then, the following acceleration technique is sound:
We will prove the more general Theorem 9 in Sect. 5. In contrast to [27, Thm. 3.8] and [28, Thm. 7], the acceleration technique from Theorem 4 does not conjoin the loop condition \(\varphi \) to the result, which turned out to be superfluous. The reason is that \(0< n < mf (\vec {x}) + 1\) implies \(\varphi \) due to (mfbounded).
Example 3
Using the metering function \((x_1,x_2) \mapsto x_1\), Theorem 4 accelerates \(\mathcal {T}_{exp}\) to
However, synthesizing nontrivial (i.e., nonconstant) metering functions is challenging. Moreover, unless the number of iterations of \(\mathcal {T}_{loop}\) equals \(\lceil mf (\vec {x}) \rceil \) for all \(\vec {x} \in \mathbb {Z}^d\), acceleration via metering functions is not exact.
Linear metering functions can be synthesized via Farkas’ Lemma and SMT solving [28]. However, many loops have only trivial linear metering functions. To see this, reconsider \(\mathcal {T}_{non\text {-}dec}\). Here, \((x_1,x_2) \mapsto x_1\) is not a metering function as \(\mathcal {T}_{non\text {-}dec}\) cannot be iterated at least \(x_1\) times if \(x_2 \le 0\). Thus, [27] proposes a refinement of [28] based on metering functions of the form \(\vec {x} \mapsto I_{\xi }(\vec {x}) \cdot f(\vec {x})\) where \(\xi \in FO_{QF} (\mathscr {C}(\vec {x}))\) and f is linear. With this improvement, the metering function
can be used to accelerate \(\mathcal {T}_{non\text {-}dec}\) to
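The defining property of a metering function, namely that the loop can be iterated at least \(\lceil mf (\vec {x}) \rceil \) times from every state, is easy to test numerically. A sketch (ours) for the metering function \(\vec {x} \mapsto I_{x_2>0}(\vec {x}) \cdot x_1\) of \(\mathcal {T}_{non\text {-}dec}\), which we assume to be the one meant above:

```python
import math

# T_non-dec: while x1 > 0 and x2 > 0 do (x1, x2) := (x1 - 1, x2 + 1)
def iterations(x1, x2, cap=1000):
    """Number of iterations the loop performs from (x1, x2), capped."""
    n = 0
    while x1 > 0 and x2 > 0 and n < cap:
        x1, x2 = x1 - 1, x2 + 1
        n += 1
    return n

def mf(x1, x2):
    # I_{x2>0}(x) * x1: the candidate metering function
    return x1 if x2 > 0 else 0

def check(bound=8):
    for x1 in range(-3, bound):
        for x2 in range(-3, bound):
            # (mf-bounded): the loop admits at least ceil(mf(x)) iterations
            assert iterations(x1, x2) >= math.ceil(mf(x1, x2))
    return True
```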
4 A calculus for modular loop acceleration
All acceleration techniques presented so far are monolithic: Either they accelerate a loop successfully or they fail completely. In other words, we cannot combine several techniques to accelerate a single loop. To this end, we now present a calculus that repeatedly applies acceleration techniques to simplify an acceleration problem resulting from a loop \(\mathcal {T}_{loop}\) until it is solved and hence gives rise to a suitable \(\psi \in FO_{QF} (\mathscr {C}(\vec {y}))\) which approximates or is equivalent to \(\mathcal {T}_{loop}\).
Definition 4
(Acceleration Problem) A tuple
where \(\psi \in FO_{QF} (\mathscr {C}(\vec {y}))\), \(\check{\varphi }, \hat{\varphi } \in FO_{QF} (\mathscr {C}(\vec {x}))\), and \(\vec {a}: \mathbb {Z}^d \rightarrow \mathbb {Z}^d\) is an acceleration problem. It is consistent if \(\psi \) approximates \(\langle \check{\varphi }, \vec {a} \rangle \), exact if \(\psi \) is equivalent to \(\langle \check{\varphi }, \vec {a} \rangle \), and solved if it is consistent and \(\hat{\varphi } \equiv \top \). The canonical acceleration problem of a loop \(\mathcal {T}_{loop}\) is
Example 4
The canonical acceleration problem of \(\mathcal {T}_{non\text {-}dec}\) is
The first component \(\psi \) of an acceleration problem is the partial result that has been computed so far. The second component \(\check{\varphi }\) corresponds to the part of the loop condition that has already been processed successfully. As our calculus preserves consistency, \(\psi \) always approximates \(\langle \check{\varphi }, \vec {a} \rangle \). The third component is the part of the loop condition that remains to be processed, i.e., the loop \(\langle \hat{\varphi },\vec {a} \rangle \) still needs to be accelerated. The goal of our calculus is to transform a canonical into a solved acceleration problem.
More specifically, whenever we have simplified a canonical acceleration problem
to
then \(\varphi \equiv \check{\varphi } \wedge \hat{\varphi }\) and
Then it suffices to find some \(\psi _2 \in FO_{QF} (\mathscr {C}(\vec {y}))\) such that
The reason is that we have \({\longrightarrow _{\langle \check{\varphi }, \vec {a} \rangle }} \cap {\longrightarrow _{\langle \hat{\varphi }, \vec {a} \rangle }} = {\longrightarrow _{\langle \check{\varphi }\wedge \hat{\varphi }, \vec {a} \rangle }} = {\longrightarrow _{\langle \varphi , \vec {a} \rangle }}\) and thus
i.e., \(\psi _1 \wedge \psi _2\) approximates \(\mathcal {T}_{loop}\).
Note that the acceleration techniques presented so far would map \(\langle \hat{\varphi }, \vec {a} \rangle \) to some \(\psi _2 \in FO_{QF} (\mathscr {C}(\vec {y}))\) such that
which does not use the information that we have already accelerated \(\langle \check{\varphi }, \vec {a} \rangle \). In Sect. 5, we will adapt all acceleration techniques from Sect. 3 to search for some \(\psi _2 \in FO_{QF} (\mathscr {C}(\vec {y}))\) that satisfies (1) instead of (2), i.e., we will turn them into conditional acceleration techniques.
Definition 5
(Conditional Acceleration) We call a partial function
a conditional acceleration technique. It is sound if
for all \((\langle \chi , \vec {a} \rangle ,\check{\varphi }) \in {{\,\mathrm{dom}\,}}( accel )\), \(\vec {x},\vec {x}' \in \mathbb {Z}^d\), and \(n > 0\). It is exact if additionally
for all \((\langle \chi , \vec {a} \rangle ,\check{\varphi }) \in {{\,\mathrm{dom}\,}}( accel )\), \(\vec {x},\vec {x}' \in \mathbb {Z}^d\), and \(n > 0\).
Note that every acceleration technique gives rise to a conditional acceleration technique in a straightforward way (by disregarding the second argument \(\check{\varphi }\) of \( accel \) in Definition 5). Soundness and exactness can be lifted directly to the conditional setting.
Lemma 1
(Acceleration as Conditional Acceleration) Let \( accel _0\) be an acceleration technique following Definition 1 such that \( accel _0(\langle \chi , \vec {a} \rangle ) \implies \vec {x}' = \vec {a}^n(\vec {x})\) is valid whenever \(\langle \chi , \vec {a} \rangle \in {{\,\mathrm{dom}\,}}( accel _0)\). Then for the conditional acceleration technique \( accel \) given by \( accel (\mathcal {T},\check{\varphi }) := accel _0(\mathcal {T}\,)\), the following holds:

1.
\( accel \) is sound if and only if \( accel _0\) is sound

2.
\( accel \) is exact if and only if \( accel _0\) is exact
Proof
For the “if” direction of 1., we need to show that
if \( accel _0\) is a sound acceleration technique. Thus:
For the “only if” direction of 1., we need to show that
if \( accel \) is a sound conditional acceleration technique.
Thus:
For the “if” direction of 2., soundness of \( accel \) follows from 1. We still need to show that
if \( accel _0\) is an exact acceleration technique. Thus:
For the “only if” direction of 2., soundness of \( accel _0\) follows from 1. We still need to show that
if \( accel \) is an exact conditional acceleration technique. Thus:
\(\square \)
We are now ready to present our acceleration calculus, which combines loop acceleration techniques in a modular way. In the following, w.l.o.g. we assume that formulas are in CNF, and we identify the formula \(\bigwedge _{i=1}^k C_i\) with the set of clauses \(\{C_i \mid 1 \le i \le k\}\).
Definition 6
(Acceleration Calculus) The relation \({\leadsto }\) on acceleration problems is defined by the rule
where \( accel \) is a sound conditional acceleration technique.
A \({\leadsto }\)-step is exact (written \({\leadsto _e}\)) if \( accel \) is exact.
So our calculus allows us to pick a subset \(\chi \) (of clauses) from the yet unprocessed condition \(\hat{\varphi }\) and “move” it to \(\check{\varphi }\), which has already been processed successfully. To this end, \(\langle \chi , \vec {a} \rangle \) needs to be accelerated by a conditional acceleration technique, i.e., when accelerating \(\langle \chi , \vec {a} \rangle \) we may assume \(\vec {x} \longrightarrow _{\langle \check{\varphi }, \vec {a} \rangle }^n \vec {x}'\).
With Lemma 1, our calculus allows for combining arbitrary existing acceleration techniques without adapting them. However, many acceleration techniques can easily be turned into more sophisticated conditional acceleration techniques (see Sect. 5), which increases the power of our approach.
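To make the mechanics of Definition 6 concrete, the following Python sketch (entirely ours; clauses and result formulas are represented as opaque strings, and conditional acceleration techniques as callables that return a formula or None) implements a naive driver for the calculus:

```python
from itertools import combinations

def accelerate(clauses, techniques):
    """Simplified calculus driver: repeatedly move a subset chi of the
    unprocessed clauses to the processed part once some conditional
    acceleration technique handles <chi, a> under the processed premise."""
    processed, result = [], []
    remaining = list(clauses)
    progress = True
    while remaining and progress:
        progress = False
        # try all non-empty subsets chi of the remaining clauses, largest first
        for r in range(len(remaining), 0, -1):
            for chi in combinations(remaining, r):
                for tech in techniques:
                    psi = tech(list(chi), processed)  # may return None
                    if psi is not None:
                        result.append(psi)
                        processed.extend(chi)
                        remaining = [c for c in remaining if c not in chi]
                        progress = True
                        break
                if progress:
                    break
            if progress:
                break
    # the problem is solved iff no clauses remain; the accelerated
    # formula is then the conjunction of the partial results
    return result if not remaining else None
```

A technique that handles \(x_2>0\) unconditionally together with one that handles \(x_1>0\) only under the premise \(x_2>0\) solves the guard \(x_1>0 \wedge x_2>0\) in two \({\leadsto }\)-steps, mirroring the conditional derivations of Sect. 5.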
Example 5
We continue Example 4, where we fix \(\chi :=x_1>0\) for the first acceleration step. Thus, we first need to accelerate the loop \(\left\langle x_1>0, \left( {\begin{matrix} x_1-1\\ x_2+1 \end{matrix}}\right) \right\rangle \) to enable a first \(\leadsto \)-step, and we need to accelerate \(\left\langle x_2>0, \left( {\begin{matrix} x_1-1\\ x_2+1 \end{matrix}}\right) \right\rangle \) afterward. The resulting derivation is shown in Fig. 1. Thus, we successfully constructed the formula \(\psi _{non\text {-}dec}\), which is equivalent to \(\mathcal {T}_{non\text {-}dec}\). Note that here neither of the two steps benefits from the component \(\hat{\varphi }\) of the acceleration problems. We will introduce more powerful conditional acceleration techniques that benefit from \(\hat{\varphi }\) in Sect. 5.
The crucial property of our calculus is the following.
Lemma 2
The relation \(\leadsto \) preserves consistency, and the relation \(\leadsto _e\) preserves exactness.
Proof
For the first part of the lemma, assume
where is consistent and
We get
The first step holds since is consistent and the second step holds since \( accel \) is sound. This proves consistency of
For the second part of the lemma, assume
where is exact and \( accel (\langle \chi , \vec {a} \rangle , \check{\varphi }) = \psi _2\).
We get
which proves exactness of
\(\square \)
Then, the correctness of our calculus follows immediately. The reason is that
implies \(\varphi \equiv \check{\varphi }\).
Theorem 5
(Correctness of \({\leadsto }\)) If
then \(\psi \) approximates \(\mathcal {T}_{loop}\). If
then \(\psi \) is equivalent to \(\mathcal {T}_{loop}\).
Termination of our calculus is trivial, as the size of the third component \(\hat{\varphi }\) of the acceleration problem is decreasing.
Theorem 6
(WellFoundedness of \({\leadsto }\)) The relation \({\leadsto }\) is wellfounded.
5 Conditional acceleration techniques
We now show how to turn the acceleration techniques from Sect. 3 into conditional acceleration techniques, starting with acceleration via monotonic decrease.
Theorem 7
(Conditional Acceleration via Monotonic Decrease) If
then the following conditional acceleration technique is exact:
Proof
For soundness, we need to prove
for all \(m > 0\). We use induction on m. If \(m=1\), then
In the induction step, we have
For exactness, we need to prove
for all \(m>0\), which is trivial. \(\square \)
So we just add \(\check{\varphi }\) to the premise of the implication that needs to be checked to apply acceleration via monotonic decrease. Theorem 2 can be adapted analogously.
Theorem 8
(Conditional Acceleration via Monotonic Increase) If
then the following conditional acceleration technique is exact:
Proof
For soundness, we need to prove
for all \(m>0\). We use induction on m. If \(m=1\), then
In the induction step, we have
For exactness, we need to prove
for all \(m>0\), which is trivial. \(\square \)
Example 6
For the canonical acceleration problem of \(\mathcal {T}_{2\text {-}invs}\), we obtain the derivation shown in Fig. 2, where \(\vec {a}_{2\text {-}invs} :=\left( {\begin{matrix} x_1+x_2\\ x_2-1 \end{matrix}}\right) \). While we could also use Theorem 1 for the first step, Theorem 2 is inapplicable in the second step. The reason is that we need the converse invariant \(x_2>0\) to prove invariance of \(x_1 > 0\).
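The derivation of Example 6 can be cross-checked numerically. In the sketch below (ours), the accelerated guard combines Theorem 7 for \(x_2>0\) with the conditional Theorem 8 for \(x_1>0\):

```python
# T_2-invs: while x1 > 0 and x2 > 0 do (x1, x2) := (x1 + x2, x2 - 1)
def step(x1, x2):
    return x1 + x2, x2 - 1

def runs_n_times(x1, x2, n):
    for _ in range(n):
        if not (x1 > 0 and x2 > 0):
            return False
        x1, x2 = step(x1, x2)
    return True

def accelerated_guard(x1, x2, n):
    # x2 > 0 is accelerated via monotonic decrease (Theorem 7): x2 - n + 1 > 0;
    # x1 > 0 via conditional monotonic increase (Theorem 8), using the
    # already-processed clause x2 > 0 as premise, so x1 > 0 suffices initially
    return x2 - n + 1 > 0 and x1 > 0

def check(bound=7):
    for x1 in range(-2, bound):
        for x2 in range(-2, bound):
            for n in range(1, bound):
                assert accelerated_guard(x1, x2, n) == runs_n_times(x1, x2, n)
    return True
```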
It is not a coincidence that \(\mathcal {T}_{2\text {-}invs}\), which could also be accelerated with acceleration via monotonicity (see Theorem 3) directly, can be handled by applying our novel calculus with Theorems 7 and 8.
Remark 1
If applying acceleration via monotonicity to \(\mathcal {T}_{loop}\) yields \(\psi \), then
where either Theorem 7 or Theorem 8 is applied in each \({\leadsto _e}\)-step.
Proof
As Theorem 3 applies, we have
where
If \(\varphi _1 \ne \top \), then Theorem 8 applies to \(\langle \varphi _1, \vec {a} \rangle \) with \(\check{\varphi } :=\top \) due to (7), and we obtain:
Next, if \(\varphi _2 \ne \top \), then Theorem 7 applies to \(\langle \varphi _2, \vec {a} \rangle \) with \(\check{\varphi }:=\varphi _1\) due to (8) and we obtain:
Finally, if \(\varphi _3 \ne \top \), then Theorem 8 applies to \(\langle \varphi _3, \vec {a} \rangle \) with \(\check{\varphi }:=\varphi _1 \wedge \varphi _2\) due to (9) and we obtain
\(\square \)
Thus, there is no need for a conditional variant of acceleration via monotonicity. Note that combining Theorems 7 and 8 with our calculus is also useful for loops where acceleration via monotonicity is inapplicable.
Example 7
Consider the following loop, which can be accelerated by splitting its guard into one invariant and two converse invariants.
Let
and let \(x_i^{(m)}\) be the i-th component of \(\vec {a}_{2\text {-}c\text {-}invs}^{m}(\vec {x})\). Starting with the canonical acceleration problem of \(\mathcal {T}_{2\text {-}c\text {-}invs}\), we obtain the derivation shown in Fig. 3.
Finally, we present a variant of Theorem 4 for conditional acceleration. The idea is similar to the approach for deducing metering functions of the form \(\vec {x} \mapsto I_{\check{\varphi }}(\vec {x}) \cdot f(\vec {x})\) from [27] (see Sect. 3.3 for details). But in contrast to [27], in our setting the “conditional” part \(\check{\varphi }\) does not need to be an invariant of the loop.
Theorem 9
(Conditional Acceleration via Metering Functions) Let \( mf : \mathbb {Z}^d \rightarrow \mathbb {Q}\).
If
then the following conditional acceleration technique is sound:
Proof
We need to prove
for all \(m > 0\). Note that (11) is equivalent to
We use induction on m. If \(m=1\), then
In the induction step, assume
Then, we have:
\(\square \)
6 Acceleration via eventual monotonicity
The combination of the calculus from Sect. 4 and the conditional acceleration techniques from Sect. 5 still fails to handle certain interesting classes of loops. Thus, to improve the applicability of our approach we now present two new acceleration techniques based on eventual monotonicity.
6.1 Acceleration via eventual decrease
All (combinations of) techniques presented so far fail for the following example.
The reason is that \(x_1\) does not behave monotonically, i.e., \(x_1 > 0\) is neither an invariant nor a converse invariant. Essentially, \(\mathcal {T}_{ev\text {-}dec}\) proceeds in two phases: In the first (optional) phase, \(x_2\) is positive and hence the value of \(x_1\) is monotonically increasing. In the second phase, \(x_2\) is non-positive and consequently the value of \(x_1\) decreases (weakly) monotonically. The crucial observation is that once the value of \(x_1\) decreases, it can never increase again. Thus, despite the non-monotonic behavior of \(x_1\), it suffices to require that \(x_1 > 0\) holds before the first and before the n-th loop iteration to ensure that the loop can be iterated at least n times.
Theorem 10
(Acceleration via Eventual Decrease) If \(\varphi (\vec {x}) \equiv \bigwedge _{i=1}^k C_i\) where each clause \(C_i\) contains an inequation \(e_i(\vec {x}) > 0\) such that
then the following acceleration technique is sound:
If \(C_i \equiv e_i > 0\) for all \(i \in [1,k]\), then it is exact.
We will prove the more general Theorem 11 later in this section.
Example 8
With Theorem 10, we can accelerate \(\mathcal {T}_{ev\text {-}dec}\) to
as we have
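The effect of Theorem 10 on \(\mathcal {T}_{ev\text {-}dec}\), which we take to be the loop with guard \(x_1>0\) and update \((x_1,x_2) := (x_1+x_2,\, x_2-1)\) as described above, can be checked by brute force: the closed form of \(x_1\) is concave in the iteration count, so positivity at the start and before the n-th iteration implies positivity in between. A sketch (ours):

```python
# T_ev-dec (as described in the text): while x1 > 0 do
#   (x1, x2) := (x1 + x2, x2 - 1)
def x1_closed(x1, x2, m):
    # closed form of x1 after m iterations; concave (quadratic) in m
    return x1 + m * x2 - m * (m - 1) // 2

def runs_n_times(x1, x2, n):
    for _ in range(n):
        if not x1 > 0:
            return False
        x1, x2 = x1 + x2, x2 - 1
    return True

def psi(x1, x2, n):
    # eventual decrease: the guard expression only needs to be positive
    # at the start and before the n-th iteration
    return x1 > 0 and x1_closed(x1, x2, n - 1) > 0

def check(bound=7):
    for x1 in range(-3, bound):
        for x2 in range(-3, bound):
            for n in range(1, bound):
                assert psi(x1, x2, n) == runs_n_times(x1, x2, n)
    return True
```

Since the guard consists of a single inequation, the technique is exact here, which is what the brute-force equivalence confirms.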
Turning Theorem 10 into a conditional acceleration technique is straightforward.
Theorem 11
(Conditional Acceleration via Eventual Decrease) If we have \(\chi (\vec {x}) \equiv \bigwedge _{i=1}^k C_i\) where each clause \(C_i\) contains an inequation \(e_i(\vec {x}) > 0\) such that
then the following conditional acceleration technique is sound:
If \(C_i \equiv e_i > 0\) for all \(i \in [1,k]\), then it is exact.
Proof
For soundness, we need to show
Assume
This implies
In the following, we show
Then, the claim (17) follows, as we have
Let i be arbitrary but fixed, let \(e= e_i\), and let j be the minimal natural number with
We first prove
for all \(m \in [0,j1]\) by backward induction on m. If \(m=j1\), then
For the induction step, note that (15) implies
Then we have
Now we prove
for all \(m \in [j,n1]\) by induction on m. If \(m=j\), then
In the induction step, we have
this finishes the proof of (20) and hence shows (17).
For exactness, assume \(\chi (\vec {x}) :=\bigwedge _{i=1}^k e_i(\vec {x}) > 0\). We have
\(\square \)
Example 9
Consider the following variant of \(\mathcal {T}_{ev\text {-}dec}\)
i.e., we have \(\vec {a} :=\left( {\begin{matrix} x_1 + x_2\\ x_2 - x_3\\ x_3 \end{matrix}}\right) \). Starting with its canonical acceleration problem, we get the derivation shown in Fig. 4, where the second step can be performed via Theorem 11 as
implies
6.2 Acceleration via eventual increase
Still, all (combinations of) techniques presented so far fail for
As in the case of \(\mathcal {T}_{ev\text {-}dec}\), the value of \(x_1\) does not behave monotonically, i.e., \(x_1 > 0\) is neither an invariant nor a converse invariant. However, this time \(x_1\) is eventually increasing, i.e., once \(x_1\) starts to grow, it never decreases again. Thus, in this case it suffices to require that \(x_1\) is positive and (weakly) increasing.
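This eventual increase can be observed directly. The following sketch uses the guard \(x_1 > 0\) and the update \(\vec {a} :=\left( {\begin{matrix} x_1+x_2\\ x_2+1 \end{matrix}}\right) \) that Example 14 states for \(\mathcal {T}_{ev\text {-}inc}\).

```python
# Eventual increase in T_ev-inc: guard x1 > 0, update
# (x1, x2) <- (x1 + x2, x2 + 1), as in Example 14.
# Once x2 reaches 0, x1 can only grow.

x1, x2 = 5, -2
trace = [x1]
for _ in range(10):
    assert x1 > 0               # the guard never fails from this state
    x1, x2 = x1 + x2, x2 + 1
    trace.append(x1)

# x1 first dips (while x2 < 0), then increases weakly forever.
drop = min(range(len(trace)), key=trace.__getitem__)
assert all(trace[i] <= trace[i + 1] for i in range(drop, len(trace) - 1))
```

Starting from \((5, -2)\), the trace of \(x_1\) is \(5, 3, 2, 2, 3, 5, 8, 12, \ldots\): once it stops decreasing, it never decreases again.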
Theorem 12
(Acceleration via Eventual Increase) If \(\varphi (\vec {x}) \equiv \bigwedge _{i=1}^k C_i\) where each clause \(C_i\) contains an inequation \(e_i(\vec {x}) > 0\) such that
then the following acceleration technique is sound:
We prove the more general Theorem 13 later in this section.
Example 10
With Theorem 12, we can accelerate \(\mathcal {T}_{ev\text {-}inc}\) to
as we have
However, Theorem 12 is not exact, as the resulting formula only covers program runs where each \(e_i\) behaves monotonically. So \(\psi _{ev\text {-}inc}\) only covers those runs of \(\mathcal {T}_{ev\text {-}inc}\) where the initial value of \(x_2\) is nonnegative.
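The runs that are lost here are not negligible: some of them are even nonterminating. The following brute-force sketch (guard and update of \(\mathcal {T}_{ev\text {-}inc}\) as in Example 14) exhibits a diverging run with a negative initial value of \(x_2\), which no model of \(\psi _{ev\text {-}inc}\) covers.

```python
# Inexactness of Theorem 12 for T_ev-inc: guard x1 > 0, update
# (x1, x2) <- (x1 + x2, x2 + 1). The accelerated formula only covers
# runs with initial x2 >= 0, yet the loop also admits long runs with x2 < 0.

def steps(x1, x2, limit=10**4):
    """Count loop iterations, capped at `limit`."""
    n = 0
    while x1 > 0 and n < limit:
        x1, x2 = x1 + x2, x2 + 1
        n += 1
    return n

assert steps(100, -5) == 10**4   # hits the cap: x1 dips to 85, then grows forever
assert steps(1, -5) == 1         # but not every state with x2 < 0 diverges
```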
Note that Theorem 12 can also lead to empty underapproximations. For example, Theorem 12 can be used to accelerate \(\mathcal {T}_{exp}\), since the implication
is valid. Then the resulting formula is
which is unsatisfiable. Thus, when implementing Theorem 12 (or its conditional version Theorem 13), one has to check whether the resulting formula is satisfiable to avoid trivial (empty) underapproximations.
Again, turning Theorem 12 into a conditional acceleration technique is straightforward.
Theorem 13
(Conditional Acceleration via Eventual Increase) If we have \(\chi (\vec {x}) \equiv \bigwedge _{i=1}^k C_i\) where each clause \(C_i\) contains an inequation \(e_i(\vec {x}) > 0\) such that
then the following conditional acceleration technique is sound:
Proof
We need to show
Due to \(\vec {x} \longrightarrow _{\langle \check{\varphi }, \vec {a} \rangle }^n \vec {a}^n(\vec {x})\), we have
Let i be arbitrary but fixed and assume
We prove
for all \(0 \le m < n \) by induction on m. Then, we get
and thus \(\chi (\vec {a}^m(\vec {x}))\) for all \(0 \le m < n\) due to (27), and hence, the claim follows. If \(m=0\), then
In the induction step, note that (26) implies
as \(0 \le m < n\). Together with the induction hypothesis (28), we get
By (25), this implies
as desired. \(\square \)
Example 11
Consider the following variant of \(\mathcal {T}_{ev\text {-}inc}\).
So we have \(\vec {a} :=\left( {\begin{matrix} x_1 + x_2\\ x_2 + x_3\\ x_3 \end{matrix}}\right) \). Starting with the canonical acceleration problem, we get the derivation shown in Fig. 5, where the second step can be performed via Theorem 13 as
implies
We also considered versions of Theorems 11 and 13 where the inequations in (15) and (25) are strict, but this did not lead to an improvement in our experiments. Moreover, we experimented with a variant of Theorem 13 that splits the loop under consideration into two consecutive loops, accelerates them independently, and composes the results. While such an approach can accelerate loops like \(\mathcal {T}_{ev\text {-}inc}\) exactly, the impact on our experimental results was minimal. Thus, we postpone an in-depth investigation of this idea to future work.
7 Proving nontermination via loop acceleration
We now aim for proving nontermination.
Definition 7
((Non)Termination) We call a vector \(\vec {x} \in \mathbb {Z}^d\) a witness of nontermination for \(\mathcal {T}_{loop}\) (denoted \(\vec {x} \longrightarrow _{\mathcal {T}_{loop}}^ \infty \bot \)) if
If there is such a witness, then \(\mathcal {T}_{loop}\) is nonterminating.
Otherwise, \(\mathcal {T}_{loop}\) terminates.
To this end, we search for a formula that characterizes a nonempty set of witnesses of nontermination, called a certificate of nontermination.
Definition 8
(Certificate of NonTermination) We call a formula \(\eta \in FO_{QF} (\mathscr {C}(\vec {x}))\) a certificate of nontermination for \(\mathcal {T}_{loop}\) if \(\eta \) is satisfiable and
Clearly, the loops \(\mathcal {T}_{inc}\) and \(\mathcal {T}_{ev\text {-}inc}\) that were used to motivate the acceleration techniques Acceleration via Monotonic Increase (Theorem 2) and Acceleration via Eventual Increase (Theorem 12) are nonterminating: \(\mathcal {T}_{inc}\) diverges for all initial valuations that satisfy its guard \(x > 0\), and \(\mathcal {T}_{ev\text {-}inc}\) diverges if the initial values are sufficiently large, such that \(x_1\) remains positive until \(x_2\) becomes nonnegative and hence \(x_1\) starts to increase.
As we will see in the current section, this is not a coincidence: Unsurprisingly, all loops that can be accelerated with Acceleration via Monotonic Increase or Acceleration via Eventual Increase are nonterminating. More interestingly, the same holds for all loops that can be accelerated using our calculus from Sect. 4, as long as all \({\leadsto }\)steps use one of the conditional versions of the aforementioned acceleration techniques, i.e., Conditional Acceleration via Monotonic Increase (Theorem 8) or Conditional Acceleration via Eventual Increase (Theorem 13). Thus, we obtain a novel, modular technique for proving nontermination of loops \(\mathcal {T}_{loop}\).
Recall that derivations of our calculus from Sect. 4 start with canonical acceleration problems (Definition 4) whose first component is
It relates the values of the variables before evaluating the loop (\(\vec {x}\)) to the values of the variables after evaluating the loop (\(\vec {x}'\)) using the closed form (\(\vec {a}^n\)). However, if we are interested in nonterminating runs, then the values of the variables after evaluating the loop are obviously irrelevant. Hence, attempts to prove nontermination operate on a variation of acceleration problems, which we call nontermination problems.
Definition 9
(NonTermination Problem) A tuple
where \(\psi , \check{\varphi }, \hat{\varphi } \in FO_{QF} (\mathscr {C}(\vec {x}))\) and \(\vec {a}: \mathbb {Z}^d \rightarrow \mathbb {Z}^d\) is a nontermination problem. It is consistent if every model of \(\psi \) is a witness of nontermination for \(\langle \check{\varphi }, \vec {a} \rangle \) and solved if it is consistent and \(\hat{\varphi } \equiv \top \). The canonical nontermination problem of a loop \(\mathcal {T}_{loop}\) is
In particular, this means that the technique presented in the current section can also be applied to loops where \(\vec {a}^n\) cannot be expressed in closed form.
Example 12
The canonical nontermination problem of \(\mathcal {T}_{ev\text {-}inc}\) is
We use a variation of conditional acceleration techniques (Definition 5), which we call conditional nontermination techniques, to simplify the canonical nontermination problem of the analyzed loop.
Definition 10
(Conditional NonTermination Technique) We call a partial function
a conditional nontermination technique if
for all \((\langle \chi , \vec {a} \rangle ,\check{\varphi }) \in {{\,\mathrm{dom}\,}}( nt )\) and all \(\vec {x} \in \mathbb {Z}^d\).
Thus, we obtain the following variation of our calculus from Sect. 4.
Definition 11
(NonTermination Calculus) The relation \({\leadsto _{ nt }}\) on nontermination problems is defined by the rule
where \( nt \) is a conditional nontermination technique.
Like \({\leadsto }\), the relation \({\leadsto _{ nt }}\) preserves consistency. Hence, we obtain the following theorem, which shows that our calculus is indeed suitable for proving nontermination.
Theorem 14
(Correctness of \({\leadsto _{ nt }}\)) If
and \(\psi \) is satisfiable, then \(\psi \) is a certificate of nontermination for \(\mathcal {T}_{loop}\).
Proof
We prove that our calculus preserves consistency, then the claim follows immediately. Assume
where is consistent and
We get
The first step holds since is consistent and the second step holds since \( nt \) is a conditional nontermination technique.
This proves consistency of
\(\square \)
Analogously to wellfoundedness of \({\leadsto }\), wellfoundedness of \({\leadsto _{ nt }}\) is trivial.
Theorem 15
(WellFoundedness of \({\leadsto _{ nt }}\)) The relation \({\leadsto _{ nt }}\) is wellfounded.
It remains to present nontermination techniques that can be used with our novel calculus. We first derive a nontermination technique from Conditional Acceleration via Monotonic Increase (Theorem 8).
Theorem 16
(NonTermination via Monotonic Increase) If
then
is a conditional nontermination technique.
Proof
We need to prove
To this end, it suffices to prove
by the definition of nontermination (Definition 7). Assume
We prove \(\chi (\vec {a}^m(\vec {x}))\) for all \(m \in \mathbb {N}\) by induction on m. If \(m=0\), then the claim follows immediately. For the induction step, note that \(\vec {x} \longrightarrow ^\infty _{\langle \check{\varphi }, \vec {a} \rangle } \bot \) implies \(\vec {x} \longrightarrow ^{m+1}_{\langle \check{\varphi }, \vec {a} \rangle } \vec {a}^{m+1}(\vec {x})\), which in turn implies \(\check{\varphi }(\vec {a}^m(\vec {x}))\). Together with the induction hypothesis \(\chi (\vec {a}^m(\vec {x}))\), the claim follows from the prerequisites of the theorem. \(\square \)
Example 13
The canonical nontermination problem of \(\mathcal {T}_{inc}\) is
Thus, in order to apply \({\leadsto _{ nt }}\) with Theorem 16, the only possible choice for the formula \(\chi \) is \(x>0\). Furthermore, we have \(\check{\varphi } :=\top \) and \(\vec {a} :=(x+1)\). Hence, Theorem 16 is applicable if the implication
is valid, which is clearly the case. Thus, we get
Since the latter nontermination problem is solved and \(x > 0\) is satisfiable, \(x > 0\) is a certificate of nontermination for \(\mathcal {T}_{inc}\) due to Theorem 14.
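Theorem 16's premise and conclusion can be sketched in a few lines of Python for \(\mathcal {T}_{inc}\), with \(\chi \equiv x > 0\) and \(\vec {a} :=(x+1)\) as in Example 13. The brute-force check over a finite box stands in for the symbolic implication check that an implementation would perform with an SMT solver.

```python
# Non-Termination via Monotonic Increase (Theorem 16) for T_inc:
# chi(x) = x > 0, update a(x) = x + 1, check_varphi = True (as in Example 13).

chi = lambda x: x > 0            # candidate certificate of nontermination
a   = lambda x: x + 1            # loop update

# Premise of Theorem 16, restricted to a finite box (an implementation like
# LoAT would discharge the implication chi(x) => chi(a(x)) with an SMT solver):
assert all(chi(a(x)) for x in range(-100, 101) if chi(x))

# Consequence: every model of chi is a witness of nontermination.
# Sanity-check a few witnesses by running many iterations.
for x0 in (1, 7, 42):
    x = x0
    for _ in range(1000):
        assert x > 0             # the guard never fails
        x = a(x)
```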
Clearly, Theorem 16 is only applicable in very simple cases. To prove nontermination of more complex examples, we now derive a conditional nontermination technique from Conditional Acceleration via Eventual Increase (Theorem 13).
Theorem 17
(NonTermination via Eventual Increase) If we have \(\chi (\vec {x}) \equiv \bigwedge _{i=1}^k C_i\) where each clause \(C_i\) contains an inequation \(e_i(\vec {x}) > 0\) such that
then
is a conditional nontermination technique.
Proof
Let \(\chi ' :=\bigwedge _{i=1}^k 0 < e_i(\vec {x}) \le e_i(\vec {a}(\vec {x}))\). We need to prove
Then it suffices to prove
since \(\chi '\) implies \(\chi \). By the prerequisites of the theorem, we have \(\check{\varphi } \wedge \chi '(\vec {x}) \implies \chi '(\vec {a}(\vec {x}))\). Thus, Theorem 16 applies to \(\langle \chi ', \vec {a} \rangle \). Hence, the claim (29) follows. \(\square \)
Example 14
We continue Example 12. To apply \({\leadsto _{ nt }}\) with Theorem 17 to the canonical nontermination problem of \(\mathcal {T}_{ev\text {-}inc}\), the only possible choice for the formula \(\chi \) is \(x_1>0\). Moreover, we again have \(\check{\varphi } :=\top \) and \(\vec {a} :=\left( {\begin{matrix} x_1+x_2\\ x_2+1 \end{matrix}}\right) \). Thus, Theorem 17 is applicable if
is valid. Since we have \(x_1 \le x_1 + x_2 \iff x_2 \ge 0\) and \(x_1 + x_2 \le x_1 + 2 \cdot x_2 + 1 \iff x_2 + 1 \ge 0\), this is clearly the case. Hence, we get
Since \(0 < x_1 \le x_1 + x_2 \equiv x_1 > 0 \wedge x_2 \ge 0\) is satisfiable, \(x_1 > 0 \wedge x_2 \ge 0\) is a certificate of nontermination for \(\mathcal {T}_{ev\text {-}inc}\) due to Theorem 14.
Of course, some nonterminating loops do not behave (eventually) monotonically, as the following example illustrates.
Example 15
Consider the loop
Theorem 16 is inapplicable, since
Furthermore, Theorem 17 is inapplicable, since
However, \(\mathcal {T}_{\textit{fixpoint}}\) has fixpoints, i.e., there are valuations such that \(\vec {x} = \vec {a}(\vec {x})\). Therefore, it can be handled by existing approaches like [22, Thm. 12]. As the following theorem shows, such techniques can also be embedded into our calculus.
Theorem 18
(NonTermination via Fixpoints) For each entity e, let \(\mathcal {V}(e)\) denote the set of variables occurring in e. Moreover, we define
Let \(\chi (\vec {x}) \equiv \bigwedge _{i=1}^k C_i\), and for each \(i \in [1,k]\), assume that \(e_i(\vec {x}) > 0\) is an inequation that occurs in \(C_i\). Then,
is a conditional nontermination technique.
Proof
Let
We need to prove
Then, it suffices to prove
since \(\chi '\) implies \(\chi \). By construction, we have
Thus, Theorem 16 applies to \(\langle \chi ', \vec {a} \rangle \). Hence, the claim (30) follows. \(\square \)
Example 16
We continue Example 15 by showing how to apply Theorem 18 to \(\mathcal {T}_{\textit{fixpoint}}\), i.e., we have \(\chi :=x_1 >0\), \(\check{\varphi } :=\top \), and \(\vec {a} :=\left( {\begin{matrix} x_2\\ x_1 \end{matrix}}\right) \). Thus, we get
So starting with the canonical nontermination problem of \(\mathcal {T}_{\textit{fixpoint}}\), we get
Since the formula \(x_1 > 0 \wedge x_1 = x_2\) is satisfiable, \(x_1 > 0 \wedge x_1 = x_2\) is a certificate of nontermination for \(\mathcal {T}_{\textit{fixpoint}}\) by Theorem 14.
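The certificate can be validated mechanically. The sketch below uses the update \(\vec {a} :=\left( {\begin{matrix} x_2\\ x_1 \end{matrix}}\right) \), the guard \(x_1 > 0\), and the certificate \(x_1 > 0 \wedge x_1 = x_2\) from Examples 15 and 16.

```python
# Non-Termination via Fixpoints for T_fixpoint: the update a swaps the
# variables, so every model of x1 > 0 /\ x1 = x2 is a fixpoint of a that
# satisfies the guard x1 > 0 -- and hence a witness of nontermination.

a = lambda x1, x2: (x2, x1)
certificate = lambda x1, x2: x1 > 0 and x1 == x2

# From every model of the certificate, the guard holds forever.
for x1, x2 in [(1, 1), (5, 5), (123, 123)]:
    assert certificate(x1, x2)
    for _ in range(100):
        assert x1 > 0            # guard
        x1, x2 = a(x1, x2)

# In contrast, x1 is not (eventually) monotonic in general: from (1, 2)
# its value alternates, so Theorems 16 and 17 are inapplicable.
vals = []
x1, x2 = 1, 2
for _ in range(4):
    vals.append(x1)
    x1, x2 = a(x1, x2)
assert vals == [1, 2, 1, 2]
```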
Like the acceleration techniques from Theorems 12 and 13, the nontermination techniques from Theorems 17 and 18 can result in empty underapproximations. Thus, when integrating these techniques into our calculus, one should check the resulting formula for satisfiability after each step to detect fruitless derivations early.
We conclude this section with a more complex example, which shows how the presented nontermination techniques can be combined to find certificates of nontermination.
Example 17
Consider the following loop:
So we have
and
Then, the canonical nontermination problem is
First, our implementation applies Theorem 16 to \(x_1 > 0\) (as \(x_1> 0 \implies 1 > 0\)), resulting in
Next, it applies Theorem 17 to \(x_3 > 0\), which is possible since
is valid. Note that this implication breaks if one removes \(x_1 > 0\) from the premise, i.e., Theorem 17 does not apply to \(x_3 > 0\) without applying Theorem 16 to \(x_1 > 0\) before. This shows that our calculus is more powerful than “the sum” of the underlying nontermination techniques. Hence, we obtain the following nontermination problem:
Here, we simplified
to
Finally, neither Theorem 16 nor Theorem 17 applies to \(x_4 + 1 > 0\), since \(x_4\) does not behave (eventually) monotonically: Its value after n iterations is given by \((-1)^n \cdot x_4^{ init }\), where \(x_4^{ init }\) denotes the initial value of \(x_4\). Thus, we apply Theorem 18 and we get
which is solved. Here, we simplified the subformula \(x_4 + 1 > 0 \wedge x_4 = x_4\) that results from the last acceleration step to \(x_4 = 0\).
This shows that our calculus allows for applying Theorem 18 to loops that do not have a fixpoint. The reason is that it suffices to require that a subset of the program variables remain unchanged, whereas the values of other variables may still change.
Since
is satisfiable, it is a certificate of nontermination due to Theorem 14.
8 Related work
The related work for our paper splits into papers on acceleration and papers on methods specifically designed to prove nontermination. In both cases, one major difference between our approach and the approaches in the literature is that we enable a modular analysis that allows for combining completely unrelated acceleration or nontermination techniques to process a loop in an iterative way and to reuse information obtained by earlier proof steps.
8.1 Related work on acceleration
Acceleration-like techniques are also used in overapproximating settings (see, e.g., [21, 33, 34, 41, 42, 47, 49, 51]), whereas we consider exact and underapproximating loop acceleration techniques. As many related approaches are discussed in Sect. 3, we only mention three more techniques here.
First, [6, 9] present an exact acceleration technique for finite monoid affine transformations (FMATs), i.e., loops with linear arithmetic whose body is of the form \(\vec {x} \leftarrow A\vec {x} + \vec {b}\) where \(\{A^i \mid i \in \mathbb {N}\}\) is finite. For such loops, Presburger Arithmetic is sufficient to construct an equivalent formula \(\psi \), i.e., it can be expressed in a decidable logic. In general, this is clearly not the case for the techniques presented in the current paper (which may even synthesize nonpolynomial closed forms, see \(\mathcal {T}_{exp}\)). As a consequence and in contrast to our technique, this approach cannot handle loops where the values of variables grow superlinearly (i.e., it cannot handle examples like \(\mathcal {T}_{2\text {-}invs}\)). Implementations are available in the tools FAST [3] and Flata [39]. Further theoretical results on linear transformations whose \(n\)-fold closure is definable in (extensions of) Presburger Arithmetic can be found in [7].
Second, [8] shows that octagonal relations can be accelerated exactly. Such relations are defined by a finite conjunction \(\xi \) of inequations of the form \(\pm x \pm y \le c\), \(x,y \in \vec {x} \cup \vec {x}'\), \(c \in \mathbb {Z}\). Then \(\xi \) induces the relation \(\vec {x} \longrightarrow _\xi \vec {x}' \iff \xi (\vec {x}, \vec {x}')\). In [43], it is proven that such relations can even be accelerated in polynomial time. This generalizes earlier results for difference bound constraints [17]. As in the case of FMATs, the resulting formula can be expressed in Presburger Arithmetic. In contrast to the loops considered in the current paper where \(\vec {x}'\) is uniquely determined by \(\vec {x}\), octagonal relations can represent nondeterministic programs. Therefore and due to the restricted form of octagonal relations, the work from [8, 43] is orthogonal to ours.
Third, Albert et al. recently presented a technique to find metering functions via loop specialization, which is automated via Max-SMT solving [1]. This approach could be integrated into our framework via Theorem 9. However, the technique from [1] focuses on multi-path loops, whereas we focus on single-path loops. One of the main reasons for our restriction to single-path loops is that their closed form (Definition 2) can often be computed automatically in practice. In contrast, for multi-path loops, it is less clear how to obtain closed forms.
8.2 Related work on proving nontermination
In the following, we focus on approaches for proving nontermination of programs that operate on (unbounded) integer numbers as data.
Many approaches to proving nontermination are based on finding recurrent sets [36]. A recurrent set describes program configurations from which one can take a step to a configuration that is also in the recurrent set. Thus, a recurrent set that includes an initial configuration implies nontermination of the program. In the setting of this paper, a certificate of nontermination \(\psi (\vec {x})\) for a loop \(\langle \varphi , \vec {a} \rangle \) induces the recurrent set
As long as our calculus is used with the nontermination techniques presented in the current paper only (i.e., Theorems 16–18), it even holds that \(\{\vec {x} \in \mathbb {Z}^d \mid \psi (\vec {x})\}\) is a recurrent set. Conversely, a formula characterizing a nonempty recurrent set of a loop is also a certificate of nontermination. Thus, our calculus could also make use of other nontermination techniques that produce recurrent sets expressed by formulas in \( FO_{QF} (\mathscr {C}(\vec {x}))\).
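For instance, for \(\mathcal {T}_{ev\text {-}inc}\) the certificate \(x_1 > 0 \wedge x_2 \ge 0\) from Example 14 induces a recurrent set. The sketch below checks the recurrence property by brute force on a finite box; an implementation would instead prove the implication \(\psi (\vec {x}) \implies \varphi (\vec {x}) \wedge \psi (\vec {a}(\vec {x}))\) symbolically.

```python
# Recurrent-set property of psi(x1, x2) = x1 > 0 /\ x2 >= 0 for T_ev-inc
# (guard x1 > 0, update a(x1, x2) = (x1 + x2, x2 + 1) as in Example 14):
# from every state in the set, one loop step leads back into the set.

guard = lambda x1, x2: x1 > 0
a     = lambda x1, x2: (x1 + x2, x2 + 1)
psi   = lambda x1, x2: x1 > 0 and x2 >= 0

# Brute-force check on a finite box instead of a symbolic proof.
box = range(-20, 21)
for x1 in box:
    for x2 in box:
        if psi(x1, x2):
            assert guard(x1, x2) and psi(*a(x1, x2))
```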
Recurrent sets are often synthesized by searching for suitable parameter values for template formulas [14, 18, 36, 45, 46, 55] or for safety proofs [16, 35, 54]. In contrast to these search-based techniques, our current techniques use constraint solving only to check implications. As also indicated by our experiments (Sect. 9), this aspect leads to low runtimes and an efficient analysis.
Many proof techniques for proving nontermination of programs [11, 18, 36, 46] work by proving that some loop is nonterminating and (often separately) proving that a witness of nontermination for this loop is reachable from an initial program configuration. This captures lasso-shaped counterexamples to program termination. A lasso consists of a stem of straight-line code (which could be expressed as a single update), followed by a loop with a single update in its body. Techniques for proving nontermination of loops that provide witnesses of nontermination can thus be lifted to techniques for lassos by checking reachability of the found witnesses of nontermination from an initial program configuration. While the presentation in this paper focuses on loops, our implementation in LoAT can also prove that the found certificate of nontermination for the loop is reachable from an initial program configuration. If a loop cannot be proven nonterminating, it can still be possible to accelerate it and use the accelerated loop as part of the stem of a lasso for another loop. In this way, LoAT can analyze programs with more complex control flow than just single loops.
Several techniques for proving aperiodic nontermination, i.e., nontermination of programs that do not have any lasso-shaped counterexamples to termination, have been proposed [11, 14, 16, 35, 45]. By integrating our calculus into a suitable program-analysis framework [22, 27], it can be used to prove aperiodic nontermination as well.
Loop termination was recently proven to be decidable for the subclass of loops in which the guards and updates use only linear arithmetic and the guards are restricted to conjunctions of atoms [25, 40]. Our approach is less restrictive regarding the input loops: we allow for nonlinear guards and updates, and we allow for arbitrary Boolean structure in the guards. In future work, one could investigate the use of such decision procedures as conditional nontermination techniques in our calculus to make them applicable to larger classes of loops. For practical applications to larger programs, it is important to obtain a certificate of nontermination for a loop when proving its nontermination, corresponding to a large, ideally infinite set of witnesses of nontermination. The reason is that some witness of nontermination for the loop must be reachable from an initial program configuration so that the nontermination proof carries over from the loop to the input program. However, the decision procedures in [25, 40] are not optimized to this end: They produce a certificate of eventual nontermination, i.e., a formula that describes initial configurations that give rise to witnesses of nontermination by applying the loop body a finite, but unknown number of times. For example, the most general certificate of nontermination for the loop \(\mathcal {T}_{inc}\) is \(x>0\), whereas the most general certificate of eventual nontermination is \(\top \). The reason is that, for any initial valuation \(c\) of x (where c is a natural number), one obtains a witness of nontermination by applying the body of the loop \(c+1\) times while ignoring the loop condition. The problem of transforming a single witness of eventual nontermination into a witness of nontermination has been solved partially in [37]. 
The problem of transforming certificates of eventual nontermination that describe infinite sets of configurations into certificates of nontermination is, to the best of our knowledge, still open. In contrast, the conditional nontermination techniques presented in Sect. 7 aim to identify a potentially infinite set of witnesses of nontermination.
For a subclass of loops involving nonlinearity and arbitrary Boolean structures, decidability of termination has recently been proven, too [26]. However, the decidability proof from [26] only applies to loops where the variables range over \(\mathbb {R}\). For loops over \(\mathbb {Z}\), termination of such programs is trivially undecidable (due to Hilbert’s tenth problem).
Ben-Amram, Doménech, and Genaim [5] show a connection between the nonexistence of multiphase-linear ranking functions as termination arguments for linear loops and monotonic recurrent sets. A recurrent set
is monotonic if we have \(e_i(\vec {x}) \le e_i(\vec {a}(\vec {x}))\) for all \(i \in [1,m]\) and all \(\vec {x} \in R\). In particular, they propose a procedure that, if it terminates, returns either a multiphase-linear ranking function as a termination proof or a set of program states that could not be proven terminating. If the input loop has a linear update function with only integer coefficients and if the procedure returns a nonempty set of states that includes integer values, this set is a monotonic recurrent set and proves nontermination. This approach is implemented in the iRankFinder tool.
Leike and Heizmann [46] propose a method to find geometric nontermination arguments that allow for expressing the values of the variables after the \(n\)-th loop iteration. In this sense, their approach can also be seen as a use of loop acceleration to express a nontermination argument. This approach is implemented in the Ultimate tool. While our technique for loop acceleration also relies on closed forms, our technique for proving nontermination does not need closed forms. Hence, our approach also applies to loops where closed forms cannot be computed, or contain subexpressions that make further analyses difficult, like the imaginary unit.
Finally, Frohn and Giesl [22] have already used loop acceleration for proving nontermination. However, they use loop acceleration (more specifically, Theorem 3) solely for proving reachability of nonterminating loops. To prove nontermination of loops, they used unconditional, standalone versions of Theorems 18 and 16. So their approach does not allow for combining different acceleration or nontermination techniques when analyzing loops. However, they already exploited similarities between nontermination proving and loop acceleration: Both their loop acceleration technique (Theorem 3) and their main nontermination technique (Theorem 16) are based on certain monotonicity properties of the loop condition. Starting from this observation, they developed a technique for deducing invariants that may be helpful for both proving nontermination and accelerating loops. This idea is orthogonal to our approach, which could, of course, be combined with techniques for invariant inference.
9 Implementation and experiments
We implemented our approach in our open-source Loop Acceleration Tool LoAT [22, 27]:
https://aprove-developers.github.io/LoAT
It uses Z3 [20] and Yices2 [19] to check implications and PURRS [2] to compute closed forms. While LoAT can synthesize formulas with nonpolynomial arithmetic, it cannot yet parse them, i.e., the input is restricted to polynomials. Moreover, LoAT does not yet support disjunctive loop conditions.
To evaluate our approach, we extracted 1511 loops with conjunctive guards from the category Termination of Integer Transition Systems of the Termination Problems Database [53], the benchmark collection which is used at the annual Termination and Complexity Competition [31], as follows:
1. We parsed all examples with LoAT and extracted each single-path loop with a conjunctive guard (resulting in 3829 benchmarks).
2. We removed duplicates by checking syntactic equality (resulting in 2825 benchmarks).
3. We removed loops whose runtime is trivially constant using an incomplete check (resulting in 1733 benchmarks).
4. We removed loops which do not admit any terminating runs, i.e., loops where Theorem 2 applies (resulting in 1511 benchmarks).
All tests have been run on StarExec [52] (Intel Xeon E5-2609, 2.40GHz, 264GB RAM [50]). For our benchmark collection, more details about the results of our evaluation, and a precompiled binary (Linux, 64 bit), we refer to [24].
9.1 Loop acceleration
For technical reasons, the closed forms computed by LoAT are valid only if \(n>0\), whereas Definition 2 requires them to be valid for all \(n \in \mathbb {N}\). The reason is that PURRS has only limited support for initial conditions. Thus, LoAT’s results are correct only for all \(n>1\) (instead of all \(n > 0\)). Moreover, LoAT can currently compute closed forms only if the loop body is triangular, meaning that each \(a_i\) is an expression over \(x_1,\ldots ,x_i\). The reason is that PURRS cannot solve systems of recurrence equations, but only a single recurrence equation at a time. While systems of recurrence equations can be transformed into a single recurrence equation of higher order, LoAT does not yet implement this transformation. However, LoAT failed to compute closed forms for just 26 out of 1511 loops in our experiments, i.e., this appears to be a minor restriction in practice. Furthermore, the implementation of our calculus does not use conditional acceleration via metering functions. The reason is that we are not aware of examples where our monotonicity-based acceleration techniques fail, but our technique for finding metering functions (based on Farkas’ Lemma) may succeed.
Apart from these differences, our implementation closely follows the current paper. It applies the conditional acceleration techniques from Sects. 5 and 6 with the following priorities: Theorem 8 > Theorem 7 > Theorem 11 > Theorem 13.
We compared our implementation with LoAT’s implementation of acceleration via monotonicity (Theorem 3, [22]) and its implementation of acceleration via metering functions (Theorem 4, [28]), which also incorporates the improvements proposed in [27]. We did not include the techniques from Theorems 1 and 2 in our evaluation, as they are subsumed by acceleration via monotonicity.
Furthermore, we compared with Flata [39], which implements the techniques to accelerate FMATs and octagonal relations discussed in Sect. 8. To this end, we used a straightforward transformation from LoAT’s native input format^{Footnote 4} (which is also used in the category Complexity of Integer Transition Systems of the Termination and Complexity Competition) to Flata’s input format. Note that our benchmark collection contains 16 loops with nonlinear arithmetic where Flata is bound to fail, since it supports only linear arithmetic. We did not compare with FAST [3], which uses a similar approach as the more recent tool Flata. We used a wall clock timeout of 60 s per example and a memory limit of 128GB for each tool.
The results are shown in Table 1, where the information regarding the runtime includes all examples, not just solved examples. They show that our novel calculus was superior to the competing techniques in our experiments. In all but 7 cases where our calculus successfully accelerated the given loop, the resulting formula was polynomial. Thus, integrating our approach into existing accelerationbased verification techniques should not present major obstacles w.r.t. automation.
LoAT:
Monot.: Acceleration via Monotonicity, Theorem 3
Meter: Acceleration via Metering Functions, Theorem 4
Flata:
Exact: Examples that were accelerated exactly
Approx: Examples that were accelerated approximately
Fail: Examples that could not be accelerated
Avg rt: Average wall clock runtime
Median rt: Median wall clock runtime
SD rt: Standard deviation of wall clock runtime
Furthermore, we evaluated the impact of our new acceleration techniques from Sect. 6 independently. To this end, we ran experiments with three configurations where we disabled acceleration via eventual increase, acceleration via eventual decrease, and both of them. The results are shown in Table 2. They show that our calculus does not improve over acceleration via monotonicity if both acceleration via eventual increase and acceleration via eventual decrease are disabled (i.e., our benchmark collection does not contain examples like \(\mathcal {T}_{2\text {-}c\text {-}invs}\)). However, enabling either acceleration via eventual decrease or acceleration via eventual increase resulted in a significant improvement. Interestingly, there are many examples that can be accelerated with either of these two techniques: When both of them were enabled, LoAT (exactly or approximately) accelerated 1482 loops. When only one of them was enabled, it accelerated 1444 and 1338 loops, respectively. But when none of them was enabled, it accelerated only 845 loops. We believe that this is due to examples like
where Theorems 11 and 13 are applicable (since \(x_1 \le x_2\) implies \(x_2 \le x_2\) and \(x_1 \ge x_2\) implies \(x_2 \ge x_2\)).
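To illustrate, consider a hypothetical loop of this shape (not one of the actual benchmarks): the guard is \(x_1 > 0\) and the update sets \(x_1\) to \(x_2\) while leaving \(x_2\) unchanged. After one iteration, \(x_1\) and \(x_2\) coincide, so the guard expression is both eventually non-decreasing and eventually non-increasing, and both techniques apply. A small simulation makes the assumed acceleration result explicit:

```python
def run(x1, x2, max_steps=100):
    """Simulate the hypothetical loop `while x1 > 0: x1 = x2` and
    return the number of iterations performed (capped at max_steps)."""
    steps = 0
    while x1 > 0 and steps < max_steps:
        x1 = x2  # x2 is never modified, so after one step x1 == x2 forever
        steps += 1
    return steps

# After the first iteration the guard value is constant (x1 == x2), so the
# loop either stops after at most one iteration or runs forever. Hence an
# (assumed) exact acceleration for n >= 2 iterations is x1 > 0 and x2 > 0.
def accelerated(x1, x2, n):
    assert n >= 2
    return x1 > 0 and x2 > 0
```

The condition `x1 > 0 and x2 > 0` mirrors what acceleration via eventual increase or eventual decrease would produce for a loop of this shape.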
Flata exactly accelerated 49 loops where LoAT failed or approximated and LoAT exactly accelerated 262 loops where Flata failed. So there were only 18 loops where both Flata and the full version of our calculus failed to compute an exact result. Among them were the only 3 examples where our implementation found a closed form, but failed anyway. One of them was^{Footnote 5}
Here, the updated value of \(x_1\) depends on \(x_1\), the update of \(x_2\) depends on \(x_1\) and \(x_2\), and the update of \(x_3\) depends on \(x_2\) and \(x_3\). Hence, the closed form of \(x_1\) is linear, the closed form of \(x_2\) is quadratic, and the closed form of \(x_3\) is cubic:
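The benchmark’s concrete updates and closed forms are not reproduced above; as an illustration of the degree pattern just described, assume the simultaneous updates \(x_1 \leftarrow x_1 + d\), \(x_2 \leftarrow x_2 + x_1\), \(x_3 \leftarrow x_3 + x_2\) for some constant \(d\). The following sketch states the resulting linear, quadratic, and cubic closed forms and checks them against direct simulation:

```python
def simulate(x1, x2, x3, d, n):
    """Apply the assumed simultaneous updates n times."""
    for _ in range(n):
        x1, x2, x3 = x1 + d, x2 + x1, x3 + x2
    return x1, x2, x3

def closed_form(x1, x2, x3, d, n):
    # x1^(n) = x1 + d*n                          (degree 1)
    # x2^(n) = x2 + n*x1 + d*C(n,2)              (degree 2)
    # x3^(n) = x3 + n*x2 + C(n,2)*x1 + d*C(n,3)  (degree 3)
    c2 = n * (n - 1) // 2          # binomial coefficient C(n,2)
    c3 = n * (n - 1) * (n - 2) // 6  # binomial coefficient C(n,3)
    return (x1 + d * n,
            x2 + n * x1 + d * c2,
            x3 + n * x2 + c2 * x1 + d * c3)
```

Depending on the signs of \(d\), \(x_1\), and \(x_2\), such a cubic closed form can indeed change its monotonicity twice, which is the source of the difficulty discussed next.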
So when \(x_1\), \(x_2\), and \(x_3\) have been fixed, \(x_3^{(n)}\) has up to 2 extrema, i.e., its monotonicity may change twice. However, our techniques based on eventual monotonicity require that the respective expressions behave monotonically once they start to decrease or increase, so these techniques allow only one change of monotonicity.
This raises the question of whether our approach can accelerate every loop with a conjunctive guard and linear arithmetic whose closed form is a vector of (at most) quadratic polynomials with rational coefficients. We leave this question to future work.
9.2 Nontermination
To prove nontermination, our implementation applies the conditional nontermination techniques from Sect. 7 with the following priorities: Theorem 16 > Theorem 17 > Theorem 18. To evaluate our approach, we compared it with several leading tools for proving nontermination of integer programs: AProVE [30], iRankFinder [5], RevTerm [14], Ultimate [15], and VeryMax [45]. Note that AProVE uses, among other techniques, the tool T2 [13] as a backend for proving nontermination, so we refrained from including T2 separately in our evaluation.
To compare with AProVE, RevTerm, and Ultimate, we transformed all examples into the format which is used in the category Termination of C Integer Programs of the Termination and Complexity Competition.^{Footnote 6} For iRankFinder and VeryMax, we transformed the examples into the format from the category Termination of Integer Transition Systems of the Termination and Complexity Competition [12]. The latter format is also supported by LoAT, so besides iRankFinder and VeryMax, we also used it to evaluate LoAT.
For the tools iRankFinder, Ultimate, and VeryMax, we used the versions of their last participations in the Termination and Complexity Competition (2019 for VeryMax and 2021 for iRankFinder and Ultimate), as suggested by the developers. For AProVE, the developers provided an up-to-date version. For RevTerm, we followed the build instructions from [32] and sequentially executed the following command lines, as suggested by the developers:
RevTerm.sh prog.clinear part1 mathsat 2 1
RevTerm.sh prog.clinear part2 mathsat 2 1
It is important to note that all tools but RevTerm and LoAT also try to prove termination. Thus, a comparison between the runtimes of LoAT and those other tools is of limited significance. Therefore, we also compared LoAT with configurations of the tools that only try to prove nontermination. For AProVE, such a configuration was kindly provided by its developers (named AProVE NT in Table 3). In the case of iRankFinder, the excellent documentation allowed us to easily build such a configuration ourselves (named iRankFinder NT in Table 3). In the case of Ultimate, its developers pointed out that a comparison w.r.t. runtime is meaningless, as it is dominated by Ultimate’s startup time of \({\sim } 10\) s on small examples. For VeryMax, it is not possible to disable termination proving, according to its authors.
We again used a wall clock timeout of 60 s and a memory limit of 128 GB for each tool. For RevTerm, we used a timeout of 30 s for the first invocation, and the remaining time for the second invocation.
The results are shown in Table 3. They show that our novel calculus is competitive with stateoftheart tools. Both iRankFinder and Ultimate can prove nontermination of precisely the same 205 examples. LoAT can prove nontermination of these examples, too. In addition, it solves one benchmark that cannot be handled by any other tool:^{Footnote 7}
Most likely, the other tools fail for this example due to the presence of nonlinear arithmetic. Our calculus from Sect. 7 just needs to check implications, so as long as the underlying SMT solver supports nonlinearity, it can be applied to nonlinear examples, too. It is worth mentioning that LoAT subsumes all other tools w.r.t. proving nontermination. There are only 4 examples where none of the tools can prove termination or nontermination. Termination of one of these examples can be proven by an experimental, unpublished module of LoAT, which is inspired by the calculi presented in this paper. A manual investigation revealed that the 3 remaining examples are terminating, too.
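The benchmark itself is not shown here, but the underlying check is easy to sketch. For a hypothetical nonlinear loop \(\textbf{while}\ x \cdot x \ge 2\ \textbf{do}\ x \leftarrow x \cdot x\), a candidate recurrent set \(S\) must satisfy the implication \(S(\vec {x}) \implies \varphi (\vec {x}) \wedge S(\vec {a}(\vec {x}))\). Our calculus discharges such implications with an SMT solver; the sketch below approximates the check by exhaustive testing on a finite range instead:

```python
# Hypothetical nonlinear loop:  while x*x >= 2:  x = x*x
guard  = lambda x: x * x >= 2
update = lambda x: x * x

# Candidate recurrent set S(x): x >= 2.
S = lambda x: x >= 2

# A recurrent set must satisfy  S(x) ==> guard(x) and S(update(x)).
# Sect. 7 discharges this implication via SMT; here we approximate
# the check by exhaustive testing on a finite range.
def is_recurrent_on(candidates):
    return all(guard(x) and S(update(x)) for x in candidates if S(x))

assert is_recurrent_on(range(-1000, 1000))

# Any witness in S yields a nonterminating run: 2, 4, 16, 256, ...
x = 2
for _ in range(5):
    assert guard(x)
    x = update(x)
```

As long as the implication can be discharged, the nonlinearity of the guard and update poses no additional difficulty, which matches the observation above.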
To investigate the impact of different nontermination techniques, we also tested configurations where one of the nontermination techniques from Theorems 16 and 18 was disabled, respectively. The results are shown in Table 4. First, note that disabling Theorem 16 does not reduce LoAT’s power. The reason is that if the left-hand side p of an inequation \(p>0\) is monotonically increasing (such that Theorem 16 applies), then it is also eventually monotonically increasing (such that Theorem 17 applies). However, since Theorem 16 yields simpler certificates of nontermination than Theorem 17, LoAT still uses both techniques. Interestingly, Theorems 17 and 18 are almost equally powerful: Without Theorem 17, LoAT still proves nontermination of 203 examples, and without Theorem 18, LoAT solves 205 examples. Presumably, the reason is again due to examples like (31), where Theorem 17 finds the recurrent set \(x_2 > 0\) and Theorem 18 finds the recurrent set \(x_1 > 0 \wedge x_1 = x_2\). So even though both nontermination techniques are applicable in such cases, the recurrent set deduced via Theorem 17 is clearly more general and thus preferable in practice. Note that LoAT cannot solve a single example when both Theorems 17 and 18 are disabled (see Table 4). Then, the only remaining nontermination technique is Nontermination via Monotonic Increase. Examples where this technique suffices to prove nontermination trivially diverge whenever the loop condition is satisfied, and hence they were filtered from our benchmark set (as explained at the beginning of Sect. 9).
LoAT:  Calculus from Section 7 
AProVE NT:  Configuration of AProVE that does not try to prove termination 
iRankFinder NT:  Configuration of iRankFinder that does not try to prove termination 
Ultimate:  https://monteverdi.informatik.uni-freiburg.de/tomcat/Website 
NO:  Number of nontermination proofs 
YES:  Number of termination proofs 
Fail:  Number of examples where (non)termination could not be proven 
Avg rt:  Average wall clock runtime 
Avg rt NO:  Average wall clock runtime when nontermination was proven 
Median rt:  Median wall clock runtime 
Median rt NO:  Median wall clock runtime when nontermination was proven 
SD rt:  Standard deviation of wall clock runtime 
SD rt NO:  Standard deviation of wall clock runtime when nontermination was proven 
Regarding the runtime, we think that LoAT is faster than the competing tools because the technique presented in Sect. 7 requires very little search, whereas many other nontermination techniques are heavily search-based (e.g., due to the use of templates, as exercised by RevTerm). In our setting, the inequations that eventually constitute a certificate of nontermination arise immediately from the given loop. In this regard, iRankFinder’s approach for proving nontermination is similar to ours, as it also requires little search. This is also reflected in our experiments, where iRankFinder is the second fastest tool.
It should also be taken into account that iRankFinder is implemented in Python, AProVE, RevTerm, and Ultimate are implemented in Java, and LoAT and VeryMax are implemented in C++. Thus, the difference in runtime is in part due to the different performance characteristics of the respective programming language implementations.
Another interesting aspect of our evaluation is the result of RevTerm, which outperformed all other tools in the evaluation of [14]. The reason for this discrepancy is the following: In [14], 300 different configurations of RevTerm were tested, and a benchmark was considered solved if at least one of these configurations was able to prove nontermination. In contrast, we ran two configurations of RevTerm, one for each of the two nontermination checks proposed in [14]. So essentially, RevTerm’s results from [14] correspond to a highly parallel setting, whereas the results from our evaluation correspond to a sequential setting.
10 Conclusion and future work
We discussed existing acceleration techniques (Sect. 3) and presented a calculus to combine acceleration techniques modularly (Sect. 4). Then, we showed how to combine existing (Sect. 5) and two novel (Sect. 6) acceleration techniques with our calculus. This improves over prior approaches, where acceleration techniques were used independently, and may thus improve acceleration-based verification techniques [8, 9, 22, 27, 29, 44] in the future. An empirical evaluation (Sect. 9.1) shows that our approach is more powerful than state-of-the-art acceleration techniques. Moreover, if it is able to accelerate a loop, then the result is exact (instead of just an underapproximation) in most cases. Thus, our calculus can be used for underapproximating techniques (e.g., to find bugs or counterexamples) as well as in overapproximating settings (e.g., to prove safety or termination).
Furthermore, we showed how our calculus from Sect. 4 can be adapted for proving nontermination in Sect. 7, where we also presented three nontermination techniques that can be combined with our novel calculus. While two of them (Theorems 16 and 18) are straightforward adaptations of existing nontermination techniques to our modular setting, the third one (Theorem 17) is, to the best of our knowledge, new and might also be of interest independently from our calculus.
Actually, the two calculi presented in this paper are so similar that they do not require separate implementations. In our tool LoAT, both of them are implemented together, such that we can handle loops uniformly: If our implementation of the calculi yields a certificate of nontermination, then it suffices to prove reachability of one of the corresponding witnesses of nontermination from an initial program state afterwards to finish the proof of nontermination. If our implementation of the calculi successfully accelerates the loop under consideration, this may help to prove reachability of other, potentially nonterminating configurations later on. If our implementation of the calculi fails, then LoAT continues its search with other program paths. The success of this strategy is demonstrated at the annual Termination and Complexity Competition, where LoAT has been the most powerful tool for proving nontermination of Integer Transition Systems since its first participation in 2020.
Regarding future work, we are actively working on support for disjunctive loop conditions. Moreover, our experiments indicate that integrating specialized techniques for FMATs (see Sect. 8) would improve the power of our approach for loop acceleration, as Flata exactly accelerated 49 loops where LoAT failed to do so (see Sect. 9). Furthermore, we plan to extend our approach to richer classes of loops, e.g., loops operating on both integers and arrays, nondeterministic loops, or loops operating on bitvectors (as opposed to mathematical integers).
Notes
i.e., the (anonymous) function that maps \(\vec {x}\) to \(\vec {a}\).
While there are also overapproximating acceleration techniques (see Sect. 8.1), in this paper we are interested only in underapproximations.
We call a formula \(\delta \) a loop invariant of a loop \(\mathcal {T}_{loop}\) if \(\varphi (\vec {x}) \wedge \delta (\vec {x}) \implies \delta (\vec {a}(\vec {x}))\) is valid.
The other two are structurally similar, but more complex.
1567523105272726.koat.smt2
References
1. Albert, E., Genaim, S., Martín-Martín, E., Merayo, A., Rubio, A.: Lower-bound synthesis using loop specialization and Max-SMT. In: CAV ’21, pp. 863–886. LNCS 12760 (2021). https://doi.org/10.1007/978-3-030-81688-9_40
2. Bagnara, R., Pescetti, A., Zaccagnini, A., Zaffanella, E.: PURRS: towards computer algebra support for fully automatic worst-case complexity analysis (2005). https://arxiv.org/abs/cs/0512056 [cs.MS]
3. Bardin, S., Finkel, A., Leroux, J., Petrucci, L.: FAST: acceleration from theory to practice. Int. J. Softw. Tools Technol. Transf. 10(5), 401–424 (2008). https://doi.org/10.1007/s10009-008-0064-3
4. Bardin, S., Finkel, A., Leroux, J., Schnoebelen, P.: Flat acceleration in symbolic model checking. In: ATVA ’05, pp. 474–488. LNCS 3707 (2005). https://doi.org/10.1007/11562948_35
5. Ben-Amram, A.M., Doménech, J.J., Genaim, S.: Multiphase-linear ranking functions and their relation to recurrent sets. In: SAS ’19, pp. 459–480. LNCS 11822 (2019). https://doi.org/10.1007/978-3-030-32304-2_22
6. Boigelot, B.: Symbolic methods for exploring infinite state spaces. Ph.D. thesis, Université de Liège (1999). https://orbi.uliege.be/bitstream/2268/74874/1/Boigelot98.pdf
7. Boigelot, B.: On iterating linear transformations over recognizable sets of integers. Theor. Comput. Sci. 309(1–3), 413–468 (2003). https://doi.org/10.1016/S0304-3975(03)00314-1
8. Bozga, M., Gîrlea, C., Iosif, R.: Iterating octagons. In: TACAS ’09, pp. 337–351. LNCS 5505 (2009). https://doi.org/10.1007/978-3-642-00768-2_29
9. Bozga, M., Iosif, R., Konečný, F.: Fast acceleration of ultimately periodic relations. In: CAV ’10, pp. 227–242. LNCS 6174 (2010). https://doi.org/10.1007/978-3-642-14295-6_23
10. Bozga, M., Iosif, R., Konečný, F.: Deciding conditional termination. Log. Methods Comput. Sci. 10(3) (2014). https://doi.org/10.2168/LMCS-10(3:8)2014
11. Brockschmidt, M., Ströder, T., Otto, C., Giesl, J.: Automated detection of nontermination and NullPointerExceptions for Java Bytecode. In: FoVeOOS ’11, pp. 123–141. LNCS 7421 (2012). https://doi.org/10.1007/978-3-642-31762-0_9
12. Brockschmidt, M., Rybalchenko, A.: TermComp proposal: pushdown systems as a model for programs with procedures (2014). https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/SMTPushdownPrograms.pdf
13. Brockschmidt, M., Cook, B., Ishtiaq, S., Khlaaf, H., Piterman, N.: T2: temporal property verification. In: TACAS ’16, pp. 387–393. LNCS 9636 (2016). https://doi.org/10.1007/978-3-662-49674-9_22
14. Chatterjee, K., Goharshady, E.K., Novotný, P., Zikelic, D.: Proving nontermination by program reversal. In: PLDI ’21, pp. 1033–1048 (2021). https://doi.org/10.1145/3453483.3454093
15. Chen, Y., Heizmann, M., Lengál, O., Li, Y., Tsai, M., Turrini, A., Zhang, L.: Advanced automata-based algorithms for program termination checking. In: PLDI ’18, pp. 135–150 (2018). https://doi.org/10.1145/3192366.3192405
16. Chen, H., Cook, B., Fuhs, C., Nimkar, K., O’Hearn, P.W.: Proving nontermination via safety. In: TACAS ’14, pp. 156–171. LNCS 8413 (2014). https://doi.org/10.1007/978-3-642-54862-8_11
17. Comon, H., Jurski, Y.: Multiple counters automata, safety analysis and Presburger arithmetic. In: CAV ’98, pp. 268–279. LNCS 1427 (1998). https://doi.org/10.1007/BFb0028751
18. Cook, B., Fuhs, C., Nimkar, K., O’Hearn, P.W.: Disproving termination with overapproximation. In: FMCAD ’14, pp. 67–74 (2014). https://doi.org/10.1109/FMCAD.2014.6987597
19. Dutertre, B.: Yices 2.2. In: CAV ’14, pp. 737–744. LNCS 8559 (2014). https://doi.org/10.1007/978-3-319-08867-9_49
20. de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: TACAS ’08, pp. 337–340. LNCS 4963 (2008). https://doi.org/10.1007/978-3-540-78800-3_24
21. Farzan, A., Kincaid, Z.: Compositional recurrence analysis. In: FMCAD ’15, pp. 57–64 (2015). https://doi.org/10.1109/FMCAD.2015.7542253
22. Frohn, F., Giesl, J.: Proving nontermination via loop acceleration. In: FMCAD ’19, pp. 221–230 (2019). https://doi.org/10.23919/FMCAD.2019.8894271
23. Frohn, F.: A calculus for modular loop acceleration. In: TACAS ’20, pp. 58–76. LNCS 12078 (2020). https://doi.org/10.1007/978-3-030-45190-5_4
24. Frohn, F., Fuhs, C.: Empirical evaluation of “A calculus for modular loop acceleration (and nontermination proofs)” (2022). https://ffrohn.github.io/accelerationcalculus
25. Frohn, F., Giesl, J.: Termination of triangular integer loops is decidable. In: CAV ’19, pp. 426–444. LNCS 11562 (2019). https://doi.org/10.1007/978-3-030-25543-5_24
26. Frohn, F., Hark, M., Giesl, J.: Termination of polynomial loops. In: SAS ’20, pp. 89–112. LNCS 12389 (2020). https://doi.org/10.1007/978-3-030-65474-0_5
27. Frohn, F., Naaf, M., Brockschmidt, M., Giesl, J.: Inferring lower runtime bounds for integer programs. ACM Trans. Program. Lang. Syst. 42(3), 13:1–13:50 (2020). https://doi.org/10.1145/3410331
28. Frohn, F., Naaf, M., Hensel, J., Brockschmidt, M., Giesl, J.: Lower runtime bounds for integer programs. In: IJCAR ’16, pp. 550–567. LNCS 9706 (2016). https://doi.org/10.1007/978-3-319-40229-1_37
29. Ganty, P., Iosif, R., Konečný, F.: Underapproximation of procedure summaries for integer programs. Int. J. Softw. Tools Technol. Transf. 19(5), 565–584 (2017). https://doi.org/10.1007/s10009-016-0420-7
30. Giesl, J., Aschermann, C., Brockschmidt, M., Emmes, F., Frohn, F., Fuhs, C., Hensel, J., Otto, C., Plücker, M., Schneider-Kamp, P., Ströder, T., Swiderski, S., Thiemann, R.: Analyzing program termination and complexity automatically with AProVE. J. Autom. Reason. 58(1), 3–31 (2017). https://doi.org/10.1007/s10817-016-9388-y
31. Giesl, J., Rubio, A., Sternagel, C., Waldmann, J., Yamada, A.: The termination and complexity competition. In: TACAS ’19, pp. 156–166. LNCS 11429 (2019). https://doi.org/10.1007/978-3-030-17502-3_10
32. Goharshady, E.K.: RevTerm on GitHub (2021). https://github.com/ekgma/RevTerm
33. Gonnord, L., Halbwachs, N.: Combining widening and acceleration in linear relation analysis. In: SAS ’06, pp. 144–160. LNCS 4134 (2006). https://doi.org/10.1007/11823230_10
34. Gonnord, L., Schrammel, P.: Abstract acceleration in linear relation analysis. Sci. Comput. Program. 93, 125–153 (2014). https://doi.org/10.1016/j.scico.2013.09.016
35. Gulwani, S., Srivastava, S., Venkatesan, R.: Program analysis as constraint solving. In: PLDI ’08, pp. 281–292 (2008). https://doi.org/10.1145/1375581.1375616
36. Gupta, A., Henzinger, T.A., Majumdar, R., Rybalchenko, A., Xu, R.: Proving nontermination. In: POPL ’08, pp. 147–158 (2008). https://doi.org/10.1145/1328438.1328459
37. Hark, M., Frohn, F., Giesl, J.: Polynomial loops: beyond termination. In: LPAR ’20, pp. 279–297. EPiC Series in Computing 73 (2020). https://doi.org/10.29007/nxv1
38. Hojjat, H., Iosif, R., Konečný, F., Kuncak, V., Rümmer, P.: Accelerating interpolants. In: ATVA ’12, pp. 187–202. LNCS 7561 (2012). https://doi.org/10.1007/978-3-642-33386-6_16
39. Hojjat, H., Konečný, F., Garnier, F., Iosif, R., Kuncak, V., Rümmer, P.: A verification toolkit for numerical transition systems – tool paper. In: FM ’12, pp. 247–251. LNCS 7436 (2012). https://doi.org/10.1007/978-3-642-32759-9_21
40. Hosseini, M., Ouaknine, J., Worrell, J.: Termination of linear loops over the integers. In: ICALP ’19, pp. 118:1–118:13. LIPIcs 132 (2019). https://doi.org/10.4230/LIPIcs.ICALP.2019.118
41. Jeannet, B., Schrammel, P., Sankaranarayanan, S.: Abstract acceleration of general linear loops. In: POPL ’14, pp. 529–540 (2014). https://doi.org/10.1145/2535838.2535843
42. Kincaid, Z., Breck, J., Boroujeni, A.F., Reps, T.W.: Compositional recurrence analysis revisited. In: PLDI ’17, pp. 248–262 (2017). https://doi.org/10.1145/3062341.3062373
43. Konečný, F.: PTIME computation of transitive closures of octagonal relations. In: TACAS ’16, pp. 645–661. LNCS 9636 (2016). https://doi.org/10.1007/978-3-662-49674-9_42
44. Kroening, D., Lewis, M., Weissenbacher, G.: Underapproximating loops in C programs for fast counterexample detection. Formal Methods Syst. Des. 47(1), 75–92 (2015). https://doi.org/10.1007/s10703-015-0228-1
45. Larraz, D., Nimkar, K., Oliveras, A., Rodríguez-Carbonell, E., Rubio, A.: Proving nontermination using Max-SMT. In: CAV ’14, pp. 779–796. LNCS 8559 (2014). https://doi.org/10.1007/978-3-319-08867-9_52
46. Leike, J., Heizmann, M.: Geometric nontermination arguments. In: TACAS ’18, pp. 266–283. LNCS 10806 (2018). https://doi.org/10.1007/978-3-319-89963-3_16
47. Madhukar, K., Wachter, B., Kroening, D., Lewis, M., Srivas, M.K.: Accelerating invariant generation. In: FMCAD ’15, pp. 105–111 (2015). https://doi.org/10.1109/FMCAD.2015.7542259
48. Ouaknine, J., Pinto, J.S., Worrell, J.: On termination of integer linear loops. In: SODA ’15, pp. 957–969 (2015). https://doi.org/10.1137/1.9781611973730.65
49. Silverman, J., Kincaid, Z.: Loop summarization with rational vector addition systems. In: CAV ’19, pp. 97–115. LNCS 11562 (2019). https://doi.org/10.1007/978-3-030-25543-5_7
50. StarExec hardware specifications (2022). https://www.starexec.org/starexec/public/machinespecs.txt
51. Strejcek, J., Trtík, M.: Abstracting path conditions. In: ISSTA ’12, pp. 155–165 (2012). https://doi.org/10.1145/2338965.2336772
52. Stump, A., Sutcliffe, G., Tinelli, C.: StarExec: a cross-community infrastructure for logic solving. In: IJCAR ’14, pp. 367–373. LNCS 8562 (2014). https://doi.org/10.1007/978-3-319-08587-6_28
53. Termination problems data base (TPDB). http://termination-portal.org/wiki/TPDB
54. Urban, C., Gurfinkel, A., Kahsai, T.: Synthesizing ranking functions from bits and pieces. In: TACAS ’16, pp. 54–70. LNCS 9636 (2016). https://doi.org/10.1007/978-3-662-49674-9_4
55. Velroyen, H., Rümmer, P.: Nontermination checking for imperative programs. In: TAP ’08, pp. 154–170. LNCS 4966 (2008). https://doi.org/10.1007/978-3-540-79124-9_11
Acknowledgements
We thank Marcel Hark, Sophie Tourret, and the anonymous reviewers for helpful feedback and comments. Moreover, we thank Jera Hensel for her help with AProVE, Radu Iosif and Filip Konečný for their help with Flata, Samir Genaim for his help with iRankFinder, Matthias Heizmann for his help with Ultimate, Ehsan Goharshady for his help with RevTerm, and Albert Rubio for his help with VeryMax. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 235950644 (Project GI 274/6-2), and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 389792660 as part of TRR 248 (see https://perspicuous-computing.science).
Funding
Open Access funding enabled and organized by Projekt DEAL.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Frohn, F., Fuhs, C. A calculus for modular loop acceleration and nontermination proofs. Int J Softw Tools Technol Transfer 24, 691–715 (2022). https://doi.org/10.1007/s10009022006702