Geometric Nontermination Arguments

We present a new kind of nontermination argument, called geometric nontermination argument. A geometric nontermination argument is a finite representation of an infinite execution that has the form of a pointwise sum of several geometric series. For so-called linear lasso programs we can decide the existence of a geometric nontermination argument using a nonlinear algebraic $\exists$-constraint. We show that a deterministic conjunctive loop program with nonnegative eigenvalues is nonterminating if and only if there exists a geometric nontermination argument. Furthermore, we present an evaluation that demonstrates that our method is feasible in practice.


Introduction
The problem whether a program is terminating is undecidable in general. One way to approach this problem in practice is to analyze the existence of termination arguments and of nontermination arguments. The existence of a certain kind of termination argument, e.g., a linear ranking function, is decidable [28,3] and implies termination. However, if we cannot find a linear ranking function, we cannot conclude nontermination. Vice versa, the existence of a certain kind of nontermination argument, e.g., a linear recurrence set [18], is decidable and implies nontermination; however, if we cannot find such a recurrence set, we cannot conclude termination.
In this paper we present a new kind of nontermination argument, which we call geometric nontermination argument (GNTA). Unlike a recurrence set, a geometric nontermination argument does not only imply nontermination, it also explicitly represents an infinite program execution. An infinite program execution that is represented by a geometric nontermination argument can be written as a pointwise sum of several geometric series. We show that such an infinite execution exists for each deterministic conjunctive loop program that is nonterminating and whose transition matrix has only nonnegative eigenvalues.
We restrict ourselves to linear lasso programs. A lasso program consists of a single while loop that is preceded by straight-line code. The name refers to the lasso-shaped form of the control flow graph. Usually, linear lasso programs do not occur as stand-alone programs. Instead, they are used as a finite representation of an infinite path in a control flow graph, for example in (potentially spurious) counterexamples in termination analysis [14,6,19,22,23,29,30,20], stability analysis [10,31], cost analysis [1,17], or the verification of temporal properties [13,12,16] for programs.

Fig. 1: Three nonterminating linear lasso programs. Each has an infinite execution which is either a geometric series or a pointwise sum of geometric series.
The first lasso program is nondeterministic because the variable b gets some nondeterministic value in each iteration.
We present a constraint-based approach that allows us to check whether a conjunctive linear lasso program has a geometric nontermination argument and to synthesize one if it exists.
Our analysis is motivated by probably the simplest form of an infinite execution, namely one where the same state is always repeated. We call such a state a fixed point. For lasso programs we can reduce the check for the existence of a fixed point to a constraint solving problem as follows. Let us assume that the stem and the loop of the lasso program are given as formulas over primed and unprimed variables, $\mathrm{STEM}(x, x')$ and $\mathrm{LOOP}(x, x')$. The infinite sequence $s_0, s, s, s, \ldots$ is a nonterminating execution of the lasso program iff the assignment $x_0 \mapsto s_0$, $x \mapsto s$ is a satisfying assignment for the constraint $\mathrm{STEM}(x_0, x) \wedge \mathrm{LOOP}(x, x)$. In this paper, we present a constraint that is not only satisfiable if the program has a fixed point, but is also satisfiable if the program has a nonterminating execution that can be written as a pointwise sum of geometric series.
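The fixed-point reduction can be illustrated with a small sketch. The loop below is hypothetical (it does not appear in the paper's figures); for a deterministic update, a state $s$ with $\mathrm{LOOP}(s, s)$ is exactly a state that satisfies the guard and is left unchanged by the update:

```python
# Fixed-point check for a deterministic loop, sketched for the hypothetical
# one-variable loop  while (x <= 10): x := 0.5 * x + 1.
# A state s is a fixed point iff LOOP(s, s) holds: the guard admits s and
# the update maps s to itself.

def loop_guard(x):
    return x <= 10          # hypothetical guard

def loop_update(x):
    return 0.5 * x + 1      # hypothetical update

def is_fixed_point(s, eps=1e-9):
    """(s, s) in LOOP: guard holds and the update leaves s unchanged."""
    return loop_guard(s) and abs(loop_update(s) - s) < eps

# For x' = 0.5 x + 1 the unique fixed point is x* = 2 (solve x = 0.5 x + 1).
x_star = 2.0
assert is_fixed_point(x_star)

# The fixed point yields the trivial infinite execution x*, x*, x*, ...
state = x_star
for _ in range(100):
    assert loop_guard(state)
    state = loop_update(state)
assert abs(state - x_star) < 1e-9
```

Here the satisfying assignment of $\mathrm{LOOP}(x, x)$ was computed by hand; in the general setting it is found by an SMT solver.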
Let us motivate the representation of infinite executions as sums of geometric series in three steps. Figure 1a depicts a lasso program that does not have a fixed point but has the following infinite execution.
$\binom{2}{1}, \binom{7}{1}, \binom{22}{1}, \binom{67}{1}, \ldots$
We can write this infinite execution as a geometric series where for $k > 1$ the $k$-th state is the sum $x_1 + \sum_{i=0}^{k-2} \lambda^i y$, where $x_1 = \binom{2}{1}$, $y = \binom{5}{0}$, and $\lambda = 3$. The state $x_1$ is the state before the loop is executed for the first time; intuitively, $y$ is the direction in which the execution is moving initially, and $\lambda$ is the speed at which the execution continues to move in this direction.
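The listed states can be reproduced programmatically; a minimal sketch of the geometric-series representation (note that the increments 5, 15, 45 grow with ratio 3):

```python
# The infinite execution of Figure 1a as a geometric series: starting from
# x1 = (2, 1), each later state is x1 plus a geometric sum of the direction
# vector y = (5, 0); the increments 5, 15, 45, ... grow with ratio 3.

x1 = (2, 1)
y = (5, 0)
ratio = 3

def state(k):
    """k-th state (k >= 1): x1 + sum_{i=0}^{k-2} ratio^i * y."""
    s = sum(ratio**i for i in range(k - 1))  # geometric sum, empty for k = 1
    return (x1[0] + s * y[0], x1[1] + s * y[1])

assert [state(k) for k in range(1, 5)] == [(2, 1), (7, 1), (22, 1), (67, 1)]
```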
Next, let us consider the lasso program depicted in Figure 1b which has the following infinite execution.
We cannot write this execution as a geometric series as we did above. Intuitively, the reason is that the values of the two variables are increasing at different speeds, and hence this execution is not moving in a single direction. However, we can write this infinite execution as a sum of geometric series where for $k > 1$ the $k$-th state can be written as the sum
$x_1 + \sum_{i=0}^{k-2} Y \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}^i \mathbf{1},$
where $Y = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$, $\lambda_1 = 3$, $\lambda_2 = 2$, and $\mathbf{1}$ denotes the column vector of ones.
Intuitively, our execution is moving in two different directions at different speeds.
The directions are reflected by the column vectors of Y , the values of λ 1 and λ 2 reflect the respective speeds.
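A small sketch of such a two-direction execution; the starting state `x1` below is a hypothetical choice (the execution listing from the figure is not reproduced), since only the directions and speeds matter for the shape of the execution:

```python
# Pointwise sum of two geometric series: the execution moves in direction
# y1 = (2, 0) at speed 3 and in direction y2 = (0, 1) at speed 2.  The
# starting state x1 is hypothetical here.

x1 = (1, 1)   # hypothetical starting state
y1, lam1 = (2, 0), 3
y2, lam2 = (0, 1), 2

def state(k):
    """k-th state (k >= 1): x1 + sum_{i=0}^{k-2} (lam1^i y1 + lam2^i y2)."""
    s1 = sum(lam1**i for i in range(k - 1))
    s2 = sum(lam2**i for i in range(k - 1))
    return (x1[0] + s1 * y1[0], x1[1] + s2 * y2[1])

states = [state(k) for k in range(1, 7)]
# Each variable advances geometrically, but at its own speed:
d1 = [b[0] - a[0] for a, b in zip(states, states[1:])]
d2 = [b[1] - a[1] for a, b in zip(states, states[1:])]
assert all(n == 3 * p for p, n in zip(d1, d1[1:]))  # increments of var 1: ratio 3
assert all(n == 2 * p for p, n in zip(d2, d2[1:]))  # increments of var 2: ratio 2
```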
Let us next consider the lasso program in Figure 1c, which has an infinite execution that continues $\binom{10}{2}, \binom{32}{4}, \binom{100}{8}, \ldots$ We cannot write this execution as a pointwise sum of geometric series in the form that we used above. Intuitively, the problem is that one of the initial directions contributes at two different speeds to the overall progress of the execution. However, we can write this infinite execution as a pointwise sum of geometric series where for $k > 1$ the $k$-th state can be written as the sum
$x_1 + \sum_{i=0}^{k-2} Y \begin{pmatrix} \lambda_1 & \mu \\ 0 & \lambda_2 \end{pmatrix}^i \mathbf{1},$
where the off-diagonal entry $\mu$ makes the second direction contribute to the first, and $\mathbf{1}$ again denotes the column vector of ones. We call the tuple $(x_0, x_1, Y, \lambda_1, \lambda_2, \mu)$, which we use as a finite representation for the infinite execution, a geometric nontermination argument.
In this paper, we formally introduce the notion of a geometric nontermination argument for linear lasso programs (Section 3) and we prove that each nonterminating deterministic conjunctive linear loop program whose transition matrix has only nonnegative real eigenvalues has a geometric nontermination argument, i.e., each such nonterminating linear loop program has an infinite execution which can be written as a sum of geometric series (Section 4).

Preliminaries
We denote vectors x with bold symbols and matrices with uppercase Latin letters.Vectors are always understood to be column vectors, 1 denotes a vector of ones, 0 denotes a vector of zeros (of the appropriate dimension), and e i denotes the i-th unit vector.A list of notation can be found on page 18.

Linear Lasso Programs
In this work, we consider linear lasso programs: programs that consist of a stem (a piece of straight-line code) and a single loop. We use binary relations over the program's states to define the stem and the loop transition relation. Variables are assumed to be real-valued.
We denote by x the vector of n variables (x 1 , . . ., x n ) T ∈ R n corresponding to program states, and by x ′ = (x ′ 1 , . . ., x ′ n ) T ∈ R n the variables of the next state.
Definition 1 (Linear Lasso Program). A (conjunctive) linear lasso program $L = (\mathrm{STEM}, \mathrm{LOOP})$ consists of two binary relations defined by formulas with the free variables $x$ and $x'$ of the form
$A \binom{x}{x'} \le b$
for some matrix $A \in \mathbb{R}^{m \times 2n}$ and some vector $b \in \mathbb{R}^m$.
A linear loop program is a linear lasso program L without stem, i.e., a linear lasso program such that the relation STEM is equivalent to true.
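A relation of the form in Definition 1 can be checked mechanically. The following sketch encodes the hypothetical loop while (x >= 0): x := x + 1 (not one of the paper's examples) as a conjunction of linear inequalities over $(x, x')$:

```python
# Definition 1 represents a transition relation as A (x, x')^T <= b.
# Hypothetical loop  while (x >= 0): x := x + 1  as such a polyhedron:
#   guard  x >= 0      ->  -x <= 0
#   update x' = x + 1  ->  x' - x <= 1  and  x - x' <= -1
A = [(-1, 0), (-1, 1), (1, -1)]
b = [0, 1, -1]

def in_relation(x, x_next):
    """Check membership of (x, x') in the relation defined by A and b."""
    z = (x, x_next)
    return all(sum(a_i * z_i for a_i, z_i in zip(row, z)) <= b_i
               for row, b_i in zip(A, b))

assert in_relation(0, 1) and in_relation(5, 6)
assert not in_relation(-1, 0)   # guard violated
assert not in_relation(3, 5)    # update violated
```

Strict inequalities are deliberately excluded; the relation is a topologically closed polyhedron, which Lemma 8 below relies on.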
Definition 2 (Deterministic Linear Lasso Program). A linear loop program $L$ is called deterministic iff its loop transition LOOP can be written in the form
$Gx \le g \;\wedge\; x' = Mx + m$
for some matrices $G \in \mathbb{R}^{m \times n}$, $M \in \mathbb{R}^{n \times n}$, and vectors $g \in \mathbb{R}^m$ and $m \in \mathbb{R}^n$.

Definition 3 (Nontermination). A linear lasso program $L$ is nonterminating iff there is an infinite sequence of states $x_0, x_1, \ldots$, called an infinite execution of $L$, such that $(x_0, x_1) \in \mathrm{STEM}$ and $(x_t, x_{t+1}) \in \mathrm{LOOP}$ for all $t \ge 1$.

Jordan Normal Form
Let $M \in \mathbb{R}^{n \times n}$ be a real square matrix. If there is an invertible square matrix $S$ and a diagonal matrix $D$ such that $M = SDS^{-1}$, then $M$ is called diagonalizable. The column vectors of $S$ form the basis over which $M$ has diagonal form. In general, real matrices are not diagonalizable. However, every real square matrix $M$ with real eigenvalues has a representation which is almost diagonal, called Jordan normal form. This is a matrix that is zero except for the eigenvalues on the diagonal and one superdiagonal containing ones and zeros.

Formally, a Jordan normal form is a matrix $J = \mathrm{diag}(J_{i_1}(\lambda_1), \ldots, J_{i_k}(\lambda_k))$ where $\lambda_1, \ldots, \lambda_k$ are the eigenvalues of $M$ and the real square matrices $J_i(\lambda) \in \mathbb{R}^{i \times i}$ are Jordan blocks,
$J_i(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}.$
The subspace corresponding to each distinct eigenvalue is called its generalized eigenspace and its basis vectors generalized eigenvectors.
Theorem 4 (Jordan Normal Form).For each real square matrix M ∈ R n×n with real eigenvalues, there is an invertible real square matrix V ∈ R n×n and a Jordan normal form J ∈ R n×n such that M = V JV −1 .
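The completeness proof below relies on powers of Jordan blocks. A quick numerical sanity check of the standard closed form $J_2(\lambda)^m = \begin{pmatrix}\lambda^m & m\lambda^{m-1}\\ 0 & \lambda^m\end{pmatrix}$:

```python
# Powers of a 2x2 Jordan block J_2(lambda): repeated multiplication agrees
# with the closed form  J^m = [[l^m, m*l^(m-1)], [0, l^m]].

def matmul(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def jordan_power(l, m):
    j = [[l, 1], [0, l]]
    acc = [[1, 0], [0, 1]]      # identity
    for _ in range(m):
        acc = matmul(acc, j)
    return acc

l, m = 2, 5
assert jordan_power(l, m) == [[l**m, m * l**(m - 1)], [0, l**m]]
```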

Geometric Nontermination Arguments
Fix a conjunctive linear lasso program $L = (\mathrm{STEM}, \mathrm{LOOP})$ and let $A \in \mathbb{R}^{m \times 2n}$ and $b \in \mathbb{R}^m$ define the loop transition, i.e., $\mathrm{LOOP}(x, x') = \big(A \binom{x}{x'} \le b\big)$.

Definition 5 (Geometric Nontermination Argument). A tuple $(x_0, x_1, y_1, \ldots, y_k, \lambda_1, \ldots, \lambda_k, \mu_1, \ldots, \mu_{k-1})$ with $x_0, x_1, y_i \in \mathbb{R}^n$ and $\lambda_i, \mu_i \in \mathbb{R}$ is called a geometric nontermination argument for the linear lasso program $L = (\mathrm{STEM}, \mathrm{LOOP})$ iff all of the following statements hold.
(domain) $\lambda_1, \ldots, \lambda_k \ge 0$ and $\mu_1, \ldots, \mu_{k-1} \ge 0$,
(init) $(x_0, x_1) \in \mathrm{STEM}$,
(point) $A \binom{x_1}{x_1 + \sum_{i=1}^{k} y_i} \le b$,
(ray) $A \binom{y_i}{\lambda_i y_i + \mu_{i-1} y_{i-1}} \le 0$ for each $1 \le i \le k$, where $y_0 := 0$ and $\mu_0 := 0$.
The number $k$ is called the size of the geometric nontermination argument. We collect the coefficients $\lambda_i$ and $\mu_i$ in the upper triangular matrix
$U := \begin{pmatrix} \lambda_1 & \mu_1 & & \\ & \lambda_2 & \ddots & \\ & & \ddots & \mu_{k-1} \\ & & & \lambda_k \end{pmatrix}. \quad (1)$

The existence of a geometric nontermination argument can be checked using an SMT solver. The constraints given by (domain), (init), (point), and (ray) are nonlinear algebraic constraints; the satisfiability of these constraints is decidable. Moreover, if the linear lasso program is given as a deterministic update, we can compute its eigenvalues. If the eigenvalues are known, we can assign values to $\lambda_1, \ldots, \lambda_k$, and the constraints become linear and can thus be decided efficiently.
Proposition 6 (Soundness).If there is a geometric nontermination argument for a linear lasso program L, then L is nonterminating.
Proof. Let $Y := (y_1 \cdots y_k)$ denote the matrix containing the vectors $y_i$ as columns, and let $U$ be the matrix defined in (1). Following Definition 3, we show that the linear lasso program $L$ has the infinite execution
$x_0,\; x_1,\; x_1 + Y\mathbf{1},\; x_1 + Y(I + U)\mathbf{1},\; \ldots,\; x_1 + Y\Big(\textstyle\sum_{j=0}^{t-1} U^j\Big)\mathbf{1},\; \ldots \quad (2)$
From (init) we get $(x_0, x_1) \in \mathrm{STEM}$. It remains to show that for all $t \ge 0$
$A \binom{x_1 + Y\left(\sum_{j=0}^{t-1} U^j\right)\mathbf{1}}{x_1 + Y\left(\sum_{j=0}^{t} U^j\right)\mathbf{1}} \le b. \quad (3)$
According to (domain) the matrix $U$ has only nonnegative entries, so the same holds for the matrix $Z := \sum_{j=0}^{t-1} U^j$. Hence $Z\mathbf{1}$ has only nonnegative entries, and thus $YZ\mathbf{1}$ can be written as $\sum_{i=1}^{k} \alpha_i y_i$ for some $\alpha_i \ge 0$. We multiply the inequality number $i$ from (ray) with $\alpha_i$ and get
$A \binom{\alpha_i y_i}{\alpha_i (\lambda_i y_i + \mu_{i-1} y_{i-1})} \le 0, \quad (4)$
where we use the convenience notation $y_0 := 0$ and $\mu_0 := 0$. Now we sum (4) for all $i$ and add (point) to get
$A \binom{x_1 + \sum_i \alpha_i y_i}{x_1 + \sum_i y_i + \sum_i \alpha_i (\lambda_i y_i + \mu_{i-1} y_{i-1})} \le b. \quad (5)$
By definition of $\alpha_i$, we have $\sum_i \alpha_i y_i = YZ\mathbf{1}$ and $\sum_i y_i + \sum_i \alpha_i (\lambda_i y_i + \mu_{i-1} y_{i-1}) = Y\mathbf{1} + YUZ\mathbf{1} = Y\big(\sum_{j=0}^{t} U^j\big)\mathbf{1}$. Therefore (3) and (5) are the same, which concludes this proof.

⊓ ⊔
Example 7 (Closed Form of the Infinite Execution). Write $U =: N + D$, where $N$ is a nilpotent matrix and $D$ is a diagonal matrix. This decomposition yields a closed form for the states $x_1 + Y\big(\sum_{j=0}^{t-1} U^j\big)\mathbf{1}$ of the infinite execution (2).

Completeness
First we show that a linear loop program has a GNTA if it has a bounded infinite execution. In the next section we use this to prove our completeness result.

Bounded Infinite Executions
Hence the sequence $(z_k)_{k \ge 1}$ is a Cauchy sequence and thus converges to some $z^* \in \mathbb{R}^n$. We will show that $z^*$ is the desired fixed point: one can exhibit pairs in LOOP that converge to $(z^*, z^*)$, and since the polyhedron defined by LOOP is topologically closed, we have $(z^*, z^*) \in \mathrm{LOOP}$.

Note that Lemma 8 does not transfer to lasso programs: there might be only one fixed point, and the stem might exclude this point (e.g., $a = -0.5$ and $b = 3.5$ in the example from Figure 1a).
Because fixed points give rise to trivial geometric nontermination arguments, we can derive a criterion for the existence of geometric nontermination arguments from Lemma 8.

Proof. By Lemma 8 there is a fixed point $x^*$ such that $(x^*, x^*) \in \mathrm{LOOP}$. We choose $x_0 = x_1 = x^*$ and $k = 0$; this satisfies (point) and (ray) and thus is a geometric nontermination argument for $L$.
⊓ ⊔

Example 10. Note that according to our definition of a linear lasso program, the relation LOOP is a topologically closed set. If we allowed the formula defining LOOP to also contain strict inequalities, Lemma 8 would no longer hold: the following program is nonterminating and has a bounded infinite execution, but it does not have a fixed point. However, the topological closure of the relation LOOP contains the fixed point $a = 0$.
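Since the program of Example 10 is not reproduced above, the following sketch uses a standard instance of the phenomenon, the hypothetical loop while (a > 0): a := a / 2: every execution is bounded, no reachable state is a fixed point, and the only candidate, a = 0, lies only in the topological closure of the strict guard:

```python
# Hypothetical loop  while (a > 0): a := a / 2  (Example 10's program is
# elided above).  Exact rationals avoid floating-point underflow.
from fractions import Fraction

a = Fraction(1)
states = []
for _ in range(50):
    assert a > 0            # the strict guard always holds: nonterminating
    states.append(a)
    a = a / 2

assert all(s <= 1 for s in states)        # the execution is bounded
assert all(s != s / 2 for s in states)    # no visited state is a fixed point
assert min(states) == Fraction(1, 2**49)  # states approach the limit a = 0
```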

Nonnegative Eigenvalues
This section is dedicated to the proof of the following completeness result for deterministic linear loop programs.
Theorem 11 (Completeness).If a deterministic linear loop program L of the form while (Gx ≤ g) do x := M x + m with n variables is nonterminating and M has only nonnegative real eigenvalues, then there is a geometric nontermination argument for L of size at most n.
To prove this completeness theorem, we need to construct a GNTA from a given infinite execution.The following lemma shows that we can restrict our construction to exclude all linear subspaces that have a bounded execution.
Lemma 12 (Loop Disassembly). Let $L = (\mathrm{true}, \mathrm{LOOP})$ be a linear loop program over $\mathbb{R}^n = U \oplus V$, where $U$ and $V$ are linear subspaces of $\mathbb{R}^n$. Suppose $L$ is nonterminating and there is an infinite execution that is bounded when projected to the subspace $U$. Let $x_U$ be the fixed point in $U$ that exists according to Lemma 8. Then the linear loop program $L_V$ that we get by projecting to the subspace $V + x_U$ is nonterminating. Moreover, if $L_V$ has a GNTA of size $k$, then $L$ has a GNTA of size $k$.
Proof. Without loss of generality, we are in a basis of $U$ and $V$ such that these spaces are nicely separated by the use of different variables. Using the infinite execution of $L$ that is bounded on $U$, we can carry out the construction from the proof of Lemma 8 to get an infinite execution $z_0, z_1, \ldots$ that yields the fixed point $x_U$ when projected to $U$. We fix $x_U$ in the loop transition by replacing all variables from $U$ with the values from $x_U$ and get the linear loop program $L_V$ (this is the projection to $V + x_U$). Importantly, the projection of $z_0, z_1, \ldots$ to $V + x_U$ is still an infinite execution, hence the loop $L_V$ is nonterminating. Given a GNTA for $L_V$, we can construct a GNTA for $L$ by adding the vector $x_U$ to $x_0$ and $x_1$. ⊓ ⊔

Proof (of Theorem 11). The polyhedron corresponding to the loop transition of the deterministic linear loop program is the guard polyhedron $\{x \in \mathbb{R}^n \mid Gx \le g\}$. Define $Y$ to be the convex cone spanned by the rays of the guard polyhedron,
$Y := \{y \in \mathbb{R}^n \mid Gy \le 0\}.$
Let $U$ denote the direct sum of the generalized eigenspaces for the eigenvalues $0 \le \lambda < 1$. Any infinite execution is necessarily bounded on the subspace $U$, since on this space the map $x \mapsto Mx + m$ is a contraction. Let $U^\perp$ denote the subspace of $\mathbb{R}^n$ orthogonal to $U$. The space $Y \cap U^\perp$ is a linear subspace of $\mathbb{R}^n$, and any infinite execution in its complement is bounded. Hence, according to Lemma 12, we can turn our analysis to the subspace $Y \cap U^\perp + x$ for some $x \in Y^\perp \oplus U$ for the rest of the proof. From now on, we implicitly assume that we are in this space without changing any of the notation.
Part 1. In this part we show that there is a basis $y_1, \ldots, y_k \in Y$ such that $M$ turns into a matrix $U$ of the form given in (1) with $\lambda_1, \ldots, \lambda_k, \mu_1, \ldots, \mu_{k-1} \ge 0$. Since we allow $\mu_i$ to be positive between different eigenvalues (Example 14 illustrates why), this is not necessarily a Jordan normal form, and the vectors $y_i$ are not necessarily generalized eigenvectors. We choose a basis $v_1, \ldots, v_k$ such that $M$ is in Jordan normal form with the eigenvalues ordered by size such that the largest eigenvalues come first. Define $V_1 := Y \cap U^\perp$ and let $V_1 \supset \ldots \supset V_k$ be a strictly descending chain of linear subspaces where $V_i$ is spanned by $v_i, \ldots, v_k$.
We define a basis $w_1, \ldots, w_k$ by doing the following for each Jordan block of $M$, starting with $i = 1$. Let $M^{(i)}$ be the projection of $M$ to the linear subspace $V_i$ and let $\lambda$ be the largest eigenvalue of $M^{(i)}$. The $m$-fold iteration of a Jordan block $J_\ell(\lambda)$ for $m \ge \ell$ is given by
$J_\ell(\lambda)^m = \Big(\tbinom{m}{j-i}\,\lambda^{m-(j-i)}\Big)_{1 \le i, j \le \ell}, \quad (7)$
where $\binom{m}{r} := 0$ for $r < 0$. Let $z_0, z_1, z_2, \ldots$ be an infinite execution of the loop $L$ in the basis $v_i, \ldots, v_k$ projected to the space $V_i$. Since by Lemma 12 we can assume that there are no fixed points on this space, $|z_t| \to \infty$ as $t \to \infty$ in each of the top $\ell$ components. Asymptotically, the largest eigenvalue $\lambda$ dominates, and in each row of $J_\ell(\lambda)^m$ in (7) the entry $\binom{m}{j}\lambda^{m-j}$ in the rightmost column grows the fastest, with an asymptotic rate of $\Theta(m^j \lambda^m)$. Therefore the sign of the component corresponding to the basis vector $v_{i+\ell}$ determines whether the top $\ell$ entries tend to $+\infty$ or $-\infty$, and the top $\ell$ entries of $z_t$ corresponding to the top Jordan block will all eventually have the same sign. Because no state can violate the guard, the guard cannot constrain the infinite execution in the direction of $v_j$ or $-v_j$, i.e., $G_{V_i} v_j \le 0$ for each $i \le j \le i+\ell$ or $G_{V_i} v_j \ge 0$ for each $i \le j \le i+\ell$, where $G_{V_i}$ is the projection of $G$ to the subspace $V_i$. So without loss of generality the former holds (otherwise we use $-v_j$ instead of $v_j$ for $i \le j \le i+\ell$), and for $i \le j \le i+\ell$ we get $v_j \in Y + V_i^\perp$, where $V_i^\perp$ is the space spanned by $v_1, \ldots, v_{i-1}$. Hence there is a $u_j \in V_i^\perp$ such that $w_j := v_j + u_j$ is an element of $Y$. Now we move on to the subspace $V_{i+\ell+1}$, discarding the top Jordan block.
Let $T$ be the matrix $M$ written in the basis $w_1, \ldots, w_k$. Then $T$ is of upper triangular form: whenever we apply $M$ to $w_i$ we get $\lambda_i w_i + u_i$ ($w_i$ was an eigenvector in the space $V_i$) where $u_i \in V_i^\perp$, the space spanned by $v_1, \ldots, v_{i-1}$ (which is identical to the space spanned by $w_1, \ldots, w_{i-1}$). Moreover, since we processed every Jordan block entirely, we have for $w_i$ and $w_j$ from the same generalized eigenspace ($T_{i,i} = T_{j,j}$) that for $i > j$
$T_{j,i} \in \{0, 1\}$ and $T_{j,i} = 1$ implies $i = j + 1. \quad (8)$
In other words, when projected to any generalized eigenspace, $T$ consists only of Jordan blocks. Now we change basis again in order to get the upper triangular matrix $U$ defined in (1) from $T$. For this we define the vectors
$y_i := \beta_i \sum_{j \le i} \alpha_{i,j} w_j$
with nonnegative real numbers $\alpha_{i,j} \ge 0$, $\alpha_{i,i} > 0$, and $\beta_i > 0$ to be determined later. Define the matrices $W := (w_1 \cdots w_k)$, $Y := (y_1 \cdots y_k)$, and $\alpha := (\alpha_{i,j})_{1 \le j \le i \le k}$. So $\alpha$ is a nonnegative lower triangular matrix with a positive diagonal and hence invertible. Since $\alpha$ and $W$ are invertible, the matrix $Y = \mathrm{diag}(\beta)\,\alpha\, W$ is invertible as well, and thus the vectors $y_1, \ldots, y_k$ form a basis. Moreover, we have $y_i$ in the cone $Y$ for each $i$, since $\alpha \ge 0$, $\beta > 0$, and the cone $Y$ is convex. Therefore we get
$GY \le 0. \quad (9)$
We will first choose $\alpha$. Define $T =: D + N$, where $D = \mathrm{diag}(\lambda_1, \ldots, \lambda_k)$ is a diagonal matrix and $N$ is nilpotent. Since $w_1$ is an eigenvector of $M$, we have $M y_1 = \lambda_1 y_1$. To get the form in (1), we need for all $i > 1$
$M y_i = \lambda_i y_i + \mu_{i-1} y_{i-1}. \quad (10)$
Written in the basis $w_1, \ldots, w_k$ (i.e., multiplied with $W^{-1}$), condition (10) becomes a system of constraints on the coefficients $\alpha_{i,j}$, which we refer to as (11). Hence we want to pick $\alpha$ such that (11) is satisfied. First note that these constraints are independent of $\beta$ if we set $\mu_{i-1} := \beta_{i-1}^{-1} > 0$, so we can leave assigning a value to $\beta$ to a later part of the proof.
We distinguish two cases. First, if $\lambda_{i-1} \ne \lambda_i$, then $\lambda_j - \lambda_i$ is positive for all $j < i$ because larger eigenvalues come first. Since $N$ is nilpotent and upper triangular, $N \sum_{j \le i} \alpha_{i,j} e_j$ is a linear combination of $e_1, \ldots, e_{i-1}$ (i.e., only the first $i - 1$ entries are nonzero). Whatever values this vector assumes, we can increase the parameters $\alpha_{i,j}$ for $j < i$ to make (11) larger and increase the parameters $\alpha_{i-1,j}$ for $j < i$ to make (11) smaller.
Second, let $\ell < i$ be minimal such that $\lambda_\ell = \lambda_i$; then $w_\ell, \ldots, w_i$ are from the same generalized eigenspace. For the rows $1, \ldots, \ell-1$ we can proceed as we did in the first case, and for the rows $\ell, \ldots, i-1$ we note that by (8) we have $N e_j = T_{j-1,j}\, e_{j-1}$. Hence the remaining constraints from (11) are solved by $\alpha_{i,j+1} T_{j,j+1} = \alpha_{i-1,j}$ for $\ell \le j < i$. This is only a problem if there is a $j$ such that $T_{j-1,j} = 0$, i.e., if there are multiple Jordan blocks for the same eigenvalue. In this case, we can reduce the dimension of the generalized eigenspace to the dimension of the largest Jordan block by combining all Jordan blocks: if $M y_i = \lambda y_i + y_{i-1}$ and $M y_j = \lambda y_j + y_{j-1}$, then $M(y_i + y_j) = \lambda(y_i + y_j) + (y_{i-1} + y_{j-1})$, and if $M y_i = \lambda y_i + y_{i-1}$ and $M y_j = \lambda y_j$, then $M(y_i + y_j) = \lambda(y_i + y_j) + y_{i-1}$. In both cases we can replace the basis vector $y_i$ with $y_i + y_j$ without reducing the expressiveness of the GNTA.
Importantly, there are no cyclic dependencies in the values of $\alpha$, because none of the coefficients in $\alpha$ can ever become too large: each constraint can always be satisfied by increasing coefficients further. Therefore we can choose $\alpha \ge 0$ such that (10) is satisfied for all $i > 1$, and hence the basis $y_1, \ldots, y_k$ brings $M$ into the desired form (1).
Part 2. In this part we construct the geometric nontermination argument and check the constraints from Definition 5. Since $L$ has an infinite execution, there is a point $x$ that fulfills the guard, i.e., $Gx \le g$. We choose $x_1 := x + Y\gamma$ with $\gamma \ge 0$ to be determined later. Moreover, we choose $\lambda_1, \ldots, \lambda_k$ and $\mu_1, \ldots, \mu_{k-1}$ from the entries of $U$ given in (1). The size of our GNTA is $k$, the number of vectors $y_1, \ldots, y_k$. These vectors form a basis of $Y \cap U^\perp$, which is a subspace of $\mathbb{R}^n$; thus $k \le n$, as required.
The constraint (domain) is satisfied by construction, and the constraint (init) is vacuous since $L$ is a loop program. For (ray), note that (9) and (10) yield $G y_i \le 0$ and $M y_i = \lambda_i y_i + \mu_{i-1} y_{i-1}$, which is exactly what (ray) requires. The remainder of this proof shows that we can choose $\beta$ and $\gamma$ such that (point) is satisfied, i.e., that
$G x_1 \le g \quad\text{and}\quad M x_1 + m = x_1 + Y\mathbf{1}. \quad (12)$
The vector $x_1$ satisfies the guard since $G x_1 = Gx + GY\gamma \le g + 0$ according to (9), which yields the first part of (12). For the second part we observe the following.
Since the columns of $Y$ form a basis, the matrix $Y$ is invertible, so with $\bar{x}_1 := Y^{-1} x_1$ and $\bar{m} := Y^{-1} m$ the second part of (12) becomes
$(U - I)\,\bar{x}_1 + \bar{m} = \mathbf{1}. \quad (13)$
Equation (13) is now conveniently in the basis $y_1, \ldots, y_k$, and all that remains to show is that we can choose $\gamma \ge 0$ and $\beta > 0$ such that (13) is satisfied. We proceed for each (not quite Jordan) block of $U$ separately, i.e., we assume that we are looking at the subspace $y_j, \ldots, y_i$ with $\mu_i = \mu_{j-1} = 0$ and $\mu_\ell > 0$ for all $j \le \ell < i$. If this space only contains eigenvalues that are larger than 1, then $U - I$ is invertible and has only nonnegative entries. By using large enough values for $\beta$, we can make $\bar{x}_1$ and $\bar{m}$ small enough such that $\mathbf{1} \ge (U - I)\bar{x}_1 + \bar{m}$. Then we just need to pick $\gamma$ appropriately.
If there is at least one eigenvalue 1, then $U - I$ is not invertible, so (13) could be overconstrained. Notice that $\mu_\ell > 0$ for all $j \le \ell < i$, so only the bottom entry of the vector equation (13) is not covered by $\gamma$. Moreover, since the eigenvalues are ordered decreasingly and all eigenvalues in our current subspace are $\ge 1$, we conclude that the eigenvalue for the bottom entry is 1. (Furthermore, $i$ is the highest index, since each eigenvalue occurs in only one block.) Thus we get the equation $\bar{m}_i = 1$. If $\bar{m}_i$ is positive, this equation has a solution, since we can adjust $\beta_i$ accordingly. If it is zero, then the execution on the space spanned by $y_i$ is bounded, which we can rule out by Lemma 12. It remains to rule out that $\bar{m}_i$ is negative. Let $U$ be the generalized eigenspace of the eigenvalue 1 and use Lemma 13 below to conclude that $o := N^{k-1} m + u \in Y$ for some $u \in U^\perp$. We have that $M o = M(N^{k-1} m + u) = M u \in U^\perp$, so $o$ is a candidate to pick for the vector $w_i$. Therefore without loss of generality we did so in part 1 of this proof, and since $y_i$ is in the convex cone spanned by the basis $w_1, \ldots, w_k$, we get $\bar{m}_i > 0$.
⊓ ⊔

Lemma 13 (Deterministic Loops with Eigenvalue 1). Let $M = I + N$ and let $N$ be nilpotent with nilpotence index $k$ (i.e., $N^k = 0$ and $N^{k-1} \ne 0$). If $G N^{k-1} m \not\le 0$, then the deterministic linear loop program while ($Gx \le g$) do $x := Mx + m$ terminates.

Proof. We show termination by providing a $k$-nested ranking function [26, Def. 4.7]. By [26, Lem. 3.3] and [26, Thm. 4.10], this implies that $L$ is terminating. According to the premise, $G N^{k-1} m \not\le 0$, hence there is at least one positive entry in the vector $G N^{k-1} m$. Let $h^T$ be a row vector of $G$ such that $h^T N^{k-1} m =: \delta > 0$, and let $h_0 \in \mathbb{R}$ be the corresponding entry of $g$. Let $x$ be any state and let $x'$ be a next state after the loop transition, i.e., $x' = Mx + m$. Define the affine-linear functions $f_j(x) := -h^T N^{k-j} x + c_j$ for $1 \le j \le k$ with constants $c_j \in \mathbb{R}$ to be determined later. Since every state $x$ satisfies the guard, we have $h^T x \le h_0$; with a suitable choice of the constants $c_j$, the functions $f_1, \ldots, f_k$ then form a $k$-nested ranking function. ⊓ ⊔

Example 14 (U is not in Jordan Form). The matrix $U$ defined in (1) and used in the completeness proof is generally not the Jordan normal form of the loop's transition matrix $M$. Consider the following linear loop program.
This program is nonterminating because $a$ grows exponentially and hence faster than $b$. It has the geometric nontermination argument $x_0 = \binom{9}{1}$, $x_1 = \binom{9}{1}$, $y_1 = \binom{12}{0}$, $y_2 = \binom{6}{1}$, $\lambda_1 = 3$, $\lambda_2 = 1$, $\mu_1 = 1$. The matrix corresponding to the linear loop update is
$M = \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix},$
which is diagonal (hence diagonalizable). Therefore $M$ is already in Jordan normal form. The matrix $U$ defined according to (1) is
$U = \begin{pmatrix} 3 & 1 \\ 0 & 1 \end{pmatrix}.$
The nilpotent component $\mu_1 = 1$ is important: there is no GNTA for this loop program with $\mu_1 = 0$, since the eigenspace of the eigenvalue 1 is spanned by $(0\ 1)^T$, which lies in the linear hull of the cone $Y$ but not in $Y$ itself. ♦
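Example 14's program text is not reproduced above; the following sketch assumes the loop while (a >= b): a := 3a; b := b + 1, i.e. M = diag(3, 1) and m = (0, 1), which is consistent with the stated GNTA. It checks the (ray) relations and compares the closed form of the infinite execution against direct iteration:

```python
# Checking Example 14's GNTA against a loop consistent with it.  The loop
# body (M, m) and the guard a >= b are assumptions, not quoted from the paper.

x1 = (9, 1)
y1, y2 = (12, 0), (6, 1)
lam1, lam2, mu1 = 3, 1, 1
M = [[3, 0], [0, 1]]      # assumed update matrix (diagonal, eigenvalues 3, 1)
m = (0, 1)                # assumed update vector

def mat_vec(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(2)) for i in range(2))

# (ray): M y1 = lam1 * y1  and  M y2 = lam2 * y2 + mu1 * y1
assert mat_vec(M, y1) == (lam1 * y1[0], lam1 * y1[1])
assert mat_vec(M, y2) == (lam2 * y2[0] + mu1 * y1[0], lam2 * y2[1] + mu1 * y1[1])

# Closed form x_{t+1} = x1 + Y (sum_{j<t} U^j) 1 with U = [[3, 1], [0, 1]]
# must reproduce the loop's actual execution x' = M x + m.
U = [[lam1, mu1], [0, lam2]]

def closed_form(t):
    s, v = [0, 0], [1, 1]          # s accumulates sum_{j<t} U^j (1, 1)
    for _ in range(t):
        s = [s[0] + v[0], s[1] + v[1]]
        v = list(mat_vec(U, v))
    yv = (y1[0] * s[0] + y2[0] * s[1], y1[1] * s[0] + y2[1] * s[1])
    return (x1[0] + yv[0], x1[1] + yv[1])

state = x1
for t in range(10):
    assert state[0] >= state[1]    # assumed guard a >= b holds forever
    assert closed_form(t) == state
    state = (M[0][0] * state[0] + m[0], M[1][1] * state[1] + m[1])
```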

Experiments
We implemented our method in the tool Ultimate LassoRanker, which is specialized for the analysis of lasso programs. LassoRanker is used by Ultimate Büchi Automizer, which analyzes termination of (general) C programs via the following approach [20]. Büchi Automizer iteratively picks lasso-shaped paths in the control flow graph, converts them to lasso programs, and lets LassoRanker analyze them. If LassoRanker is able to prove nontermination, a real counterexample to termination has been found; if LassoRanker is able to provide a termination argument (e.g., a linear ranking function), Büchi Automizer continues the analysis, but only on lasso-shaped paths for which the termination arguments obtained in former iterations are not applicable.
We applied Ultimate Büchi Automizer to the 631 benchmarks from the category Termination of the 5th software verification competition SV-COMP 2016 [4] in two different settings: one setting where we use our geometric nontermination arguments (GNTA) and one setting where we only synthesize fixed points (i.e., infinite executions where one state is always repeated).
In both settings the constraints were stated over the integers, and we used the SMT solver Z3 [21] with a timeout of 12 s to solve our constraints. The overall timeout for the termination analysis was 60 s. In the fixed point setting the tool was able to solve 441 benchmarks, and the overall time for solving the (linear) constraints was 56 seconds. In the GNTA setting the tool was able to solve 487 benchmarks, and the overall time for solving the (nonlinear) constraints was 2349 seconds. The GNTA setting was able to solve 47 benchmarks that could not be solved in the fixed point setting because there was a nonterminating execution that did not have a fixed point. The fixed point setting was able to solve 2 benchmarks that could not be solved in the GNTA setting because solving the nonlinear constraints took too long.
Related Work

One line of related work is focused on decidability questions for deterministic lasso programs. Tiwari [34] considered linear loop programs over the reals where only strict inequalities are used in the guard and proved that termination is decidable. Braverman [5] generalized this result to loop programs that use strict and non-strict inequalities in the guard. Furthermore, he proved that termination is also decidable for homogeneous deterministic loop programs over the integers. Rebiha et al. [32] generalized the result to integer loops where the update matrix has only real eigenvalues. Ouaknine et al. [27] generalized the result to integer lassos where the update matrix of the loop is diagonalizable.
Another line of related work is also applicable to nondeterministic programs and uses a constraint-based synthesis of recurrence sets.The recurrence sets are defined by templates [35,18] or the constraint is given in a second order theory for bit vectors [15].These approaches can be used to find nonterminating lassos that do not have a geometric nontermination argument; however, this comes at the price that for nondeterministic programs an ∃∀∃-constraint has to be solved.

Conclusion
We presented a new approach to nontermination analysis for linear lasso programs.This approach is based on geometric nontermination arguments, which are an explicit representation of an infinite execution.These nontermination arguments can be found by solving a set of nonlinear constraints.In Section 4 we showed that the class of nonterminating linear lasso programs that have a geometric nontermination argument is quite large: it contains at least every deterministic linear loop program whose eigenvalues are nonnegative.We expect that this statement can be extended to encompass also negative and complex eigenvalues.
The synthesis of nontermination arguments is useful not only to discover infinite loops in program code, but also to accelerate termination analysis (a nonterminating lasso does not need to be checked for a termination argument) or overflow analysis.Furthermore, since geometric nontermination arguments readily provide an infinite execution, a discovered software fault is transparent to the user.

Corollary 9 (Bounded Infinite Executions). If the linear loop program $L = (\mathrm{true}, \mathrm{LOOP})$ has a bounded infinite execution, then it has a geometric nontermination argument of size 0.
Let $|\cdot|$ denote some norm. We call an infinite execution $(x_t)_{t \ge 0}$ bounded iff there is a real number $d \in \mathbb{R}$ such that the norm of each state is bounded by $d$, i.e., $|x_t| \le d$ for all $t$ (in $\mathbb{R}^n$ the notion of boundedness is independent of the choice of the norm).

Lemma 8 (Fixed Point). Let $L = (\mathrm{true}, \mathrm{LOOP})$ be a linear loop program. The linear loop program $L$ has a bounded infinite execution if and only if there is a fixed point $x^* \in \mathbb{R}^n$ such that $(x^*, x^*) \in \mathrm{LOOP}$.

Proof. If there is a fixed point $x^*$, then the loop has the bounded infinite execution $x^*, x^*, \ldots$. Conversely, let $(x_t)_{t \ge 0}$ be an infinite bounded execution. Boundedness implies that there is a $d \in \mathbb{R}$ such that $|x_t| \le d$ for all $t$. Consider the sequence $(z_k)_{k \ge 1}$ constructed from the states $x_t$.

Let $\bar{Y}$ denote the smallest linear subspace of $\mathbb{R}^n$ that contains the cone $Y$, i.e., $\bar{Y} = Y - Y$ using pointwise subtraction, and let $Y^\perp$ be the linear subspace of $\mathbb{R}^n$ orthogonal to $\bar{Y}$; hence $\mathbb{R}^n = \bar{Y} \oplus Y^\perp$. Let $P := \{x \in \mathbb{R}^n \mid Gx \le g\}$ denote the guard polyhedron. Its projection $P_{Y^\perp}$ to the subspace $Y^\perp$ is again a polyhedron. By the decomposition theorem for polyhedra [33, Cor. 7.1b], $P_{Y^\perp} = Q + C$ for some polytope $Q$ and some convex cone $C$. However, by definition of the subspace $Y^\perp$, the convex cone $C$ must be equal to $\{0\}$: for any $y \in C \subseteq Y^\perp$, we have $Gy \le 0$, thus $y \in Y$, and therefore $y$ is orthogonal to itself, i.e., $y = 0$. We conclude that $P_{Y^\perp}$ must be a polytope, and thus it is bounded. By assumption $L$ is nonterminating, so $L_{Y^\perp}$ is nonterminating, and since $P_{Y^\perp}$ is bounded, any infinite execution of $L_{Y^\perp}$ must be bounded.