1 Introduction

The problem of whether a program terminates is undecidable in general. One way to approach this problem in practice is to analyze the existence of termination arguments and nontermination arguments. The existence of a certain kind of termination argument, e.g., a linear ranking function, is decidable [4, 31] and implies termination. However, if we cannot find a linear ranking function, we cannot conclude nontermination. Conversely, the existence of a certain kind of nontermination argument, e.g., a linear recurrence set [20], is decidable and implies nontermination. However, if we cannot find such a recurrence set, we cannot conclude termination.

In this paper we present a new kind of nontermination argument, which we call geometric nontermination argument (GNTA). Unlike a recurrence set, a geometric nontermination argument does not only imply nontermination; it also explicitly represents an infinite program execution. Hence a user immediately sees whether the counterexample to termination is a fixpoint or an unbounded diverging execution. An infinite program execution that is represented by a geometric nontermination argument can be written as a pointwise sum of several geometric series. We show that such an infinite execution exists for each deterministic conjunctive loop program that is nonterminating and whose transition matrix has only nonnegative eigenvalues.

We restrict ourselves to linear lasso programs. A lasso program consists of a single while loop that is preceded by straight-line code. The name refers to the lasso-shaped form of the control flow graph. Usually, linear lasso programs do not occur as stand-alone programs. Instead, they are used as a finite representation of an infinite path in a control flow graph, for example, in (potentially spurious) counterexamples in termination analysis [6, 16, 21, 22, 24, 25, 32, 33, 37], stability analysis [11, 34], cost analysis [1, 19], or the verification of temporal properties [7, 13, 14, 15, 18] for programs.

We present a constraint-based approach that allows us to check whether a linear conjunctive lasso program has a geometric nontermination argument and to synthesize one if it exists.

Our analysis is motivated by what is probably the simplest form of infinite execution, namely an infinite execution in which the same state is repeated forever. We call such a state a fixed point. For lasso programs we can reduce the check for the existence of a fixed point to a constraint solving problem as follows. Let us assume that the stem and the loop of the lasso program are given as formulas over primed and unprimed variables \({\scriptstyle \mathrm {STEM}}(\varvec{x}, \varvec{x}')\) and \({\scriptstyle \mathrm {LOOP}}(\varvec{x}, \varvec{x}')\). The infinite sequence \(\varvec{s}_0, \bar{\varvec{s}}, \bar{\varvec{s}} , \bar{\varvec{s}},\ldots \) is a nonterminating execution of the lasso program iff the assignment \(\varvec{x}_0\mapsto \varvec{s}_0, \bar{\varvec{x}}\mapsto \bar{\varvec{s}}\) is a satisfying assignment for the constraint \({\scriptstyle \mathrm {STEM}}(\varvec{x}_0, \bar{\varvec{x}})\wedge {\scriptstyle \mathrm {LOOP}}(\bar{\varvec{x}}, \bar{\varvec{x}})\). In this paper, we present a constraint that is not only satisfiable if the program has a fixed point, but also satisfiable if the program has a nonterminating execution that can be written as a pointwise sum of geometric series.
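This fixed-point check is directly amenable to SMT solving. The following minimal sketch (Python, assuming the z3-solver package) asks whether the loop of Fig. 1a has a state that repeats forever; the guard a + b >= 3 is an assumption on our part, chosen to be consistent with the fixed point a = -0.5, b = 3.5 for Fig. 1a mentioned in Sect. 4.1, while the update a := 3a + 1 matches the execution shown below.

    # Fixed-point check for the loop: while (a + b >= 3): a := 3*a + 1; b := nondet();
    # (the guard is an assumption; the update matches the execution of Fig. 1a)
    from z3 import Reals, Solver, sat

    a, b = Reals('a b')          # the repeated state
    s = Solver()
    s.add(a + b >= 3)            # the guard holds in the repeated state
    s.add(3 * a + 1 == a)        # the update reproduces a; b is nondeterministic
    if s.check() == sat:
        print(s.model())         # e.g. a = -1/2, b = 7/2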

Fig. 1. Three nonterminating linear lasso programs. Each has an infinite execution which is either a geometric series or a pointwise sum of geometric series. The first lasso program is nondeterministic because the variable b gets some nondeterministic value in each iteration.

Let us motivate the representation of infinite executions as sums of geometric series in three steps. Fig. 1a depicts a lasso program that does not have a fixed point but has the following infinite execution.

$$ \left( {\begin{matrix} {2} \\ {0} \end{matrix}}\right) , \left( {\begin{matrix} {2} \\ {1} \end{matrix}}\right) , \left( {\begin{matrix} {7} \\ {1} \end{matrix}}\right) , \left( {\begin{matrix} {22} \\ {1} \end{matrix}}\right) , \left( {\begin{matrix} {67} \\ {1} \end{matrix}}\right) , \dots $$

We can write this infinite execution as a geometric series where for \(t>1\) the t-th state is the sum \(\varvec{x_1} + \sum _{i=0}^{t-2} \lambda ^i \varvec{y}\), where we have \(\varvec{x_1} = \left( {\begin{matrix} {2} \\ {1} \end{matrix}}\right) \), \(\varvec{y} = \left( {\begin{matrix} {5} \\ {0} \end{matrix}}\right) \), and \(\lambda = 3\). The state \(\varvec{x_1}\) is the state before the loop is executed for the first time; intuitively, \(\varvec{y}\) is the direction in which the execution initially moves and \(\lambda \) is the speed at which the execution continues to move in this direction.
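As a quick sanity check, the following plain-Python sketch evaluates this sum and reproduces the execution above:

    x1, y, lam = (2, 1), (5, 0), 3
    state = x1
    for t in range(1, 5):
        print(state)             # (2, 1), (7, 1), (22, 1), (67, 1)
        state = tuple(s + lam ** (t - 1) * yi for s, yi in zip(state, y))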

Next, let us consider the lasso program depicted in Fig. 1b which has the following infinite execution.

$$ \left( {\begin{matrix} {2} \\ {0} \end{matrix}}\right) , \left( {\begin{matrix} {2} \\ {1} \end{matrix}}\right) , \left( {\begin{matrix} {4} \\ {2} \end{matrix}}\right) , \left( {\begin{matrix} {10} \\ {4} \end{matrix}}\right) , \left( {\begin{matrix} {28} \\ {8} \end{matrix}}\right) , \dots $$

We cannot write this execution as a geometric series as we did above. Intuitively, the reason is that the values of both variables are increasing at different speeds and hence this execution is not moving in a single direction. However, we can write this infinite execution as a sum of geometric series where for \(t\in \mathbb {N}\backslash \{0\}\) the t-th state can be written as a sum \(\varvec{x_1} + \sum _{i=0}^{t-2} Y \big ( {\begin{matrix}\lambda _1&{}0\\ 0&{}\lambda _2\end{matrix}} \big )^i \varvec{1}\), where we have \(\varvec{x_1} = \left( {\begin{matrix} {2} \\ {1} \end{matrix}}\right) \), \(\varvec{Y} = \left( \begin{matrix} 2 &{} 0 \\ 0 &{} 1 \end{matrix} \right) \), \(\lambda _1=3,\lambda _2=2\) and \(\varvec{1}\) denotes the column vector of ones. Intuitively, our execution is moving in two different directions at different speeds. The directions are reflected by the column vectors of Y, the values of \(\lambda _1\) and \(\lambda _2\) reflect the respective speeds.

Let us next consider the lasso program in Fig. 1c which has the following infinite execution.

$$ \left( {\begin{matrix} {3} \\ {0} \end{matrix}}\right) , \left( {\begin{matrix} {3} \\ {1} \end{matrix}}\right) , \left( {\begin{matrix} {10} \\ {2} \end{matrix}}\right) , \left( {\begin{matrix} {32} \\ {4} \end{matrix}}\right) , \left( {\begin{matrix} {100} \\ {8} \end{matrix}}\right) , \dots $$

We cannot write this execution as a pointwise sum of geometric series in the form that we used above. Intuitively, the problem is that one of the initial directions contributes at two different speeds to the overall progress of the execution. However, we can write this infinite execution as a pointwise sum of geometric series where for \(t\in \mathbb {N}\backslash \{0\}\) the t-th state can be written as a sum \(\varvec{x_1} + \sum _{i=0}^{t-2} Y \big ( {\begin{matrix}\lambda _1&{}\mu \\ 0&{}\lambda _2\end{matrix}} \big )^i \varvec{1}\), where we have \(\varvec{x_1} = \left( {\begin{matrix} {3} \\ {1} \end{matrix}}\right) \), \(\varvec{Y} = \left( \begin{matrix} 4 &{} 3 \\ 0 &{} 1 \end{matrix} \right) \), \(\lambda _1=3,\lambda _2=2,\mu =1\) and \(\varvec{1}\) denotes the column vector of ones. We call the tuple \((\varvec{x_0}, \varvec{x_1}, Y, \lambda _1, \lambda _2, \mu )\) which we use as a finite representation for the infinite execution a geometric nontermination argument.
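The following sketch (numpy) evaluates this matrix form of the sum; with the parameters above it reproduces the execution of Fig. 1c, and with \(\varvec{x_1} = (2, 1)^T\), \(Y = \mathrm {diag}(2, 1)\), \(\lambda _1 = 3\), \(\lambda _2 = 2\), \(\mu = 0\) it reproduces the execution of Fig. 1b:

    import numpy as np

    x1 = np.array([3, 1])
    Y = np.array([[4, 3], [0, 1]])
    U = np.array([[3, 1], [0, 2]])   # diagonal: lambda_1, lambda_2; superdiagonal: mu
    state, step = x1, np.ones(2, dtype=int)
    for _ in range(4):
        print(state)                 # [3 1], [10 2], [32 4], [100 8]
        state = state + Y @ step     # add the next summand Y U^i 1
        step = U @ step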

In this paper, we formally introduce the notion of a geometric nontermination argument for linear lasso programs (Sect. 3) and we prove that each nonterminating deterministic conjunctive linear loop program whose transition matrix has only nonnegative real eigenvalues has a geometric nontermination argument, i.e., each such nonterminating linear loop program has an infinite execution which can be written as a sum of geometric series (Sect. 4).

2 Preliminaries

We denote vectors \(\varvec{x}\) with bold symbols and matrices with uppercase Latin letters. Vectors are always understood to be column vectors, \(\varvec{1}\) denotes a vector of ones, \(\varvec{0}\) denotes a vector of zeros (of the appropriate dimension), and \(\varvec{e_i}\) denotes the i-th unit vector.

2.1 Linear Lasso Programs

In this work, we consider linear lasso programs: programs that consist of a stem (straight-line code) followed by a single loop. We use binary relations over the program's states to define the stem and the loop transition relation. Variables are assumed to be real-valued.

We denote by \(\varvec{x}\) the vector of n variables \((x_1, \ldots , x_n)^T \in \mathbb {R}^n\) corresponding to program states, and by \(\varvec{x'} = (x_1', \ldots , x_n')^T \in \mathbb {R}^n\) the variables of the next state.

Definition 1

(Linear Lasso Program). A (conjunctive) linear lasso program \(L = ({\scriptstyle \mathrm {STEM}}, {\scriptstyle \mathrm {LOOP}})\) consists of two binary relations defined by formulas with the free variables \(\varvec{x}\) and \(\varvec{x'}\) of the form

$$ A \left( {\begin{matrix} {\varvec{x}} \\ {\varvec{x'}} \end{matrix}}\right) \le \varvec{b} $$

for some matrix \(A \in \mathbb {R}^{m \times 2n}\) and some vector \(\varvec{b} \in \mathbb {R}^m\).

A linear loop program is a linear lasso program L without stem, i.e., a linear lasso program such that the relation \({\scriptstyle \mathrm {STEM}}\) is equivalent to true.

Definition 2

(Deterministic Linear Loop Program). A linear loop program L is called deterministic iff its loop transition \({\scriptstyle \mathrm {LOOP}}\) can be written in the following form

$$ (\varvec{x}, \varvec{x}') \in {\scriptstyle \mathrm {LOOP}}\;\Longleftrightarrow \; G \varvec{x} \le \varvec{g} \;\wedge \; \varvec{x}' = M \varvec{x} + \varvec{m} $$

for some matrices \(G \in \mathbb {R}^{m \times n}\), \(M \in \mathbb {R}^{n \times n}\), and vectors \(\varvec{g} \in \mathbb {R}^m\) and \(\varvec{m} \in \mathbb {R}^n\).
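A deterministic loop in this form is a special case of Definition 1: the guard and both inequality directions of \(\varvec{x}' = M \varvec{x} + \varvec{m}\) can be stacked into a single system \(A \left( {\begin{matrix} {\varvec{x}} \\ {\varvec{x'}} \end{matrix}}\right) \le \varvec{b}\). A minimal sketch (numpy) of this translation:

    import numpy as np

    def loop_polyhedron(G, g, M, m):
        """Stack G x <= g, M x + m <= x', and x' <= M x + m into A (x, x') <= b."""
        n = M.shape[0]
        I, Z = np.eye(n), np.zeros_like(G)
        A = np.vstack([np.hstack([G, Z]),     # G x <= g
                       np.hstack([M, -I]),    # M x - x' <= -m
                       np.hstack([-M, I])])   # x' - M x <= m
        b = np.concatenate([g, -m, m])
        return A, b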

Definition 3

(Nontermination). A linear lasso program L is nonterminating iff there is an infinite sequence of states \(\varvec{x_0}, \varvec{x_1}, \ldots \), called an infinite execution of L, such that \((\varvec{x_0}, \varvec{x_1}) \in {\scriptstyle \mathrm {STEM}}\) and \((\varvec{x_t}, \varvec{x_{t+1}}) \in {\scriptstyle \mathrm {LOOP}}\) for all \(t \ge 1\).

2.2 Jordan Normal Form

Let \(M \in \mathbb {R}^{n \times n}\) be a real square matrix. If there is an invertible square matrix S and a diagonal matrix D such that \(M = SDS^{-1}\), then M is called diagonalizable. The column vectors of S form the basis over which M has diagonal form. In general, real matrices are not diagonalizable. However, every real square matrix M with real eigenvalues has a representation which is almost diagonal, called Jordan normal form. This is a matrix that is zero except for the eigenvalues on the diagonal and one superdiagonal containing ones and zeros.

Formally, a Jordan normal form is a matrix \(J = \mathrm {diag}(J_{i_1}(\lambda _1), \ldots , J_{i_k}(\lambda _k))\) where \(\lambda _1, \ldots , \lambda _k\) are the eigenvalues of M and the real square matrices \(J_i(\lambda ) \in \mathbb {R}^{i \times i}\) are Jordan blocks,

$$ J_i(\lambda ) := \left( \begin{matrix} \lambda &{} 1 &{} 0 &{} \ldots &{} 0 &{} 0 \\ 0 &{} \lambda &{} 1 &{} \ldots &{} 0 &{} 0\\ \vdots &{} &{} &{} \ddots &{} &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} \lambda &{} 1 \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 &{} \lambda \end{matrix} \right) . $$

The subspace corresponding to each distinct eigenvalue is called its generalized eigenspace, and its basis vectors are called generalized eigenvectors.

Theorem 4

(Jordan Normal Form). For each real square matrix \(M \in \mathbb {R}^{n \times n}\) with real eigenvalues, there is an invertible real square matrix \(V \in \mathbb {R}^{n \times n}\) and a Jordan normal form \(J \in \mathbb {R}^{n \times n}\) such that \(M = V J V^{-1}\).
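Computer algebra systems compute this decomposition directly; a small sketch assuming sympy:

    from sympy import Matrix

    M = Matrix([[4, 1],
                [-1, 2]])           # real eigenvalue 3 with algebraic multiplicity 2
    V, J = M.jordan_form()          # V and J with M = V J V^{-1}
    assert M == V * J * V.inv()
    print(J)                        # Matrix([[3, 1], [0, 3]]): one Jordan block J_2(3)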

3 Geometric Nontermination Arguments

Fix a conjunctive linear lasso program \(L = ({\scriptstyle \mathrm {STEM}}, {\scriptstyle \mathrm {LOOP}})\) and let \(A \in \mathbb {R}^{n \times m}\) and \(\varvec{b} \in \mathbb {R}^m\) define the loop transition such that

$$ (\varvec{x}, \varvec{x}') \in {\scriptstyle \mathrm {LOOP}}\;\Longleftrightarrow \; A \left( {\begin{matrix} {\varvec{x}} \\ {\varvec{x}'} \end{matrix}}\right) \le \varvec{b}. $$

Definition 5

(Geometric Nontermination Argument). A tuple (\(\varvec{x_0}, \varvec{x_1}\), \(\varvec{y_1}, \ldots , \varvec{y_s}, \lambda _1, \ldots , \lambda _s, \mu _1, \ldots , \mu _{s-1}\)) is called a geometric nontermination argument for the linear lasso program \(L = ({\scriptstyle \mathrm {STEM}}, {\scriptstyle \mathrm {LOOP}})\) iff all of the following statements hold.

(domain):

\(\varvec{x_0}, \varvec{x_1}, \varvec{y_1}, \ldots , \varvec{y_s} \in \mathbb {R}^n\), and \(\lambda _1, \ldots , \lambda _s, \mu _1, \ldots , \mu _{s-1} \ge 0\)

(init):

\((\varvec{x_0}, \varvec{x_1}) \in {\scriptstyle \mathrm {STEM}}\)

(point):

\(A \left( {\begin{matrix} {\varvec{x_1}} \\ {\varvec{x_1} + \sum _{k=1}^s \varvec{y_k}} \end{matrix}}\right) \le \varvec{b}\)

(ray):

\(A \left( {\begin{matrix} {\varvec{y_1}} \\ {\lambda _1 \varvec{y_1}} \end{matrix}}\right) \le 0\) and \(A \left( {\begin{matrix} {\varvec{y_k}} \\ {\lambda _k \varvec{y_k} + \mu _{k-1} \varvec{y_{k-1}}} \end{matrix}}\right) \le 0\) for each \(k\in \{2, \ldots , s\}\).

The number \(s \ge 0\) is the size of the geometric nontermination argument.

The existence of a geometric nontermination argument can be checked using an SMT solver. The constraints given by (domain), (init), (point), (ray) are nonlinear algebraic constraints and the satisfiability of these constraints is decidable.
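For illustration, the following sketch (Python, assuming the z3-solver package) states (domain), (point), and (ray) for a GNTA of size 2 for the loop of Example 14 in Sect. 4.2, assuming its update is a := 3*a; b := b + 1 with guard a >= b (an assumption consistent with the matrix M and the GNTA given there); (init) is vacuous since the stem is true. Note the products such as l1 * y1a: the constraint system is nonlinear.

    from z3 import Reals, Solver, And, sat

    x1a, x1b, y1a, y1b, y2a, y2b, l1, l2, mu1 = Reals(
        'x1a x1b y1a y1b y2a y2b l1 l2 mu1')

    def rows(a, b, ap, bp):
        # left-hand sides A (x, x') of the loop polyhedron, one entry per row:
        # guard a >= b, update a' = 3a and b' = b + 1 (two inequalities each)
        return [b - a, ap - 3 * a, 3 * a - ap, bp - b, b - bp]

    rhs = [0, 0, 0, 1, -1]                       # the vector b of the loop polyhedron
    s = Solver()
    s.add(l1 >= 0, l2 >= 0, mu1 >= 0)            # (domain)
    s.add(And(*[e <= r for e, r in zip(          # (point)
        rows(x1a, x1b, x1a + y1a + y2a, x1b + y1b + y2b), rhs)]))
    s.add(And(*[e <= 0 for e in                  # (ray) for y1
        rows(y1a, y1b, l1 * y1a, l1 * y1b)]))
    s.add(And(*[e <= 0 for e in                  # (ray) for y2
        rows(y2a, y2b, l2 * y2a + mu1 * y1a, l2 * y2b + mu1 * y1b)]))
    if s.check() == sat:
        print(s.model())  # one solution: x1=(9,1), y1=(12,0), y2=(6,1), l1=3, l2=1, mu1=1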

Proposition 6

(Soundness). If there is a geometric nontermination argument for a linear lasso program L, then L is nonterminating.

Proof

We define \(Y := (\varvec{y_1} \ldots \varvec{y_s})\) as the matrix containing the vectors \(\varvec{y_1}, \ldots , \varvec{y_s}\) as columns, and we define the following matrix.

$$\begin{aligned} U := \left( \begin{matrix} \lambda _1 &{} \mu _1 &{} 0 &{} \ldots &{} 0 &{} 0 \\ 0 &{} \lambda _2 &{} \mu _2 &{} \ldots &{} 0 &{} 0\\ \vdots &{} &{} &{} \ddots &{} &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} \lambda _{s-1} &{} \mu _{s-1} \\ 0 &{} 0 &{} 0 &{} \ldots &{} 0 &{} \lambda _s \end{matrix} \right) \end{aligned}$$
(1)

Following Definition 3 we show that the linear lasso program L has the infinite execution

$$\begin{aligned} \varvec{x_0},\quad \varvec{x_1},\quad \varvec{x_1} + Y \varvec{1},\quad \varvec{x_1} + Y \varvec{1} + Y U \varvec{1},\quad \varvec{x_1} + Y \varvec{1} + Y U \varvec{1} + Y U^2 \varvec{1},\quad \ldots \quad \end{aligned}$$
(2)

From (init) we get \((\varvec{x_0}, \varvec{x_1}) \in {\scriptstyle \mathrm {STEM}}\). It remains to show that

$$\begin{aligned} \left( \varvec{x_1} + \sum _{j=0}^{t-1} Y U^j \varvec{1},\; \varvec{x_1} + \sum _{j=0}^t Y U^j \varvec{1} \right) \in {\scriptstyle \mathrm {LOOP}}\text { for all } t \in \mathbb {N}. \end{aligned}$$
(3)

According to (domain) the matrix U has only nonnegative entries, so the same holds for the matrix \(Z := \sum _{j=0}^{t-1} U^j\). Hence \(Z \varvec{1}\) has only nonnegative entries and thus \(YZ\varvec{1}\) can be written as \(\sum _{k=1}^s \alpha _k \varvec{y_k}\) for some \(\alpha _k \ge 0\). We multiply inequality k from (ray) by \(\alpha _k\) and get

$$\begin{aligned} A \left( {\begin{matrix} { \alpha _k \varvec{y_k} } \\ { \alpha _k \lambda _k \varvec{y_k} + \alpha _k \mu _{k-1} \varvec{y_{k-1}} } \end{matrix}}\right) \le 0. \end{aligned}$$
(4)

where we define for convenience \(\varvec{y_0} := 0\) and \(\mu _0 := 0\). Now we sum (4) for all k and add (point) to get

$$\begin{aligned} A \left( {\begin{matrix} { \varvec{x_1} + \sum _{k=1}^s \alpha _k \varvec{y_k} } \\ { \varvec{x_1} + \sum _{k=1}^s \varvec{y_k} + \sum _{k=1}^s (\alpha _k \lambda _k \varvec{y_k} + \alpha _k \mu _{k-1} \varvec{y_{k-1}}) } \end{matrix}}\right) \le \varvec{b}. \end{aligned}$$
(5)

By definition of \(\alpha _k\), we have

$$ \varvec{x_1} + \sum _{k=1}^s \alpha _k \varvec{y_k} ~=~ \varvec{x_1} + Y Z \varvec{1} ~=~ \varvec{x_1} + \sum _{j=0}^{t-1} Y U^j \varvec{1} $$

and

$$\begin{aligned} \varvec{x_1} + \sum _{k=1}^s \varvec{y_k} + \sum _{k=1}^s (\alpha _k \lambda _k \varvec{y_k} + \alpha _k \mu _{k-1} \varvec{y_{k-1}})&= \varvec{x_1} + Y\varvec{1} + \sum _{k=1}^s \alpha _k YU e_k \\&= \varvec{x_1} + Y\varvec{1} + YU Z\varvec{1} \\&= \varvec{x_1} + \sum _{j=0}^t Y U^j \varvec{1}. \end{aligned}$$

Therefore (3) and (5) are the same, which concludes this proof.    \(\square \)

Proposition 7

(Closed Form of the Infinite Execution). Write \(U =: N + D\) where N is a nilpotent matrix and D is a diagonal matrix. For \(t\ge 2\), the state \(\varvec{x_t}\) in the infinite execution (2) has the closed form \(\varvec{x_t} = \varvec{x_1} + \sum _{j=0}^{t-2} Y U^j \varvec{1}\).

4 Completeness Results

First we show that a linear loop program has a GNTA if it has a bounded infinite execution. In the next section we use this to prove our completeness result.

4.1 Bounded Infinite Executions

Let \(|\cdot |: \mathbb {R}^n\rightarrow \mathbb {R}\) denote some norm. We call an infinite execution \((\varvec{x}_t)_{t \ge 0}\) bounded iff there is a real number \(d \in \mathbb {R}\) such that the norm of each state is bounded by d, i.e., \(|\varvec{x}_t|\le d\) for all t (in \(\mathbb {R}^n\) the notion of boundedness is independent of the choice of the norm).

Lemma 8

(Fixed Point). Let \(L = (true, {\scriptstyle \mathrm {LOOP}})\) be a linear loop program. The linear loop program L has a bounded infinite execution if and only if there is a fixed point \(\varvec{x}^*\in \mathbb {R}^n\) such that \((\varvec{x}^*, \varvec{x}^*) \in {\scriptstyle \mathrm {LOOP}}\).

Proof

If there is a fixed point \(\varvec{x}^*\), then the loop has the bounded infinite execution \(\varvec{x}^*, \varvec{x}^*, \ldots \). Conversely, let \((\varvec{x}_t)_{t \ge 0}\) be a bounded infinite execution. Boundedness implies that there is a \(d \in \mathbb {R}\) such that \(|\varvec{x}_t| \le d\) for all t. Consider the sequence \(\varvec{z}_k := \frac{1}{k} \sum _{t=1}^k \varvec{x}_t\).

$$\begin{aligned} | \varvec{z}_k - \varvec{z}_{k+1} |&= \left| \frac{1}{k} \sum _{t=1}^k \varvec{x}_t - \frac{1}{k+1} \sum _{t=1}^{k+1} \varvec{x}_t \right| = \frac{1}{k(k+1)} \left| (k+1) \sum _{t=1}^k \varvec{x}_t - k \sum _{t=1}^{k+1} \varvec{x}_t \right| \\&= \frac{1}{k(k+1)} \left| \sum _{t=1}^k \varvec{x}_t - k \varvec{x}_{k+1} \right| \le \frac{1}{k(k+1)} \left( \sum _{t=1}^k |\varvec{x}_t| + k |\varvec{x}_{k+1}| \right) \\&\le \frac{1}{k(k+1)} (k\cdot d + k\cdot d) = \frac{2d}{k+1} \longrightarrow 0 \text { as } k \rightarrow \infty . \end{aligned}$$

Hence the sequence \((\varvec{z}_k)_{k \ge 1}\) is a Cauchy sequence and thus converges to some \(\varvec{z}^*\in \mathbb {R}^n\). We will show that \(\varvec{z}^*\) is the desired fixed point.

For all t, the polyhedron \(Q := \{ \left( {\begin{matrix} {\varvec{x}} \\ {\varvec{x}'} \end{matrix}}\right) \mid A \left( {\begin{matrix} {\varvec{x}} \\ {\varvec{x}'} \end{matrix}}\right) \le b \}\) contains \( \left( {\begin{matrix} {\varvec{x}_t} \\ {\varvec{x}_{t+1}} \end{matrix}}\right) \) and is convex. Therefore for all \(k \ge 1\),

$$ \frac{1}{k} \sum _{t=1}^k \left( {\begin{matrix} {\varvec{x}_t} \\ {\varvec{x}_{t+1}} \end{matrix}}\right) \in Q. $$

Together with

$$ \left( {\begin{matrix} {\varvec{z}_k} \\ {\frac{k+1}{k} \varvec{z}_{k+1}} \end{matrix}}\right) = \frac{1}{k} \left( {\begin{matrix} {\varvec{0}} \\ {\varvec{x}_1} \end{matrix}}\right) + \frac{1}{k} \sum _{t=1}^k \left( {\begin{matrix} {\varvec{x}_t} \\ {\varvec{x}_{t+1}} \end{matrix}}\right) $$

we infer

$$ \left( \left( {\begin{matrix} {\varvec{z}_k} \\ {\frac{k+1}{k} \varvec{z}_{k+1}} \end{matrix}}\right) - \frac{1}{k} \left( {\begin{matrix} {\varvec{0}} \\ {\varvec{x}_1} \end{matrix}}\right) \right) \in Q, $$

and since Q is topologically closed we have

$$ \left( {\begin{matrix} {\varvec{z}^*} \\ {\varvec{z}^*} \end{matrix}}\right) = \lim _{k \rightarrow \infty } \left( \left( {\begin{matrix} {\varvec{z}_k} \\ {\frac{k+1}{k} \varvec{z}_{k+1}} \end{matrix}}\right) - \frac{1}{k} \left( {\begin{matrix} {\varvec{0}} \\ {\varvec{x}_1} \end{matrix}}\right) \right) \in Q. $$

   \(\square \)
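To illustrate the averaging construction used in this proof, consider a hypothetical linear loop with guard true and swap update (a, b) := (b, a); its execution (1, 2), (2, 1), (1, 2), ... is bounded, and the averages \(\varvec{z}_k\) converge to the fixed point (1.5, 1.5):

    import numpy as np

    M = np.array([[0.0, 1.0], [1.0, 0.0]])   # swap update; guard: true
    x = np.array([1.0, 2.0])
    total = np.zeros(2)
    for k in range(1, 1001):
        total += x
        x = M @ x
    z = total / 1000
    print(z, M @ z)                          # ~[1.5 1.5] both times: a fixed point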

Note that Lemma 8 does not transfer to lasso programs: there might only be one fixed point and the stem might exclude this point (e.g., \(a = -0.5\) and \(b = 3.5\) in example Fig. 1a).

Because fixed points give rise to trivial geometric nontermination arguments, we can derive a criterion for the existence of geometric nontermination arguments from Lemma 8.

Corollary 9

(Bounded Infinite Executions). If the linear loop program \(L = (true, {\scriptstyle \mathrm {LOOP}})\) has a bounded infinite execution, then it has a geometric nontermination argument of size 0.

Proof

By Lemma 8 there is a fixed point \(\varvec{x}^*\) such that \((\varvec{x}^*, \varvec{x}^*) \in {\scriptstyle \mathrm {LOOP}}\). We choose \(\varvec{x_0} = \varvec{x_1} = \varvec{x}^*\); then (init) holds trivially since \({\scriptstyle \mathrm {STEM}}\) is true, (point) holds because \((\varvec{x}^*, \varvec{x}^*) \in {\scriptstyle \mathrm {LOOP}}\), and (ray) is vacuous for size 0, so this is a geometric nontermination argument for L.    \(\square \)

Example 10

Note that according to our definition of a linear lasso program, the relation \({\scriptstyle \mathrm {LOOP}}\) is a topologically closed set. If we allowed the formula defining \({\scriptstyle \mathrm {LOOP}}\) to also contain strict inequalities, Lemma 8 no longer holds: the following program is nonterminating and has a bounded infinite execution, but it does not have a fixed point. However, the topological closure of the relation \({\scriptstyle \mathrm {LOOP}}\) contains the fixed point \(a = 0\).

    while (a > 0):
        a := 0.5 * a;

Nevertheless, this example has a geometric nontermination argument, namely \(\varvec{x_1} = 1\), \(\varvec{y_1} = -0.5\), \(\lambda _1 = 0.5\).    \(\Diamond \)
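As a quick numeric check, this argument indeed generates the bounded execution 1, 1/2, 1/4, ...:

    x1, y1, lam = 1.0, -0.5, 0.5
    state = x1
    for t in range(5):
        print(state)             # 1.0, 0.5, 0.25, 0.125, 0.0625 -> 0, never reached
        state += lam ** t * y1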

4.2 Nonnegative Eigenvalues

This section is dedicated to the proof of the following completeness result for deterministic linear loop programs.

Theorem 11

(Completeness). If a deterministic linear loop program L with n variables (Definition 2) is nonterminating and its update matrix M has only nonnegative real eigenvalues, then there is a geometric nontermination argument for L of size at most n.

To prove this completeness theorem, we need to construct a GNTA from a given infinite execution. The following lemma shows that we can restrict our construction to exclude all linear subspaces that have a bounded execution.

Lemma 12

(Loop Disassembly). Let \(L = (true, {\scriptstyle \mathrm {LOOP}})\) be a linear loop program over \(\mathbb {R}^n = \mathcal {U} \oplus \mathcal {V}\) where \(\mathcal {U}\) and \(\mathcal {V}\) are linear subspaces of \(\mathbb {R}^n\). Suppose L is nonterminating and there is an infinite execution that is bounded when projected to the subspace \(\mathcal {U}\). Let \(\varvec{x}^\mathcal {U}\) be the fixed point in \(\mathcal {U}\) that exists according to Lemma 8. Then the linear loop program \(L^{\mathcal {V}}\) that we get by projecting to the subspace \(\mathcal {V} + \varvec{x}^\mathcal {U}\) is nonterminating. Moreover, if \(L^{\mathcal {V}}\) has a GNTA of size s, then L has a GNTA of size s.

Proof

Without loss of generality, we are in the basis of \(\mathcal {U}\) and \(\mathcal {V}\) so that these spaces are nicely separated by the use of different variables. Using the infinite execution of L that is bounded on \(\mathcal {U}\) we can do the construction from the proof of Lemma 8 to get an infinite execution \(\varvec{z_0}, \varvec{z_1}, \ldots \) that yields the fixed point \(\varvec{x}^\mathcal {U}\) when projected to \(\mathcal {U}\). We fix \(\varvec{x}^\mathcal {U}\) in the loop transition by replacing all variables from \(\mathcal {U}\) with the values from \(\varvec{x}^\mathcal {U}\) and get the linear loop program \(L^{\mathcal {V}}\) (this is the projection to \(\mathcal {V} + \varvec{x}^\mathcal {U}\)). Importantly, the projection of \(\varvec{z_0}, \varvec{z_1}, \ldots \) to \(\mathcal {V} + \varvec{x}^{\mathcal {U}}\) is still an infinite execution, hence the loop \(L^{\mathcal {V}}\) is nonterminating. Given a GNTA for \(L^{\mathcal {V}}\) we can construct a GNTA for L by adding the vector \(\varvec{x}^\mathcal {U}\) to \(\varvec{x_0}\) and \(\varvec{x_1}\).    \(\square \)

Proof

(of Theorem 11). The polyhedron corresponding to the loop transition of the deterministic linear loop program L is

$$\begin{aligned} \left\{ \left( {\begin{matrix} {\varvec{x}} \\ {\varvec{x}'} \end{matrix}}\right) \;\Big |\; G\varvec{x} \le \varvec{g} \;\wedge \; \varvec{x}' = M\varvec{x} + \varvec{m} \right\} . \end{aligned}$$
(6)

Define \(\mathcal {Y}\) to be the convex cone spanned by the rays of the guard polyhedron:

$$ \mathcal {Y} := \{ \varvec{y} \in \mathbb {R}^n \mid G\varvec{y} \le 0 \} $$

Let \(\overline{\mathcal {Y}}\) be the smallest linear subspace of \(\mathbb {R}^n\) that contains \(\mathcal {Y}\), i.e., \(\overline{\mathcal {Y}} = \mathcal {Y} - \mathcal {Y}\) using pointwise subtraction, and let \(\overline{\mathcal {Y}}^\bot \) be the linear subspace of \(\mathbb {R}^n\) orthogonal to \(\overline{\mathcal {Y}}\); hence \(\mathbb {R}^n = \overline{\mathcal {Y}} \oplus \overline{\mathcal {Y}}^\bot \).

Let \(P := \{ \varvec{x} \in \mathbb {R}^n \mid G\varvec{x} \le \varvec{g} \}\) denote the guard polyhedron. Its projection \(P^{\overline{\mathcal {Y}}^\bot }\) to the subspace \(\overline{\mathcal {Y}}^\bot \) is again a polyhedron. By the decomposition theorem for polyhedra [36, Corollary 7.1b], \(P^{\overline{\mathcal {Y}}^\bot } = Q + C\) for some polytope Q and some convex cone C. However, by definition of the subspace \(\overline{\mathcal {Y}}^\bot \), the convex cone C must be equal to \(\{ \varvec{0} \}\): for any \(\varvec{y} \in C \subseteq \overline{\mathcal {Y}}^\bot \), we have \(G\varvec{y} \le \varvec{0}\), thus \(\varvec{y} \in \mathcal {Y}\), and therefore \(\varvec{y}\) is orthogonal to itself, i.e., \(\varvec{y} = \varvec{0}\). We conclude that \(P^{\overline{\mathcal {Y}}^\bot }\) must be a polytope, and thus it is bounded. By assumption L is nonterminating, so \(L^{\overline{\mathcal {Y}}^\bot }\) is nonterminating, and since \(P^{\overline{\mathcal {Y}}^\bot }\) is bounded, any infinite execution of \(L^{\overline{\mathcal {Y}}^\bot }\) must be bounded.

Let \(\mathcal {U}\) denote the direct sum of the generalized eigenspaces for the eigenvalues \(0 \le \lambda < 1\). Any infinite execution is necessarily bounded on the subspace \(\mathcal {U}\) since on this space the map \(\varvec{x} \mapsto M\varvec{x} + \varvec{m}\) is a contraction. Let \(\mathcal {U}^\bot \) denote the subspace of \(\mathbb {R}^n\) orthogonal to \(\mathcal {U}\). The space \(\overline{\mathcal {Y}} \cap \mathcal {U}^\bot \) is a linear subspace of \(\mathbb {R}^n\) and any infinite execution in its complement is bounded. Hence we can turn our analysis to the subspace \(\overline{\mathcal {Y}} \cap \mathcal {U}^\bot + \varvec{x}\) for some \(\varvec{x} \in \overline{\mathcal {Y}}^\bot \oplus \mathcal {U}\) for the rest of the proof according to Lemma 12. From now on, we implicitly assume that we are in this space without changing any of the notation.

Part 1. In this part we show that there is a basis \(\varvec{y_1}, \ldots , \varvec{y_s} \in \mathcal {Y}\) such that M turns into a matrix U of the form given in (1) with \(\lambda _1, \ldots , \lambda _s, \mu _1, \ldots , \mu _{s-1} \ge 0\). Since we allow \(\mu _k\) to be positive between different eigenvalues (Example 14 illustrates why), this is not necessarily a Jordan normal form and the vectors \(\varvec{y_i}\) are not necessarily generalized eigenvectors.

We choose a basis \(\varvec{v_1}, \ldots , \varvec{v_s}\) such that M is in Jordan normal form with the eigenvalues ordered by size such that the largest eigenvalues come first. Define \(\mathcal {V}_1 := \overline{\mathcal {Y}} \cap \mathcal {U}^\bot \) and let \(\mathcal {V}_1 \supset \ldots \supset \mathcal {V}_s\) be a strictly descending chain of linear subspaces where \(\mathcal {V}_i\) is spanned by \(\varvec{v_i}, \ldots , \varvec{v_s}\).

We define a basis \(\varvec{w_1}, \ldots , \varvec{w_s}\) by doing the following for each Jordan block of M, starting with \(k = 1\). Let \(M^{(k)}\) be the projection of M to the linear subspace \(\mathcal {V}_k\) and let \(\lambda \) be the largest eigenvalue of \(M^{(k)}\). The m-th power of a Jordan block \(J_\ell (\lambda )\) for \(m \ge \ell \) is given by

$$\begin{aligned} J_\ell (\lambda )^m = \left( \begin{matrix} \lambda ^m &{} \genfrac(){0.0pt}1{m}{1}\lambda ^{m-1} &{} \dots &{} \genfrac(){0.0pt}1{m}{\ell -1}\lambda ^{m-\ell +1} \\ &{} \lambda ^m &{} \dots &{} \genfrac(){0.0pt}1{m}{\ell -2}\lambda ^{m-\ell +2} \\ &{} &{} \ddots &{} \vdots \\ 0 &{} &{} &{} \lambda ^m \end{matrix} \right) \in \mathbb {R}^{\ell \times \ell }. \end{aligned}$$
(7)
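A quick numeric check of this closed form (numpy, for \(\ell = 3\), \(\lambda = 2\), \(m = 5\)):

    from math import comb
    import numpy as np

    ell, lam, m = 3, 2.0, 5
    J = np.diag([lam] * ell) + np.diag([1.0] * (ell - 1), k=1)     # J_3(2)
    P = np.linalg.matrix_power(J, m)
    F = np.array([[comb(m, j - i) * lam ** (m - (j - i)) if j >= i else 0.0
                   for j in range(ell)] for i in range(ell)])      # entries of (7)
    assert np.allclose(P, F)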

Let \(\varvec{z_0}, \varvec{z_1}, \varvec{z_2}, \ldots \) be an infinite execution of the loop L in the basis \(\varvec{v_k}, \ldots , \varvec{v_s}\) projected to the space \(\mathcal {V}_k\). Since by Lemma 12 we can assume that there are no fixed points on this space, \(|\varvec{z_t}| \rightarrow \infty \) as \(t \rightarrow \infty \) in each of the top \(\ell \) components. Asymptotically, the largest eigenvalue \(\lambda \) dominates, and in each row of \(J_\ell (\lambda )^m\) in (7) the entry \(\genfrac(){0.0pt}1{m}{j}\lambda ^{m-j}\) in the rightmost column grows the fastest, with an asymptotic rate of \(\varTheta (m^j \lambda ^m)\). Therefore the sign of the component corresponding to the basis vector \(\varvec{v_{k+\ell -1}}\) determines whether the top \(\ell \) entries tend to \(+\infty \) or \(-\infty \); in particular, the top \(\ell \) entries of \(\varvec{z_t}\) corresponding to the top Jordan block will all eventually have the same sign. Because no state can violate the guard, the guard cannot constrain the infinite execution in the direction of \(\varvec{v_j}\) or \(-\varvec{v_j}\), i.e., \(G^{\mathcal {V}_k} \varvec{v_j} \le \varvec{0}\) for each \(j\in \{k, \ldots , k+\ell -1\}\) or \(G^{\mathcal {V}_k} \varvec{v_j} \ge \varvec{0}\) for each \(j\in \{k, \ldots , k+\ell -1\}\), where \(G^{\mathcal {V}_k}\) is the projection of G to the subspace \(\mathcal {V}_k\). So without loss of generality the former holds (otherwise we use \(-\varvec{v_j}\) instead of \(\varvec{v_j}\) for \(j\in \{k, \ldots , k+\ell -1\}\)) and for \(j\in \{k, \ldots , k+\ell -1\}\) we get \(\varvec{v_j} \in \mathcal {Y} + \mathcal {V}_k^\bot \) where \(\mathcal {V}_k^\bot \) is the space spanned by \(\varvec{v_1}, \ldots , \varvec{v_{k-1}}\). Hence there is a \(\varvec{u_j} \in \mathcal {V}_k^\bot \) such that \(\varvec{w_j} := \varvec{v_j} + \varvec{u_j}\) is an element of \(\mathcal {Y}\). Now we move on to the subspace \(\mathcal {V}_{k+\ell }\), discarding the top Jordan block.

Let T be the matrix M written in the basis \(\varvec{w_1}, \ldots , \varvec{w_s}\). Then T is of upper triangular form: whenever we apply \(M\varvec{w_k}\) we get \(\lambda _k \varvec{w_k} + \varvec{u_k}\) (\(\varvec{w_k}\) was an eigenvector in the space \(\mathcal {V}_k\)) where \(\varvec{u_k} \in \mathcal {V}_k^\bot \), the space spanned by \(\varvec{v_1}, \ldots , \varvec{v_{k-1}}\) (which is identical to the space spanned by \(\varvec{w_1}, \ldots , \varvec{w_{k-1}}\)). Moreover, since we processed every Jordan block entirely, we have for \(\varvec{w_k}\) and \(\varvec{w_j}\) from the same generalized eigenspace (\(T_{k,k} = T_{j,j}\)) and \(k > j\) that

$$\begin{aligned} T_{j,k} \in \{ 0, 1 \} \text { and } T_{j,k} = 1 \text { implies } k = j+1. \end{aligned}$$
(8)

In other words, when projected to any generalized eigenspace T consists only of Jordan blocks.

Now we change basis again in order to get the upper triangular matrix U defined in (1) from T. For this we define the vectors

$$ \varvec{y_k} := \varvec{\beta }_k \sum _{j=1}^k \alpha _{k,j} \varvec{w_j} $$

with nonnegative real numbers \(\alpha _{k,j} \ge 0\), \(\alpha _{k,k} > 0\), and \(\varvec{\beta }> 0\) to be determined later. Define the matrices \(W := (\varvec{w_1} \ldots \varvec{w_s})\), \(Y := (\varvec{y_1} \ldots \varvec{y_s})\), and \(\alpha := (\alpha _{k,j})_{1 \le j \le k \le s}\). So \(\alpha \) is a nonnegative lower triangular matrix with a positive diagonal and hence invertible. Since \(\alpha \) and W are invertible, the matrix \(Y = \mathrm {diag}(\varvec{\beta }) \alpha W\) is invertible as well and thus the vectors \(\varvec{y_1}, \ldots , \varvec{y_s}\) form a basis. Moreover, we have \(\varvec{y_k} \in \mathcal {Y}\) for each k since \(\alpha \ge 0\), \(\varvec{\beta }> 0\), and \(\mathcal {Y}\) is a convex cone. Therefore we get

$$\begin{aligned} GY \le 0. \end{aligned}$$
(9)

We will first choose \(\alpha \). Define \(T =: D + N\) where \(D = \mathrm {diag}(\lambda _1, \ldots , \lambda _s)\) is a diagonal matrix and N is nilpotent. Since \(\varvec{w_1}\) is an eigenvector of M we have \(M\varvec{y_1} = M \varvec{\beta }_1 \alpha _{1,1} \varvec{w_1} = \lambda _1 \varvec{\beta }_1 \alpha _{1,1} \varvec{w_1} = \lambda _1 \varvec{y_1}\). To get the form in (1), we need for all \(k > 1\)

$$\begin{aligned} M\varvec{y_k} = \lambda _k \varvec{y_k} + \mu _{k-1} \varvec{y_{k-1}}. \end{aligned}$$
(10)

Written in the basis \(\varvec{w_1}, \ldots , \varvec{w_s}\) (i.e., multiplied with \(W^{-1}\)),

$$ (D + N) \varvec{\beta }_k \sum _{j \le k} \alpha _{k,j} \varvec{e_j} = \lambda _k \varvec{\beta }_k \sum _{j \le k} \alpha _{k,j} \varvec{e_j} + \mu _{k-1} \varvec{\beta }_{k-1} \sum _{j < k} \alpha _{k-1,j} \varvec{e_j}. $$

Hence we want to pick \(\alpha \) such that

$$\begin{aligned} \sum _{j \le k} \alpha _{k,j} (\lambda _j - \lambda _k) \varvec{e_j} + N \sum _{j \le k} \alpha _{k,j} \varvec{e_j} - \mu _{k-1} \varvec{\beta }_{k-1} \sum _{j < k} \alpha _{k-1,j} \varvec{e_j} = \varvec{0}. \end{aligned}$$
(11)

First note that these constraints are independent of \(\varvec{\beta }\) if we set \(\mu _{k-1} := \varvec{\beta }_{k-1}^{-1} > 0\), so we can leave assigning a value to \(\varvec{\beta }\) to a later part of the proof.

We distinguish two cases. First, if \(\lambda _{k-1} \ne \lambda _k\), then \(\lambda _j - \lambda _k\) is positive for all \(j < k\) because larger eigenvalues come first. Since N is nilpotent and upper triangular, \(N \sum _{j \le k} \alpha _{k,j} \varvec{e_j}\) is a linear combination of \(\varvec{e_1}, \ldots , \varvec{e_{k-1}}\) (i.e., only the first \(k-1\) entries are nonzero). Whatever values this vector assumes, we can increase the parameters \(\alpha _{k,j}\) for \(j < k\) to make (11) larger and increase the parameters \(\alpha _{k-1,j}\) for \(j < k\) to make (11) smaller.

Second, let \(\ell \) be minimal such that \(\lambda _\ell = \lambda _k\) with \(\ell \ne k\); then \(\varvec{w_\ell }, \ldots , \varvec{w_k}\) are from the same generalized eigenspace. For the rows \(1, \ldots , \ell -1\) we can proceed as we did in the first case, and for the rows \(\ell , \ldots , k-1\) we note that by (8) \(N \varvec{e_j} = T_{j-1,j} \varvec{e_{j-1}}\). Hence the remaining constraints (11) are

$$ \sum _{\ell< j \le k} \alpha _{k,j} T_{j-1,j} \varvec{e_{j-1}} - \mu _{k-1} \sum _{\ell \le j < k} \alpha _{k-1,j} \varvec{e_j} = \varvec{0}, $$

which is solved by \(\alpha _{k,j+1} T_{j,j+1} = \alpha _{k-1,j}\) for \(\ell \le j < k\). This is only a problem if there is a j such that \(T_{j-1,j} = 0\), i.e., if there are multiple Jordan blocks for the same eigenvalue. In this case, we can reduce the dimension of the generalized eigenspace to the dimension of the largest Jordan block by combining all Jordan blocks: if \(M\varvec{y_k} = \lambda \varvec{y_k} + \varvec{y_{k-1}}\), and \(M\varvec{y_j} = \lambda \varvec{y_j} + \varvec{y_{j-1}}\), then \(M(\varvec{y_k} + \varvec{y_j}) = \lambda (\varvec{y_k} + \varvec{y_j}) + (\varvec{y_{k-1}} + \varvec{y_{j-1}})\) and if \(M\varvec{y_k} = \lambda \varvec{y_k} + \varvec{y_{k-1}}\), and \(M\varvec{y_j} = \lambda \varvec{y_j}\), then \(M(\varvec{y_k} + \varvec{y_j}) = \lambda (\varvec{y_k} + \varvec{y_j}) + \varvec{y_{k-1}}\). In both cases we can replace the basis vector \(\varvec{y_k}\) with \(\varvec{y_k} + \varvec{y_j}\) without reducing the expressiveness of the GNTA.

Importantly, there are no cyclic dependencies among the values of \(\alpha \), because the coefficients \(\alpha \) only ever need to be increased and none of them can become too large. Therefore we can choose \(\alpha \ge 0\) such that (10) is satisfied for all \(k > 1\), and hence the basis \(\varvec{y_1}, \ldots , \varvec{y_s}\) brings M into the desired form (1).

Part 2. In this part we construct the geometric nontermination argument and check the constraints from Definition 5. Since L has an infinite execution, there is a point \(\varvec{x}\) that fulfills the guard, i.e., \(G\varvec{x} \le \varvec{g}\). We choose \(\varvec{x_1} := \varvec{x} + Y\varvec{\gamma }\) with \(\varvec{\gamma }\ge \varvec{0}\) to be determined later. Moreover, we choose \(\lambda _1, \ldots , \lambda _s\) and \(\mu _1, \ldots , \mu _{s-1}\) from the entries of U given in (1). The size of our GNTA is s, the number of vectors \(\varvec{y_1}, \ldots , \varvec{y_s}\). These vectors form a basis of \(\overline{\mathcal {Y}} \cap \mathcal {U}^\bot \), which is a subspace of \(\mathbb {R}^n\); thus \(s \le n\), as required.

The constraint (domain) is satisfied by construction and the constraint (init) is vacuous since L is a loop program. For (ray) note that from (9) and (10) we get

$$ A \left( {\begin{matrix} {\varvec{y_k}} \\ {\lambda _k \varvec{y_k} + \mu _{k-1} \varvec{y_{k-1}}} \end{matrix}}\right) \le \varvec{0} \text { for each } k \in \{1, \ldots , s\}, $$

again with the convention \(\varvec{y_0} := \varvec{0}\) and \(\mu _0 := 0\).

The remainder of this proof shows that we can choose \(\varvec{\beta }\) and \(\varvec{\gamma }\) such that (point) is satisfied, i.e., that

$$\begin{aligned} G\varvec{x_1} \le \varvec{g} \text { and } M \varvec{x_1} + \varvec{m} = \varvec{x_1} + Y\varvec{1}. \end{aligned}$$
(12)

The vector \(\varvec{x_1}\) satisfies the guard since \(G\varvec{x_1} = G\varvec{x} + G Y \varvec{\gamma }\le \varvec{g} + \varvec{0}\) according to (9), which yields the first part of (12). For the second part we observe the following.

$$\begin{aligned} M \varvec{x_1} + \varvec{m} = \varvec{x_1} + Y\varvec{1} \;\Longleftrightarrow \; (U - I)(\varvec{\tilde{x}} + \varvec{\gamma }) + \varvec{\tilde{m}} = \varvec{1} \end{aligned}$$
(13)

with \(\varvec{\tilde{x}} := Y^{-1}\varvec{x} = W^{-1} \alpha ^{-1} \mathrm {diag}(\varvec{\beta })^{-1} \varvec{x}\) and \(\varvec{\tilde{m}} := Y^{-1} \varvec{m} = W^{-1} \alpha ^{-1} \mathrm {diag}(\varvec{\beta })^{-1} \varvec{m}\). Equation (13) is now conveniently in the basis \(\varvec{y_1}, \ldots , \varvec{y_s}\) and all that remains to show is that we can choose \(\varvec{\gamma }\ge \varvec{0}\) and \(\varvec{\beta }> 0\) such that (13) is satisfied.

We proceed for each (not quite Jordan) block of U separately, i.e., we assume that we are looking at the subspace \(\varvec{y_j}, \ldots , \varvec{y_k}\) with \(\mu _k = \mu _{j-1} = 0\) and \(\mu _\ell > 0\) for all \(\ell \in \{j,\ldots ,k-1\}\). If this space only contains eigenvalues that are larger than 1, then \(U - I\) is invertible and has only nonnegative entries. By using large enough values for \(\varvec{\beta }\), we can make \(\varvec{\tilde{x}}\) and \(\varvec{\tilde{m}}\) small enough, such that \(\varvec{1} \ge (U - I)\varvec{\tilde{x}} + \varvec{\tilde{m}}\). Then we just need to pick \(\varvec{\gamma }\) appropriately.

If there is at least one eigenvalue 1, then \(U - I\) is not invertible, so (13) could be overconstrained. Notice that \(\mu _\ell > 0\) for all \(\ell \in \{j,\ldots ,k-1\}\), so only the bottom entry in the vector equation (13) is not covered by \(\varvec{\gamma }\). Moreover, since eigenvalues are ordered in decreasing order and all eigenvalues in our current subspace are \(\ge 1\), we conclude that the eigenvalue for the bottom entry is 1. (Furthermore, k is the highest index, since each eigenvalue occurs only in one block.) Thus we get the equation \(\varvec{\tilde{m}}_k = 1\). If \(\varvec{\tilde{m}}_k\) is positive, this equation has a solution since we can adjust \(\varvec{\beta }_k\) accordingly. If it is zero, then the execution on the space spanned by \(\varvec{y_k}\) is bounded, which we can rule out by Lemma 12.

It remains to rule out that \(\varvec{\tilde{m}}_k\) is negative. Let \(\mathcal {U}\) be the generalized eigenspace to the eigenvalue 1 and use Lemma 13 below to conclude that \(\varvec{o} := N^{s-1}\varvec{m} + \varvec{u} \in \mathcal {Y}\) for some \(\varvec{u} \in \mathcal {U}^\bot \). We have that \(M\varvec{o} = M(N^{s-1}\varvec{m} + \varvec{u}) = M\varvec{u} \in \mathcal {U}^\bot \), so \(\varvec{o}\) is a candidate to pick for the vector \(\varvec{w_k}\). Therefore without loss of generality we did so in part 1 of this proof, and since \(\varvec{y_k}\) is in the convex cone spanned by the basis \(\varvec{w_1}, \ldots , \varvec{w_s}\) we get \(\varvec{\tilde{m}}_k > 0\).    \(\square \)

Lemma 13

(Deterministic Loops with Eigenvalue 1). Let \(M = I + N\) and let N be nilpotent with nilpotence index k (\(k := \min \{ i \mid N^i = 0 \}\)). If \(GN^{k-1} \varvec{m} \not \le \varvec{0}\), then L is terminating.

Proof

We show termination by providing a k-nested ranking function [28, Definition 4.7]. By [28, Lemma 3.3] and [28, Theorem 4.10], this implies that L is terminating.

According to the premise, \(G N^{k-1} \varvec{m} \not \le 0\), hence there is at least one positive entry in the vector \(G N^{k-1} \varvec{m}\). Let \(\varvec{h}\) be a row vector of G such that \(\varvec{h}^T N^{k-1} \varvec{m} =: \delta > 0\), and let \(h_0 \in \mathbb {R}\) be the corresponding entry in \(\varvec{g}\). Let \(\varvec{x}\) be any state and let \(\varvec{x'}\) be a next state after the loop transition, i.e., \(\varvec{x'} = M \varvec{x} + \varvec{m}\). Define the affine-linear functions \(f_j(\varvec{x}) := -\varvec{h}^T N^{k-j} \varvec{x} + c_j\) for \(1 \le j \le k\) with constants \(c_j \in \mathbb {R}\) to be determined later. Since every state \(\varvec{x}\) satisfies the guard we have \(\varvec{h}^T\varvec{x} \le h_0\), hence \(f_k(\varvec{x}) = -\varvec{h}^T \varvec{x} + c_k \ge - h_0 + c_k > 0\) for \(c_k := h_0 + 1\).

$$\begin{aligned} f_1(\varvec{x'}) = f_1(\varvec{x} + N\varvec{x} + \varvec{m})&= -\varvec{h}^T N^{k-1} (\varvec{x} + N \varvec{x} + \varvec{m}) + c_1 \\&= f_1(\varvec{x}) - \varvec{h}^T N^k \varvec{x} - \varvec{h}^T N^{k-1} \varvec{m} \\&= f_1(\varvec{x}) - 0 - \delta < f_1(\varvec{x}) \end{aligned}$$

since \(N^k = 0\).

For \(1 < j \le k\),

$$\begin{aligned} f_j(\varvec{x'}) = f_j (\varvec{x} + N \varvec{x} + \varvec{m})&= -\varvec{h}^T N^{k-j} (\varvec{x} + N \varvec{x} + \varvec{m}) + c_j \\&= f_j(\varvec{x}) + f_{j-1}(\varvec{x}) - \varvec{h}^T N^{k-j} \varvec{m} - c_{j-1} \\&< f_j(\varvec{x}) + f_{j-1}(\varvec{x}) \end{aligned}$$

for \(c_{j-1} := -\varvec{h}^T N^{k-j} \varvec{m} + 1\).    \(\square \)

Example 14

(U is not in Jordan Form). The matrix U defined in (1) and used in the completeness proof is generally not the Jordan normal form of the loop’s transition matrix M. Consider the following linear loop program.

    while (a >= b):
        a := 3 * a;
        b := b + 1;

This program is nonterminating because a grows exponentially and hence faster than b. It has the geometric nontermination argument

$$\begin{aligned} \varvec{x_0}&= \left( {\begin{matrix} {9} \\ {1} \end{matrix}}\right) ,&\varvec{x_1}&= \left( {\begin{matrix} {9} \\ {1} \end{matrix}}\right) ,&\varvec{y_1}&= \left( {\begin{matrix} {12} \\ {0} \end{matrix}}\right) ,&\varvec{y_2}&= \left( {\begin{matrix} {6} \\ {1} \end{matrix}}\right) ,&\lambda _1&= 3,&\lambda _2&= 1,&\mu _1&= 1. \end{aligned}$$

The matrix corresponding to the linear loop update is

$$ M = \left( \begin{matrix} 3 &{} 0 \\ 0 &{} 1 \end{matrix} \right) $$

which is diagonal (hence diagonalizable). Therefore M is already in Jordan normal form. The matrix U defined according to (1) is

$$ U = \left( \begin{matrix} 3 &{} 1 \\ 0 &{} 1 \end{matrix} \right) . $$

The nilpotent component \(\mu _1 = 1\) is important and there is no GNTA for this loop program where \(\mu _1 = 0\), since the eigenspace to the eigenvalue 1 is spanned by \((0\; 1)^T\), which is in \(\overline{\mathcal {Y}}\), but not in \(\mathcal {Y}\).    \(\Diamond \)
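A two-line numeric check (numpy) that the basis \(Y = (\varvec{y_1}\; \varvec{y_2})\) above indeed brings M into the form U, i.e., that \(MY = YU\):

    import numpy as np

    M = np.array([[3.0, 0.0], [0.0, 1.0]])
    Y = np.array([[12.0, 6.0], [0.0, 1.0]])   # columns y1, y2 from the GNTA above
    U = np.array([[3.0, 1.0], [0.0, 1.0]])
    assert np.allclose(M @ Y, Y @ U)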

5 Experiments

We implemented our method in the tool Ultimate LassoRanker, which is specialized for the analysis of lasso programs. LassoRanker is used by Ultimate Büchi Automizer [22], which analyzes termination of (general) C programs. Büchi Automizer iteratively picks lasso-shaped paths in the control flow graph, converts them to lasso programs, and lets LassoRanker analyze them. If LassoRanker is able to prove nontermination, a real counterexample to termination has been found; if LassoRanker is able to provide a termination argument (e.g., a linear ranking function), Büchi Automizer continues the analysis, but only on lasso-shaped paths for which the termination arguments obtained in former iterations are not applicable.

We applied Büchi Automizer to the 803 C programs from the Termination Competition 2017. Our constraints for the existence of a geometric nontermination argument (GNTA) were stated over the integers, and we used the SMT solver Z3 [23] with a timeout of 12 s to solve these constraints. The overall timeout for the termination analysis was 60 s. In our implementation, LassoRanker first tries to find a fixpoint for a lasso; only if no fixpoint exists does it try to find a GNTA that can also represent an unbounded execution. The tool was able to identify 143 nonterminating programs. For 82 of these a fixpoint was detected. For the other 61 programs the counterexample was an unbounded execution without a fixpoint.

This experiment demonstrates that despite the nonlinear integer constraints the synthesis of GNTAs is feasible in practice, and that GNTAs which can also represent unbounded executions improve Büchi Automizer significantly.

6 Related Work

One line of related work is focused on decidability questions for deterministic lasso programs. Tiwari [38] considered linear loop programs over the reals where only strict inequalities are used in the guard and proved that termination is decidable. Braverman [5] generalized this result to loop programs that use strict and non-strict inequalities in the guard. Furthermore, he proved that termination is also decidable for homogeneous deterministic loop programs over the integers. Rebiha et al. [35] generalized the result to integer loops where the update matrix has only real eigenvalues. Ouaknine et al. [30] generalized the result to integer lassos where the update matrix of the loop is diagonalizable.

Another line of related work is also applicable to nondeterministic programs and uses a constraint-based synthesis of recurrence sets. The recurrence sets are defined by templates [20, 39], or the constraint is given in a second-order theory for bit vectors [17]. These approaches can be used to find nonterminating lassos that do not have a geometric nontermination argument; however, this comes at the price that for nondeterministic programs an \(\exists \forall \exists \)-constraint has to be solved.

Furthermore, there is a long line of research [2, 3, 8,9,10, 12, 17, 26, 27] that addresses programs that are more general than lasso programs.

7 Conclusion

We presented a new approach to nontermination analysis for (nondeterministic) linear lasso programs. This approach is based on geometric nontermination arguments, which are an explicit representation of an infinite execution. Unlike, e.g., a recurrence set, which encodes a set of nonterminating executions, our nontermination proof lets a user immediately see whether it encodes a fixpoint or a diverging unbounded execution. Our nontermination arguments can be found by solving a set of nonlinear constraints. In Sect. 4 we showed that the class of nonterminating linear lasso programs that have a geometric nontermination argument is quite large: it contains at least every deterministic linear loop program whose update matrix has only nonnegative real eigenvalues. We expect that this statement can be extended to encompass also negative and complex eigenvalues.