Unbounded-Time Safety Verification of Guarded LTI Models with Inputs by Abstract Acceleration

Reachability analysis of dynamical models is a relevant problem that has seen much progress over the last decades, yet with clear limitations pertaining to the nature of the dynamics that can be handled and to the soundness of the results. This article focuses on sound safety verification of unbounded-time (infinite-horizon) linear time-invariant (LTI) models with inputs using reachability analysis. We achieve this using counterexample-guided abstract acceleration: this approach over-approximates the reachability tube of the LTI model over an unbounded time horizon by means of abstraction, and may find concrete counterexamples for refinement based on the given safety specification. The technique is applied to a number of LTI models, and the results show robust performance when compared to state-of-the-art tools.


Introduction
Linear loops are a ubiquitous programming pattern [47]. Linear loops iterate over continuous variables (in the case of physical systems) or discrete variables (in the case of loops in digital programs), which are updated using a linear transformation. Linear loops may be guarded, i.e., halt if a given linear condition holds: at this point the system may either elicit a new behaviour, or simply terminate. Inputs from the environment can be modelled by means of non-deterministic choices within the loop. These features make linear loops expressive enough to capture the dynamics of many hybrid dynamical models [6,47]. The usage of such models in safety-critical embedded systems makes linear loops a fundamental target for formal methods.
Many high-level requirements for embedded control systems can be modelled as safety properties, i.e., as the problem of deciding reachability of certain bad states in which the model exhibits unsafe behaviour (think for example of speed limits), and/or of changing the behaviour of the model under certain conditions (e.g., once we have reached a certain state we begin a new process).
In particular, the special case where the guard G is trivially true (i.e., "while true") represents a time-unbounded loop with no guards, for which the discovery of a suitable invariant is paramount.

Model Semantics
The traces of the model starting from an initial set X_0 ⊆ R^p, with inputs restricted to the set U ⊆ R^q, are sequences x_0 x_1 x_2 …, where x_0 ∈ X_0 and, for all k ≥ 0, x_{k+1} = τ(x_k, u_k) with u_k ∈ U, each state satisfying the guard G along the trace. We extend the point-wise notation above to convex sets of states and inputs (X_k and U), and denote the set of states reached from the set X_k by τ in one step as

τ(X_k, U) = { Ax + Bu : x ∈ X_k ∩ G, u ∈ U }.

We furthermore denote the set of states reached from X_0 via τ in n steps (the n-reach set, constrained by G), for n ≥ 0, as X_n = τ^n(X_0, U). Since the sets X_0 and U are convex, the transformations A and B are linear, and vector sums preserve convexity, the sets X_n = τ^n(X_0, U) are also convex. We define the n-reach tube as

X̂_n = τ̂^n(X_0, U) = ⋃_{k ∈ [0, …, n]} τ^k(X_0, U).

In the case of the Jordan form, the eigenvectors corresponding to repeated eigenvalues are called generalised eigenvectors and have the property that (A − λ_s I)^j v_j = 0, where v_j is the j-th generalised eigenvector related to the eigenvalue λ_s. Finally, the eigenvalues of a real matrix may be complex numbers. This is inconvenient in the subsequent analysis, so rather than using complex arithmetic on these numbers, we choose a different representation over the pseudo-eigenspace. Pseudo-eigendecomposition relies on the observation that complex eigenvalues of a real matrix always come in conjugate pairs. Relaxing the restriction of non-zero values on the main diagonal to include the immediate off-diagonal terms, we leverage the following equivalence for a conjugate pair λ = a + bi, λ* = a − bi with eigenvector v:

A [re(v) im(v)] = [re(v) im(v)] [ a  b ; −b  a ],

where re(v) and im(v) are the real and imaginary parts of v, respectively. In the case of a non-diagonal Jordan form, the columns are rearranged first (including the upper-diagonal ones), and the conversion above is then performed. This representation is also called the real Jordan form.
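The loop semantics τ(x, u) = Ax + Bu can be made concrete by direct simulation. The sketch below uses a hypothetical 2-D system (not one of the paper's benchmarks) and propagates sampled points of X_0 and U; sampling under-approximates the true reach sets and is shown only to illustrate the definitions, whereas the method developed in this article computes sound over-approximations instead.

```python
import numpy as np

# Hypothetical 2-D guarded LTI loop x_{k+1} = A x_k + B u_k.
A = np.array([[0.9, 0.2], [-0.1, 0.8]])
B = np.array([[1.0], [0.5]])

X0 = [np.array([1.0, 0.0]), np.array([1.2, 0.1])]   # sampled initial states
U = [np.array([0.0]), np.array([0.05])]             # sampled inputs from U

def step(x, u):
    """One loop iteration tau(x, u) = A x + B u."""
    return A @ x + B @ u

def reach_samples(n):
    """Sampled states reachable in exactly n steps (an under-approximation
    of the n-reach set tau^n(X0, U) obtained by point-wise propagation)."""
    states = list(X0)
    for _ in range(n):
        states = [step(x, u) for x in states for u in U]
    return states

samples = reach_samples(3)
```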

Definition of Support Functions
A support function is a convex function defined over the vector space R^p, which gives, for a given set in R^p, the distance from the origin to a supporting hyperplane of that set, as illustrated in Fig. 1. Support functions can be used to describe a set by giving the distance of its convex hull from the origin along a number of directions. More specifically, a support function characterises the distance from the origin to the hyperplane that is orthogonal to the given direction and that touches the set's convex hull at its farthest point. For example, the support function of a sphere centred at the origin, given any unit vector v in R^3, evaluates to the radius of the sphere.
The intersection of multiple half-spaces, each obtained by sampling a support function in a specific direction, generates a polyhedron (see Fig. 2), as discussed further in the next section. Finitely sampled support functions (i.e., using a limited number of directions) are template polyhedra in which the directions are not fixed, which helps to avoid wrapping effects (wherein sampling in given directions creates an over-approximation of a set that is not aligned with said directions). The larger the number of distinct directions provided, the more precisely the set is represented. In more detail, given a direction v ∈ R^p, the support function of a non-empty set X ⊆ R^p in the direction of v is defined as

ρ_X(v) = sup { ⟨x, v⟩ : x ∈ X },

where ⟨x, v⟩ = Σ_i x_i v_i is the dot product of the two vectors. Support functions apply to any non-empty set X ⊆ R^p, but they are most useful when representing convex sets. We will restrict ourselves to the use of convex polyhedra X = { x : Cx ≤ d }, in which case evaluating the support function translates to solving the following linear program:

ρ_X(v) = max { v^T x : Cx ≤ d }.

The polyhedron resulting from a finite number of sampled directions is an over-approximation of the original set (note that the directions need not be symmetrical).
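For a polyhedron in inequality representation, sampling the support function is exactly the linear program above. A minimal sketch using scipy (the set and the directions are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def support(C, d, v):
    """Support function rho_X(v) = max v.x s.t. C x <= d.
    linprog minimises, so we negate the objective."""
    res = linprog(-v, A_ub=C, b_ub=d, bounds=[(None, None)] * len(v))
    assert res.status == 0, "set is empty or unbounded in direction v"
    return -res.fun

# Unit square [0,1]^2 written as C x <= d.
C = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float)
d = np.array([1, 0, 1, 0], float)

r = support(C, d, np.array([1.0, 1.0]))   # attained at the corner (1, 1)
```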

Properties of Support Functions
Several properties of support functions allow us to reduce the complexity of operations. The most significant ones are [34]:

ρ_{AX}(v) = ρ_X(A^T v),
ρ_{aX}(v) = a ρ_X(v) for a ≥ 0,
ρ_{X1 ⊕ X2}(v) = ρ_{X1}(v) + ρ_{X2}(v),
ρ_{conv(X1 ∪ X2)}(v) = max(ρ_{X1}(v), ρ_{X2}(v)),
ρ_X(v1 + v2) ≤ ρ_X(v1) + ρ_X(v2),

where v, v1, v2 ∈ R^p. As can be seen from their structure, some of these properties reduce the complexity to lower-order polynomial or even to constant time, by turning matrix-matrix multiplications (O(p^3)) into matrix-vector (O(p^2)), or into scalar (O(p)), multiplications.
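These properties are easy to check numerically on a polytope, where the support function is attained at a vertex; the example set and matrix below are arbitrary:

```python
import numpy as np

def rho(V, v):
    """Support function of the polytope conv(V) in direction v:
    for a polytope the supremum is attained at a vertex."""
    return max(float(v @ x) for x in V)

V = [np.array([0., 0.]), np.array([1., 0.]),
     np.array([0., 1.]), np.array([1., 1.])]   # unit square
A = np.array([[2., 1.], [0., 1.]])
v = np.array([0.3, -0.7])

# rho_{AX}(v) == rho_X(A^T v): a matrix-set map becomes a matrix-vector product.
lhs = rho([A @ x for x in V], v)
rhs = rho(V, A.T @ v)

# rho_{aX}(v) == a * rho_X(v) for a >= 0.
scaled = rho([3.0 * x for x in V], v)
```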

Support Functions in Complex Spaces
To the best of our knowledge, the literature does not report any use of support functions in complex spaces. Since we apply them to eigenspaces, which may feature complex conjugate eigenvalues, we extend the definition of support functions to encompass the corresponding operations on complex spaces, which we give explicitly below.

Theorem 1 A support function in a complex vector space is a transformation ρ_X : C^p → R such that

ρ_X(v) = sup { re(⟨x, v⟩) : x ∈ X },

where re(·) denotes the real part of a complex number. The dot product used here is the one commonly defined in a complex space,

⟨a, b⟩ = Σ_i a_i b_i^*,

where the element b_i^* is the complex conjugate of b_i.

Proof
Let f : C^p → R^{2p} be the bijection that stacks the real and imaginary parts of a complex vector, x' = f(x) = (re(x_1), im(x_1), …, re(x_p), im(x_p)), so that re(⟨x, v⟩) = f(x) · f(v). Using abstract interpretation [23] we define a Galois connection with α(x) = f(x) and γ(x') = f^{-1}(x'), which is clearly a one-to-one relation. We can therefore establish the equivalence ρ_X(v) = ρ_{X'}(v'), where X' = f(X) and v' = f(v).
As we shall see later, the imaginary part of the dot product is not relevant to the support function, and we therefore disregard it. Using properties of support functions, we now have

ρ_X(e^{iθ} v) = ρ_{X'}(R_θ v'),

which is consistent with the real case when θ = 0. The reason why e^{iθ} cannot be factored out as a constant is that it executes a rotation on the vector, and therefore follows the same rules as a matrix multiplication: on each (re, im) pair of the stacked real representation, multiplication by e^{iθ} acts as the rotation matrix

R_θ = [ cos θ  −sin θ ; sin θ  cos θ ].

Notice the resemblance of this matrix to a pseudo-eigenvalue representation (see Eq. (6)). Since the vectors we are interested in are conjugate pairs (because they have been created by a transformation into a matrix eigenspace), we can transform our problem into a pseudo-eigenvalue representation. Since this removes the imaginary component from the setup, we can evaluate the support function using standard methods and properties, by transforming the conjugate pairs into separate vectors representing the real and imaginary parts, together with rotation matrices as in the equation above.
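The rotation-matrix reading of e^{iθ} can be sanity-checked with the standard property ρ_{RX}(v) = ρ_X(R^T v); the sketch below, over an arbitrary diamond-shaped set, is only illustrative:

```python
import numpy as np

def rho(V, v):
    """Support function of conv(V) in direction v (attained at a vertex)."""
    return max(float(v @ x) for x in V)

theta = np.pi / 6
# Acting on a (re, im) pair, multiplication by e^{i*theta} is a rotation.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

V = [np.array([1., 0.]), np.array([0., 1.]),
     np.array([-1., 0.]), np.array([0., -1.])]   # a diamond
v = np.array([1., 0.])

# Support of the rotated set equals support of the set in the rotated direction.
lhs = rho([R @ x for x in V], v)
rhs = rho(V, R.T @ v)
```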
Recently, Adimoolam and Dang [1] have presented a calculus for complex zonotopes that exploits the same idea, namely using complex eigenvalues to represent ellipsoidal and linear boundaries for reach sets. Their projections into real space correspond to our semi-spherical approximations for variable inputs. Apart from the basic difference in domain (zonotopes vs. polyhedra), which in itself changes some aspects of the problem, the main difference is that the authors perform their analysis over the complex space, whereas we ultimately apply the method in the real space by using pseudo-eigenspaces.

Convex Polyhedra
A polyhedron is a subset of R^p with planar faces. Each face is supported by a hyperplane that creates a half-space, and the intersections of these hyperplanes are the edges (and vertices) of the polyhedron. A polyhedron is said to be convex if the line segment joining any two of its points is contained within it. Convex polyhedra are better suited than general polyhedra to define an abstract domain, mainly because they have a simpler representation and because operations over convex polyhedra are in general easier than over general polyhedra. There are a number of properties of convex polyhedra that make them ideal for abstract interpretation over continuous spaces, including their ability to reduce an uncountable set of real points to a finite set of faces, edges and vertices. Convex polyhedra retain their convexity across linear transformations, and they support a range of operations efficiently because they admit a dual representation [32], as detailed next. The algorithm to switch between these two representations is given in Sect. 2.6.5.

Vertex Representation
Since every edge of the polyhedron corresponds to a line segment between two vertices, and every face corresponds to the area enclosed by a set of co-planar edges, a full description of the polyhedron is obtained simply by listing its vertices. Since linear operations retain the topological properties of the polyhedron, performing these operations on the vertices is sufficient to obtain a complete description of the transformed polyhedron (defined by the transformed vertices). Formally, a polyhedron can be described by a finite set V ⊂ R^p such that each v ∈ V is a vertex of the polyhedron.

Inequality Representation (a.k.a. Face Representation)
The dual of the vertex representation is the inequality representation, where each inequality represents a face of the polyhedron. Each face corresponds to a bounding hyperplane of the polyhedron (with the edges being the intersection of two hyperplanes and the vertices being the intersection of p or more hyperplanes), and is described mathematically as a function of the vector that is normal to the hyperplane. This representation can be made minimal by eliminating redundant inequalities that do not correspond to any face of the polyhedron. If we examine this description closely, we can see that it corresponds to the support function along the vector normal to the hyperplane. Given this description we formalise the following: a convex polyhedron is a topological region in R^p described by the set

X = { x ∈ R^p : Cx ≤ d },

where the rows C_{i,*} for i ∈ [1, …, m] correspond to the transposed vectors normal to the faces of the polyhedron, and the entries d_i for i ∈ [1, …, m] to the value of the support function of X in the corresponding direction. For simplicity of presentation, we will extend the use of the support function operator to matrices of directions, writing ρ_X(C) for the vector whose i-th entry is ρ_X(C_{i,*}^T).

Operations on Convex Polyhedra
There are a number of operations that we need to be able to perform on convex polyhedra.

Translations
Given a vertex representation V and a translation vector t, the transformed polyhedron is

V' = { v + t : v ∈ V }.

Given an inequality representation X = { x : Cx ≤ d } and a translation vector t, the transformed polyhedron is

X' = { x : Cx ≤ d + Ct }.
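The two forms of translation can be checked against each other; the triangle below is an arbitrary example:

```python
import numpy as np

# Translating a polytope by t: vertices shift to v + t, while the
# inequality representation Cx <= d becomes Cx <= d + Ct.
V = [np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])]
C = np.array([[-1., 0.], [0., -1.], [1., 1.]])
d = np.array([0., 0., 1.])          # the same triangle as Cx <= d
t = np.array([2., -1.])

V_t = [v + t for v in V]            # translated vertex representation
d_t = d + C @ t                     # translated inequality representation

# Every translated vertex satisfies the translated inequalities.
ok = all(np.all(C @ v <= d_t + 1e-12) for v in V_t)
```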

Linear Transformations
Given a vertex representation V and a linear transformation L, the transformed polyhedron is

V' = { Lv : v ∈ V }.

Given an inequality representation X = { x : Cx ≤ d } and a linear transformation L, the transformed polyhedron corresponds to

L X = { x : C L^+ x ≤ d },

where L^+ represents the pseudo-inverse of L [55]. In the case when the inverse L^{-1} exists, then

L X = { x : C L^{-1} x ≤ d }.

From this we can conclude that linear transformations are more efficient when using the vertex representation, except when the inverse of the transformation exists and is known a priori. This work makes use of this assumption to avoid alternating between representations.
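A small consistency check of the two forms, using an arbitrary invertible L on the box [-1,1]^2 (np.linalg.pinv would replace the inverse in the singular case):

```python
import numpy as np

# Image of {x : Cx <= d} under invertible L is {y : C L^{-1} y <= d};
# transforming vertices is simply y = L v.
C = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
d = np.array([1., 1., 1., 1.])          # the box [-1, 1]^2
V = [np.array([sx, sy]) for sx in (-1., 1.) for sy in (-1., 1.)]
L = np.array([[2., 1.], [0., 1.]])

Linv = np.linalg.inv(L)                  # use np.linalg.pinv(L) if singular
C_img = C @ Linv                         # inequality rep of the image
V_img = [L @ v for v in V]               # vertex rep of the image

ok = all(np.all(C_img @ y <= d + 1e-12) for y in V_img)
```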

Set Sums
The addition of two polyhedra is defined such that the resulting set contains the sums of all pairs of points from the original polyhedra. This operation is commonly known as the Minkowski sum, namely

X1 ⊕ X2 = { x1 + x2 : x1 ∈ X1, x2 ∈ X2 }.

Given two vertex representations V1 and V2, the resulting polyhedron is

V = conv({ v1 + v2 : v1 ∈ V1, v2 ∈ V2 }),

where conv(·) is the convex hull of the set of vertices contained in the Minkowski sum. Let X1 = { x : C1 x ≤ d1 } and X2 = { x : C2 x ≤ d2 } be two sets; then

X1 ⊕ X2 ⊆ { x : C1 x ≤ d1 + ρ_{X2}(C1) ∧ C2 x ≤ d2 + ρ_{X1}(C2) }.

Because these sets correspond to systems of inequalities, they can be reduced by removing redundant constraints. Note that if C1 = C2 = C, then

X1 ⊕ X2 ⊆ { x : Cx ≤ d1 + d2 }.
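The vertex-representation form of the Minkowski sum is direct to implement; the triangle and segment below are arbitrary examples (the convex hull step relies on scipy):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Minkowski sum via vertex representations: take all pairwise sums,
# then keep only the vertices of their convex hull.
V1 = np.array([[0., 0.], [1., 0.], [0., 1.]])    # a triangle
V2 = np.array([[0., 0.], [0.5, 0.5]])            # a segment

sums = np.array([v1 + v2 for v1 in V1 for v2 in V2])
hull = ConvexHull(sums)
V_sum = sums[hull.vertices]                      # vertex rep of the sum
```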

Set Hadamard Products
Definition 1 Given two vertex representations V and V', we define the set Hadamard product over such representations as the vertex representation

V • V' = { v • v' : v ∈ V, v' ∈ V' },

where • represents the Hadamard (coefficient-wise) product of two vectors.

Lemma 1 The set V'' = V • V' introduced in the previous definition is the vertex representation of a convex set that contains all possible Hadamard products between points of the sets represented by V and V'.
Proof Given a convex set X with a vertex representation V, by definition we have x ∈ X ⇔ x = Σ_i α_i v_i with α_i ≥ 0 and Σ_i α_i = 1, which extends to pairs of points [13] as x = Σ_i α_i v_i and x' = Σ_j β_j v'_j. Applying the Hadamard product, we obtain

x • x' = (Σ_i α_i v_i) • (Σ_j β_j v'_j) = Σ_i Σ_j α_i β_j (v_i • v'_j).

Since α_i β_j ≥ 0 and Σ_i Σ_j α_i β_j = 1, the product x • x' is a convex combination of the vertex products, and therefore lies in their convex hull. Note that in the case of the inequality representation, there is no direct result for this product. We therefore enumerate the vertices of one of the polyhedra, and use linear solving algorithms to find an over-approximation:

ρ_{X • X'}(t) = max { ρ_{X'}(t • v) : v ∈ V }, t ∈ T,

where t is a template direction for a face in the over-approximation, T is the set of directions selected for the over-approximation, and V is the set of vertices of X.
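Definition 1 translates directly into code; the two vertex sets below are arbitrary illustrations:

```python
import numpy as np

# Set Hadamard product on vertex representations: coefficient-wise
# products of all vertex pairs give the vertex representation of a
# convex set containing all products of points (Lemma 1).
V1 = [np.array([1., 2.]), np.array([3., 1.])]
V2 = [np.array([0.5, 2.]), np.array([1., 0.])]

V_prod = [a * b for a in V1 for b in V2]   # Hadamard product of vertex pairs
```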

Vertex Enumeration
The vertex enumeration algorithm obtains a list of all vertices of a polyhedron, given a face representation of its bounding hyperplanes. Given the duality of the problem, it is also possible to find the bounding hyperplanes given a vertex description, if the chosen algorithm exploits this duality. In this case the description of V is given in the form of a matrix whose rows are the vertices; similarly, C can be described as a set containing each of its rows. There are two algorithms that efficiently solve the vertex enumeration problem: lrs [4] is a reverse search algorithm, while cdd [32] follows the double description method. In this work we use the cdd algorithm for convenience of implementation (the original cdd was developed for floats, whereas lrs uses rationals); the techniques presented here can be applied to either. Let

X = { x : Cx ≤ 0 }

be the polyhedral cone represented by C. The pair (C, V) is said to be a double description pair if

x ∈ X ⇔ x = Σ_i λ_i v_i with λ_i ≥ 0 and v_i ∈ V,

where V is called the generator of X. Each element in V lies in the cone of X, and its minimal form (smallest m) has a one-to-one correspondence with the extreme rays of X if the cone is pointed (i.e., it has a vertex at the origin). This last criterion can be ensured by translating a polyhedral description so that it includes the origin, and then translating the vertices back once they have been discovered (see Sect. 2.6). The vertex enumeration algorithm starts by finding a base C_K which contains a number of vertices of the polyhedron. This can be done by pivoting over a number of different rows in C and selecting the feasible visited points, which are known to be vertices of the polyhedron (pivoting p times will ensure at least one vertex is visited if the polyhedron is non-empty). The base C_K is represented by the matrix containing the rows used for the pivots, and is then iteratively expanded to C_{K+i} by exploring the i-th row of C until C_K = C.
The corresponding pairs (C_{K+i}, V_{K+i}) are constructed using the information from (C_K, V_K) as follows. Let H_i^+ = { x : C_{i,*} x > 0 }, H_i^- = { x : C_{i,*} x < 0 }, and H_i^0 = { x : C_{i,*} x = 0 } be the spaces outside, inside and on the i-th hyperplane, and let V_K^+, V_K^- and V_K^0 be the existing generators lying in each of these spaces. Then [32], the new generator set is

V_{K+i} = V_K^- ∪ V_K^0 ∪ { (C_{i,*} v^+) v^- − (C_{i,*} v^-) v^+ : v^+ ∈ V_K^+, v^- ∈ V_K^- adjacent }.
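A naive (exponential-time) vertex enumerator makes the problem concrete: intersect every p-subset of bounding hyperplanes and keep the feasible intersection points. This is only an illustration of what lrs/cdd compute far more efficiently:

```python
import numpy as np
from itertools import combinations

def enumerate_vertices(C, d, tol=1e-9):
    """Naive vertex enumeration for {x : Cx <= d}: solve every p x p
    subsystem C_K x = d_K and keep the feasible solutions. Exponential
    in general -- lrs and cdd are the practical algorithms."""
    p = C.shape[1]
    verts = []
    for rows in combinations(range(len(C)), p):
        Ck, dk = C[list(rows)], d[list(rows)]
        try:
            x = np.linalg.solve(Ck, dk)
        except np.linalg.LinAlgError:
            continue                      # parallel hyperplanes: no vertex
        if np.all(C @ x <= d + tol):      # feasibility check
            verts.append(x)
    return verts

# Unit square as C x <= d.
C = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
d = np.array([1., 0., 1., 0.])
vertices = enumerate_vertices(C, d)
```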

Abstract Acceleration-Overview of the Algorithm
Abstract acceleration is a method that seeks to precisely describe the dynamics of a transition system over a number of steps using a concise description between the first and final steps. More precisely, it looks for a direct formula to express the post-image of an unfolded loop from its initial states. Formally, given the dynamics in Eq. (1), an acceleration formula aims to compute the reach tube based on (3) using a function f such that f (·) = τ n (·). In the case of models without inputs, this equation can be derived from the expression x n = A n x 0 .

Overview of the Algorithm
The basic steps required to compute a reach tube using abstract acceleration are given in Fig. 3.
1. The process starts by performing an eigendecomposition of the dynamics (based on matrix A) in order to transform the problem into a simpler one. Since we use unsound arithmetic at this stage, the resulting quantities are marked with a tilde (as in S̃, J̃).
2. The second step involves upper-bounding the rounding errors in order to obtain sound results: bounds on eigenvalues, for example, are well known from the literature and can be obtained as |λ − λ̃| < ‖S̃ J̃ S̃^{-1} − A‖_2. In general, a variety of off-the-shelf tools may be used, but since larger problems require numerical algorithms for scalability, all subsequent steps are performed using interval arithmetic in order to maintain soundness: we identify the corresponding interval-based quantities with bold symbols (e.g., S, J), as well as subsequent matrix operations (e.g., computing the inverse of S). Thus we obtain ‖S J S^{-1} − A‖_2 by extending the original unsound matrices by one least-significant bit element-wise. We also note that this equation is symmetric, which is why we use the estimated eigenvectors to calculate the error on the original eigenvalues (see [16] for further details). While bounds on the eigenvectors can be calculated from the eigenvalues, we choose a more complex yet faster over-approximation, which is described in [16].
3. The inverse of the generalised eigenvectors (S^{-1}) is calculated soundly (using interval arithmetic).
4. The problem is transformed into canonical form by multiplying both sides of the equation by S^{-1} (we use blackboard symbols to indicate interval vectors and matrices, which are employed to ensure sound operations), obtaining the dynamics expressed over the eigenspace.
5. We calculate the number of iterations n based on the guard set, as explained in Sect. 7. If there are no guards, we set n = ∞. This number need not be exact: if we over-approximate the number of iterations, the resulting reach tube will further over-approximate the desired one.
6. We over-approximate the dynamics subject to general inputs (for parametric inputs, or in the absence of inputs, this step is skipped), using the techniques described in Sect. 6.2.
7. We calculate the accelerated dynamics using the techniques described in Sect. 5.1.
8. We transform the input and initial sets into vertex representation, to be used as the source for the reach-tube calculation.
9. We employ a sound simplex algorithm [16] to evaluate the convex-set Hadamard product of the abstract dynamics and the initial set. The two most important elements of the sound simplex are that it uses interval arithmetic to pivot, and that at every stage it checks for intersections between vertices in order to avoid pivoting on unsound solutions. The latter step is best explained by recalling that the simplex algorithm operates by visiting adjacent vertices. The interval arithmetic ensures that the solution at the last visited vertex is sound, but if there is an intersection, the new pivot may start from the intersecting vertex instead (which is not sound); thus, by checking for the intersection and extending the interval to encompass both vertices, we retain soundness (see [16] for details).
10. Since we have operated in the eigenspace so far, we transform the reach tube back into the state space via multiplication by S.
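Step 2 above can be sketched in plain floating point (without the interval arithmetic and outward rounding the sound implementation requires); the matrix is an arbitrary 2-D example:

```python
import numpy as np

# Sketch of step 2: after an unsound eigendecomposition A ~ S J S^{-1},
# bound the eigenvalue error by the spectral norm of the residual,
# |lambda - lambda~| <= || S~ J~ S~^{-1} - A ||_2.
A = np.array([[0.9, 0.2], [-0.1, 0.8]])
evals, S = np.linalg.eig(A)
J = np.diag(evals)

residual = S @ J @ np.linalg.inv(S) - A
delta_max = np.linalg.norm(residual, 2)   # bound on the rounding error
```

In the sound pipeline, S, J and the norm itself would be interval quantities with outward rounding, so that delta_max is a guaranteed upper bound rather than a floating-point estimate.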

Abstract Matrices in Abstract Acceleration
We introduce the concept of an abstract matrix.
Definition 2 An abstract matrix A^n ⊆ R^{p×p} is an over-approximation of the union of the powers of the matrix A, such that A^n ⊇ { A^k : k ∈ [0, …, n] }. Its application to the initial set X_0 results in a set X̂'_n = A^n X_0 such that X̂'_n ⊇ X̂_n, i.e., an over-approximation of the reach tube described in Eq. (4).
Next we explain how to compute such abstract matrices. For simplicity, we first describe this computation for matrices A with real eigenvalues, whereas the extension to the complex case will be addressed in Sect. 4.1. Similarly to [44], we first have to compute the Jordan normal form of A. Let A = S J S^{-1}, where J is the Jordan normal form of A and S is made up of the corresponding generalised eigenvectors. We can then easily compute A^n = S J^n S^{-1}, where, given a set of r eigenvalues λ_s with geometric multiplicities p_s and s ∈ [1, …, r], J^n is block-diagonal with one block J_s^n per eigenvalue. The abstract matrix A^n is computed as an abstraction over a set of vectors m_k ∈ R^p, k ∈ [1, …, n], of the distinct entries of J^k, as explained below. Let I_s = [1 0 ··· 0] ∈ R^{p_s}. The vector m_k is obtained by the transformation ϕ^{-1} (which is always invertible) as m_k = ϕ^{-1}(J^k). If J is diagonal [44], then m_k results in the vector made up of powers of the eigenvalues, m_k = (λ_1^k, …, λ_r^k). The diagonal entries in the abstract matrix are thus bounded by the intervals [min_k λ_s^k, max_k λ_s^k] for k ∈ [0, …, n]. We observe that the spectrum of the abstract matrix σ(A^n), which can be derived from its entries in Eq. (12), over-approximates ⋃_{k∈[1,…,n]} σ(A^k).
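For diagonal J, the interval bounds on the entries of the abstract matrix are simple to compute; a minimal sketch with arbitrary eigenvalues:

```python
import numpy as np

def power_interval(lam, n):
    """Interval enclosing {lam^k : k in 0..n} for a real eigenvalue lam,
    i.e., one diagonal entry of the abstract matrix when J is diagonal."""
    powers = [lam ** k for k in range(n + 1)]
    return min(powers), max(powers)

lo_c, hi_c = power_interval(0.5, 10)   # convergent: interval [0.5^10, 1]
lo_d, hi_d = power_interval(1.2, 10)   # divergent: interval [1, 1.2^10]
```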
In the case of the s-th Jordan block J_s with geometric multiplicity p_s > 1, observe that the first row of J_s^n contains all (possibly) distinct entries of J_s^n. Hence, the vector section m_s is the concatenation of the (transposed) first rows of the blocks J_s^n. Since ϕ transforms the vector m into the shape (10) of J^n, it is called a matrix shape [44]. We then define the abstract matrix as the set of matrices ϕ(m) whose shape vectors satisfy a polyhedral constraint Φm ≤ f, where the constraint Φm ≤ f is synthesised from intervals associated with the individual eigenvalues and with their combinations. More precisely, we compute polyhedral relations: for any pair of eigenvalues (or distinct entries) within J, we find an over-approximation of the convex hull containing the points traced by the corresponding pair of entries of m_k, for k ∈ [1, …, n]. The reason for evaluating the convex hull over pairs of points is twofold. In the first instance, we note that the set { m_k : k ∈ [1, …, n] } is, in general, not convex. This makes it hard to find its support in arbitrary directions. Ideal directions would be the normals to the gradients of the function, namely ∇m_k, which would provide the tightest over-approximation at iteration k. However, as will be seen below, when combining negative or complex conjugate eigenvalues, the corresponding tangent hyperplane may intersect the set, and thus it cannot be used to define its convex hull. The second reason for choosing pairwise directions is practical: we need an even distribution of the directions in R^p, and it is easier to achieve this in a pairwise manner.

Abstract Matrices in Complex Spaces
To deal with complex numbers in eigenvalues and eigenvectors, [44] employs the real Jordan form for conjugate eigenvalues λ = r e^{iθ} and λ* = r e^{−iθ} (θ ∈ [0, π]), in which the pair is represented by the 2×2 real block

r [ cos θ  sin θ ; −sin θ  cos θ ].

Although this equivalence will be of use once we evaluate the progression of the model, calculating powers under this notation is often more difficult than handling the original matrices with complex values directly.
In the case of real eigenvalues we have abstracted the entries in the power matrix J_s^n by ranges of eigenvalue powers [min{λ_s^0, …, λ_s^n}, max{λ_s^0, …, λ_s^n}], forming a hypercube. In the complex case, where the rotations describe spherical behaviour, we can do something similar by rewriting the eigenvalues in polar form λ_s = r_s e^{iθ_s} and enclosing the radius in the interval [0, r_s], where r_s = max{r_s^k : k ∈ [0, …, n]} (in the worst-case scenario this is over-approximated by a hyper-box with λ_s^k ∈ [−r_s, r_s] + [−r_s, r_s]i, but we will introduce tighter bounds in the course of this work).

Using Support Functions for Abstract Acceleration
As an improvement over [44], the rows in Φ and f (see (13)) can be obtained by a refined sampling of the support functions of these sets. The choice of directions for these support functions results in an improvement over the logahedral abstractions used in previous work [37,38,44] (see Figs. 4, 5, 6 and 7). This approach works thanks to the convexity properties of the exponential progression. We consider five cases.

Positive Real Eigenvalues

The exponential curve is cut along the diagonal between the eigenvalues with maximum and minimum range to create a supporting hyperplane. A third point taken from the curve is used to test the direction of the corresponding template vector. An arbitrary number of additional supporting hyperplanes are created by selecting pairs of adjacent points on the curve and creating the corresponding support functions, as illustrated in Fig. 4.

Complex Conjugate Eigenvalue Pairs
In the case of complex conjugate pairs, the eigenvalue map corresponds to a logarithmic spiral (Fig. 5). In this case, we must first extract the number of iterations (denoted by k) required for a full cycle. For convergent eigenvalues (|λ| < 1), only the first k iterations have an effect on the support functions, while in the divergent case only the last k iterations are considered (since symmetrically, it corresponds to the reverse spiral case). Support functions are found for adjacent pairs, checking the location of the first point for convergent eigenvalues, and that of the last point for divergent eigenvalues. If a point falls outside of the supporting half-space, we look for an interpolant point that closes the spiral and that is tangent to the origin. This last check is performed as a binary search over the remaining points in the circle (noting that the supporting planes would exclude the considered point) to achieve maximum tightness (Fig. 5).
Care is taken to ensure that subsequent iterations fall within the envelope found on the first/last rotation. This is ensured by extending the support functions outwards by a factor f = max{1, |λ|^n̂ cos(θ)^{-1}}, where θ is the angle of the eigenvalue pair and n̂ = n for the convergent case or n̂ = 1/n for the divergent case. When this value is too large, we use an interpolation to find better supports. This is achieved by finding a pair of points such that the first is obtained from λ^k and the second from (λ^{1/m})^{mk+1}. The relaxation factor then becomes cos(θ/m)^{-1}.

Equal Eigenvalues
When two eigenvalues are equal, the resulting support functions are those orthogonal to the plane x = y, intersecting the square created by the maximum and minimum values.

Jordan Blocks of Non-trivial Size (> 1)

In the case of eigenvalues with non-trivial geometric multiplicities, we find three shapes. When both elements in the pair are convergent (convex sets can be "sharp"), it is important to find the apex of the upper diagonals in order to minimise the over-approximation (Fig. 6). When both elements are divergent, the shape is similar to that of a positive-valued pair, since there is no extremum. Finally, when comparing different Jordan blocks, one convergent and one divergent, we evaluate the enclosing hyperbox, thus avoiding the change in convexity at the apex.

Negative Eigenvalues and Combinations of Real Eigenvalues with Conjugate Pairs
When comparing a positive real eigenvalue to a complex conjugate pair or to a negative eigenvalue, we must account for the changes of sign in the progression of the latter. We compute envelopes of the progression of the corresponding dynamics, which are obtained via convex over-approximations (cf. Fig. 7). In the case of complex eigenvalues, we use the absolute value in order to determine the envelope. If both eigenvalues have rotating dynamics, we would require full symmetry across the four quadrants, and thus we obtain a hyper-box with vertices at the farthest points from the origin.
An additional drawback of [44] is that calculating the exact Jordan form of a matrix is computationally expensive in high dimensions. We will instead leverage numerical algorithms that provide an approximation of the Jordan normal form, and soundly account for the associated numerical errors. We use properties of eigenvalues to relax f by finding the maximum error in the calculations, which can be determined by computing the norm δ_max = ‖Ŝ Ĵ Ŝ^{-1} − A‖, where Ĵ and Ŝ contain the eigenvalues and eigenvectors of A calculated numerically [16]. Recall that bold notation represents interval matrices, and that all operations are performed using interval arithmetic with outward rounding in order to ensure soundness. The constraints in Φm ≤ f are then computed by considering the ranges of eigenvalues |λ_s ± δ_max|^k, which are represented in Fig. 4 with blue circles. The outward relaxation of the support functions (f) follows a principle similar to that in [33]: it reduces the tightness of the over-approximation while ensuring the soundness of the obtained abstract matrix A^n. Graphically, this is equivalent to moving the faces of a polyhedron outwards, which in practice has a minimal impact due to the small magnitude of δ_max. It is also worth noting that the transformation matrices into and from the eigenspace will introduce further over-approximations due to the intervals, and will exacerbate the over-approximation depending on the condition number associated with the eigenvalues.
One can still use exact arithmetic with a noticeable improvement over previous work; however, for higher-dimensional models the option of using floating-point arithmetic (taking errors into account and meticulously setting rounding modes) provides a 100-fold improvement, which can render verification practically feasible. For a full description of the numerical techniques see [16].

Abstract Matrices and Support Functions
Since we are describing operations using abstract matrices and support functions, we briefly review the nature of these operations and the properties that support functions retain within this domain. Let X ⊆ R^p be a set and A^n ⊆ R^{p×p} an abstract matrix over the same space.

From Eq. (13) we have the representation A^n = { ϕ(m) : Φm ≤ f }; in order to simplify the nomenclature, we write A^n X for the set { ϕ(m) x : Φm ≤ f, x ∈ X }.

General Abstract Acceleration with Inputs

Acceleration of Parametric Inputs
Let us now consider the following over-approximation of τ on sets,

τ#(X, U) = AX ⊕ BU ⊇ τ(X, U),

and add a restriction to constant (also called parametric) inputs, namely u_k = u_0 for all k > 0, with u_0 ∈ U. Unfolding (3) (ignoring the presence of the guard set G for the time being), we obtain

X_n = A^n X_0 ⊕ (Σ_{k=0}^{n-1} A^k) B U.

We further simplify the sum Σ_{k=0}^{n-1} A^k B U by exploiting the following result from linear algebra.
Σ_{k=0}^{n-1} A^k = (I − A)^{-1}(I − A^n), provided that 1 ∉ σ(A). If furthermore lim_{n→∞} A^n = 0, then Σ_{k=0}^{∞} A^k = (I − A)^{-1}. The inverse (I − A)^{-1} does not exist for eigenvalues equal to 1, i.e., we need 1 ∉ σ(A), where σ(A) is the spectrum (the set of all eigenvalues) of matrix A. In order to address this problem, we introduce the eigendecomposition A = S J S^{-1} (and trivially I = S I S^{-1}), and by the distributive and transitive properties we obtain

Σ_{k=0}^{n-1} A^k = S (Σ_{k=0}^{n-1} J^k) S^{-1}.

Although (I − J) is still not invertible, this representation allows us to accelerate the eigenvalues individually, trivially noticing that Σ_{k=0}^{n-1} 1^k = n for unitary eigenvalues (thus eliminating the need to calculate said inverse for these eigenvalues). Using the properties above, and translating the problem into the generalised eigenspace to account for unit eigenvalues, we obtain a block-wise representation of the accelerated sum, in which gm(·) denotes the geometric multiplicity of the given eigenvalue and k = j − i indexes the entries of each Jordan block. With these notions in hand, we next define the abstract acceleration of parametric inputs.
Proof From Eq. (4) we have the expression for X̂_n; using Eq. (17), we expand it and finally obtain X̂_n ⊆ τ♯n(X_0, U). The quantities A^n and B^n are calculated using the techniques described in Sect. 5.1, where special consideration is taken to evaluate pairs comprising equal eigenvalues. Figure 8 gives an example of such a pair. Since both functions are monotonic, the set is convex; the techniques applied to positive real eigenvalues (see Sect. 5.1) therefore still apply.

Acceleration of Time-Varying Inputs
In order to generalise the previous results to time-varying inputs, we over-approximate the term BU over the eigenspace by a semi-spherical enclosure, namely a set in which complex-conjugate eigenvalues are enclosed by a spherical support with centre u_c and radius U_b, whereas real eigenvalues are enclosed by hyper-rectangles (dashed symbols represent elements in the eigenspace). To this end, we first rewrite U_J, where u_c is the centre of the smallest hyperbox (interval hull) containing U_J and U_d = {u : u + u_c ∈ U_J}. We then over-approximate U_d by U_b, using the maximum radius in the directions of the complex eigenvalues (cf. illustration in Fig. 9). Let Λ be the set of eigenvalues of A, where im(·) denotes the imaginary part of a complex number and conjugate pairs are represented by only one member of the pair. Let us define the function red(·), which reduces the dimension of a vector by removing the elements for which λ_i ∉ Λ (i.e., the null elements). Extending this to matrices, r denotes the number of inequalities describing a set in R^p. Finally, since the description of U_b is no longer polyhedral in R^p, we also create an over-approximation J_b of J in the directions of the complex eigenvectors, in a way similar to how we generated U_b from U_d. More precisely, gm(·) is the geometric multiplicity of the specific Jordan block.

Definition 3
Given a matrix A = S J S^{−1} and a vector x, we define the following operations. Finally, we refer to the accelerated sets, and returning to our original equation for the n-reach set, we shift our attention from reach sets to reach tubes: we can now over-approximate the reach tube by abstract acceleration of the summands in (24), as follows. The resulting set denotes an over-approximation of the n-reach tube, namely X̂_n ⊆ τ♯n(X_0, U).

Combining Abstract Matrices
Calculating the reach set from the set of initial states and the one originating from the input set separately, and then adding the two together, can result in coarse over-approximations. To minimise this effect, we apply abstract acceleration to the combined input-and-state space. One important property of the abstract matrices A^n, B^n and B_b^n is that they are related. In the case of parametric inputs, this correlation is linear and is covered by the acceleration defined in Lemma 2. In the case of B_b^n the relationship is not linear (see Eq. 21). However, we can still find a linear over-approximation of the relation between B_b^n and A^n based on the time steps k.
Given two sets X ⊆ R^p and U ⊆ R^q and a transition equation, accelerating X_{k+1} we obtain the expression for the case of parametric inputs. More generally, the diagonal elements of D^n correspond to the diagonal elements of A^n and ∑_{k=0}^{n−1} A^k B, which means we can construct the combined abstraction, where A^n and B^n are the abstract matrices in Eqs. (13) and (19). We can then apply this abstraction to (21) and obtain (22). This model provides a tighter over-approximation than (25), since the accelerated dynamics of the inputs are now coupled with the acceleration of the dynamical part of the model.

Example 1
In order to illustrate this, let us consider the one-dimensional model x_{k+1} = 0.5 x_k + 1 with initial state x_0 = 1. If we calculate A^n and B^n separately, we get:

Abstract Acceleration with Guards: Estimation of the Number of Iterations
In the presence of spatial guards G, we are interested in estimating the number of iterations used to calculate the abstract matrices. Since we are dealing with reach sets, we differentiate between sets that are entirely inside the guard, sets that are crossing it, and sets that are entirely outside. The latter reach sets should never be propagated, whereas reach sets crossing guards should be made as tight as possible. Given a convex polyhedral guard expressed as the assertion G = {x : Gx ≤ h}, we define G_{i,*} as the ith row of G and h_i as the corresponding element of h. We denote the normal vector to the ith face of the guard as g_i = G_{i,*}. The distance of the hyperplane defined by the ith guard face to the origin is thus γ_i = h_i / |g_i|. Given a convex set X, we can now describe its position with respect to each face of the guard through the use of its support function alongside the normal vector to the hyperplane (for clarity, we assume that the origin is inside the set X). Applying this to Eq. (24), from the resulting inequalities we can determine the number of iterations n̲_i for which the reach tube remains inside the corresponding hyperplane, and the iteration n̄_i from which the corresponding reach set goes beyond the guard.
Therefore, in order for a reach set to be inside the guard it must be inside all of its faces, and we can ensure it is fully outside of the guard set when it is fully beyond any one of them. Thus, we have n̲ = min_i {n̲_i} and n̄ = min_i {n̄_i}.
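The three positions of a reach set relative to one guard face can be decided purely with support functions. A minimal sketch (our own; the set is given by its vertices for simplicity) classifying a convex set against the face {x : ⟨g, x⟩ ≤ γ}:

```python
import numpy as np

def support(vertices, d):
    """rho_X(d) = max over the vertices of <d, x>."""
    return max(float(np.dot(d, v)) for v in vertices)

def classify(vertices, g, gamma):
    """Position of a convex set w.r.t. the guard face {x : <g, x> <= gamma}."""
    if support(vertices, g) <= gamma:    # the whole set satisfies the face
        return "inside"
    if -support(vertices, -g) > gamma:   # even the minimum violates it
        return "outside"
    return "crossing"

square = [np.array(v, float) for v in [(0, 0), (1, 0), (0, 1), (1, 1)]]
g = np.array([1.0, 0.0])
assert classify(square, g, 2.0) == "inside"
assert classify(square, g, 0.5) == "crossing"
assert classify(square, g, -1.0) == "outside"
```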
We now discuss why these two cases are important. Looking at the transition in Eq. (1), we can easily derive that if Gx_k ≰ h (i.e., the point lies outside at least one of the faces of the guard set), the post-image of all subsequent iterations of that point must not be included. As such, any over-approximation of the reach set will only add imprecision. Therefore, we use the bounds n̲ and n̄ to create a tighter over-approximation. This double step prevents the set {x : x ∈ X̂_n̄, x ∉ X̂_n̲} from being included in further computations, thus improving the precision of the over-approximation.
Computing the maximum n_i such that Eq. (28) is satisfied is not trivial, because the unknown n_i occurs in the exponent of the equation. However, since an intersection with the guard set will always return a sound over-approximation, we do not need a precise value: we can over-approximate it by decomposing g_i into the generalised eigenspace of A. More precisely, let g_i = ∑_j k_{ij} v_j + res(g_i), where the v_j are row vectors of S^{−1} or −S^{−1} chosen such that k_{ij} ≥ 0, and res(g_i) is the component of the vector g_i that lies outside the range of S, namely the subspace spanned by its columns. Notice that since by definition S has an inverse, it is full rank, and therefore res(g_i) = 0 and subsequently not relevant. It is also important to note that S is the matrix of the generalised eigenvectors of A, and that we are therefore expressing the guard in the generalised eigenspace of A. Thus, we obtain:

Overestimating the Number of Iterations of a Model Without Inputs
Since rotating dynamics and Jordan shapes will have a complex effect on the behaviour of the model, we seek to transform the Jordan form into a real positive matrix by using the absolute value of the eigenvalues. In such a case, the support function in each direction is monotonically increasing (or decreasing), and it is therefore very easy to find a bound for its progression. We note that the envelope (described by the absolute value) of rotating dynamics will always contain the true dynamics and is therefore a sound over-approximation. We will initially assume that γ i is positive and then extend to the general case.
Let ρ_{X_0}(A^n g_i) = ρ_{X_0}(J^n g′_i), so that g′_i = S^{−1} g_i, and let Λ_σ be the set of eigenvalues with distinct values (excluding conjugate pairs and geometric multiplicities).
Above, red(·) is a function that reduces the dimension p of a vector to p_b by removing the elements for which λ_i ∉ Λ_σ. This reduction is not strictly necessary, but it enables a faster implementation. Thus, given J = diag(J_s, s ∈ [1, …, p_b]), we obtain a bound in terms of ‖J_s‖_2, the maximum singular value [48] of each Jordan block J_s. Finally, let v_σ = f_σ(v), where x_c is the Cartesian centre of X_0 and X_cσ is an over-approximation of X_0 centred at x_c. Using properties of eigenvalues and of singular values, we obtain a bound in which the k_{ij} are the coefficients in Eq. (30). Since we have assumed there are no inputs, ρ_{U_c^n}(g_i) + ρ_{U_b^n}(g_i) = 0, hence we may solve for n_i. In order to separate the divergent part of the dynamics from the convergent one, we split the sum accordingly. This step allows us to effectively track which trajectories are likely to hit the guard and when, since only the divergent element of the dynamics can increase the size of the reach tube in a given direction. This condition requires that the set of initial conditions is also inside the guard, which is a reasonable assumption.
Substituting in Eq. (34), we obtain an inequality that allows us to finally formulate the following iteration scheme for under-approximating n.
Proposition 1 An iterative under-approximation of the number of iterations n can be computed by starting with n_i = 1 and iterating, substituting n_i = n on the right-hand side, until we meet the inequality. (If n < 0 is obtained at the first iteration, then n_i = 0 is the obtained solution.) Proof Notice that the sequence n_i is monotonically increasing before it breaks the inequality. As such, any local minimum represents a sound under-approximation of the number of loop iterations. Note that in the case where γ_i ≤ 0, we must first translate the system coordinates so that γ_i > 0. This is simply done by replacing x′ = x + c and operating over the resulting system, where γ′_i = ρ_c(g_i) + γ_i. Mathematically this is achieved as follows: first we obtain c by finding the centre of the interval hull (see Eq. (20)) of G. 4 Next we transform the dynamics accordingly.
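The shape of this fixed-point iteration can be illustrated on a hypothetical scalar instance (the constants k_d, k_c, σ, λ below are made up; the paper's scheme operates on the decomposed terms of Eq. (36), with the convergent contribution frozen at the current iterate):

```python
import math

def underapprox_iterations(k_d, k_c, sigma, lam, gamma, max_iter=100):
    """Iteratively under-approximate the largest n such that
       k_d*sigma**n + k_c*lam**n <= gamma,
       with sigma > 1 (divergent part) and 0 < lam < 1 (convergent part).
       Sketch of the monotone scheme in Proposition 1 (scalar stand-in)."""
    n = 0
    for _ in range(max_iter):
        budget = gamma - k_c * lam ** n   # convergent part frozen at current n
        if budget <= k_d:                 # no divergent step fits: stop at n
            return max(n, 0)
        n_next = math.floor(math.log(budget / k_d, sigma))
        if n_next <= n:                   # fixed point: inequality is met
            return n
        n = n_next                        # monotonically increasing, sound
    return n

n = underapprox_iterations(1.0, 4.0, 1.1, 0.5, 20.0)
assert 1.0 * 1.1**n + 4.0 * 0.5**n <= 20.0  # sound: the bound holds at n
print(n)
```

Each iterate over-estimates the convergent term (λ^n decreases with n), so the remaining "budget" for the divergent term is under-estimated, keeping every step a sound under-approximation.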

Underestimating the Number of Iterations of a Model Without Inputs
In order to apply a similar technique to (29), we must find an equivalent under-approximation. In the case of Eq. (34), the quantities σ_j ensure that the equation diverges faster than the real dynamics, hence the estimate found is an upper bound on the desired iteration. In this case we want the opposite, hence we look for a model whose dynamics diverge more slowly. It is easy to show that λ_{bj} = |λ_j| represents these slower dynamics, where k⁻_{ij} = min(k_{ij} ρ_{X_cσ}(−(v_σ)_j), 0) and k⁺_{ij} = max(k_{ij} ρ_{X_cσ}(−(v_σ)_j), 0).
An additional consideration must be made regarding the rotational dynamics. In the previous case we did not care about the rotational alignment of the set X_n with respect to the vector g_i, because any rotation remains inside the envelope corresponding to the absolute value (r^k cos(kθ) ≤ r^k). In the case of an under-approximation, although the magnitude of a complex eigenvalue at a given iteration may be greater than the support of the guard under verification, its angle with respect to the normal of the support vector may cause the corresponding point to remain inside the guard. We must therefore find iterations that are aligned with the normal to the guard, thus ensuring that the chosen point is outside it. In order to do this, let us first fix the magnitudes of the powered eigenvalues. In the case of convergent dynamics we assume they have converged by a full rotation, to make our equation strictly divergent. Let θ = min{θ_j, j ∈ [1, …, p]}, where the θ_j are the angles of the complex conjugate eigenvalues, and let n_θ = ⌈2π/θ⌉ be the maximum number of iterations needed for any of the dynamics to complete a full turn. Then at any given turn |λ_j|^{n_i + n_θ} ≤ |λ_j|^{n_i + n}, where |λ_j| ≤ 1 and n ∈ [0, n_θ]. This means that any bound we find on the iterations will necessarily be smaller than the true value. Our problem then reduces to solving the corresponding equation; the second part of its solution is expected to be a positive value. When this is not the case, the dominating dynamics have a rotation θ_j ≥ π/2. In such cases, we must explicitly evaluate the set up to ⌈2π/θ_j⌉ + 1 iterations after n_i, in order to ensure that we have covered a full rotation. If the resulting bound does not satisfy the original inequality ρ_{X_0}(A^{n_i} g_i) ≥ γ_i, we replace n_i = n until it does. 5
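The need to scan an extra rotation can be seen on a single complex pair (all numbers below are our own illustration): the first iteration at which the magnitude alone exceeds the guard offset may still be rotated away from the guard normal, and an aligned iteration is found within the next n_θ steps.

```python
import math

r, theta = 1.05, 0.5     # complex pair: modulus and rotation angle per step
x0, gamma = 0.2, 1.0     # initial support and guard offset (made-up numbers)

def along(n):
    """Projection of the trajectory onto the guard normal at iteration n."""
    return x0 * r**n * math.cos(n * theta)

# Magnitude-only candidate: smallest n with x0 * r**n >= gamma. It ignores
# the phase, so the point can still be inside the guard at that iteration.
n_mag = math.ceil(math.log(gamma / x0, r))
assert along(n_mag) < gamma   # rotation keeps the point inside the guard

# Scan up to one extra full rotation (n_theta iterations) for a step that is
# actually aligned with the guard normal and hence provably outside.
n_theta = math.ceil(2 * math.pi / theta)
n_hit = next(n for n in range(n_mag, n_mag + n_theta + 1) if along(n) >= gamma)
print(n_mag, n_hit)
```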

Proposition 2 An iterative under-approximation of the number of iterations n can be computed by starting with n i = 0 and iterating over
where k is the result of Eq. (38). We substitute n_i = n on the right-hand side as long as the first inequality holds, and then find a k such that the second inequality holds.
Since we are explicitly verifying the inequality, there is no further proof required.

Estimating the Number of Iterations of a Model with Inputs
For an LTI model with inputs, we use the same paradigm explained in the previous section, after transforming the system with inputs into an over-approximating system without inputs. Let X_cσ, U_cσ be the corresponding sets of initial states and inputs obtained by applying Eq. (32) to X_0 and U_J, and let U_Jσ = (I − J_σ)^{−1} U_cσ. The resulting accelerated system may be represented by the corresponding equations. Let us now define (XU)_σ = {x − u | x ∈ X_cσ, u ∈ U_Jσ}, which allows us to translate the system into one with the same shape as the equations in the previous section. We may now apply the techniques described above to find the bounds on the iterations.

Narrowing the Estimation of the Number of Iterations
The estimates above can be conservative, but we may obtain tighter bounds on the number of iterations. In the first instance, note that we have eliminated all negative terms in the sums in Eq. (36). Reinstating these terms can result in a loss of monotonicity, but we may still create an iterative approach by fixing the negative value at intermediate stages. Let n_i be our existing bound for the time horizon before reaching a guard, and let k_{n_i} be the corresponding negative contribution, where n_k is the bound found in the previous stage. Some steps of this process will provide an unsound result; however, every second step provides a monotonically increasing sound bound which is tighter than the one in Eq. (36). Since the elements of the sums are convergent, n_i ≥ n_k implies k_{n_i} ≥ k_{n_k}, i.e., |k_{n_i}| ≤ |k_{n_k}|; thus log_σ(k_{n_i} + k_{n_k}) ≥ log_σ(k_{n_i} + k_{n_i}), which means that the n_k in Eq. (42) is smaller than the n in Eq. (36) (n_k ≤ n ≤ n_i).
In the case of Eq. (39), the explicit evaluation of the guard at each cycle ensures the behaviour described here.

Maintaining Geometric Multiplicity
A second step in optimising the number of iterations comes from adding granularity to the bounding abstraction by retaining the geometric multiplicity using the matrix J b (see Eq. (22)).

Lemma 3 Given a matrix
From here we recursively expand the formula for A^{n−j−1} v_{s,i−1} and obtain the following. Let i denote the position of f_b(λ_j) within the block J_bs to which it belongs, such that its corresponding generalised eigenvector is identified as v_bs,i = f_b(v_j). Then, in order to manage the product on the right-hand side, we use slightly different techniques for over- and under-approximations. For n̄_i we first find an upper bound using Eq. (36), and then perform a second iteration using (n_i − m), which ensures that the true value lies below the approximation. In the case of n̲_i, we also start with k_{ij} = k_{ij0} + k_{ijm} and update it during the iterative process.

Example 2
Let us look at the following example, comprising the matrices below. The progression of the support function of the reach sets along this vector, together with the corresponding bounds described in the previous section, is shown in Fig. 10.
Changing the eigenvalues to: we obtain the results in Fig. 11. In this second case we can see that the rotational dynamics force an increase of the initially calculated iteration to account for the effects of the rotation.

Case Study
We have selected a known benchmark from the literature to illustrate the discussed procedure: the room temperature control problem [28]. The temperature (variable temp) of a room is controlled via a user-defined input (set), which can be changed at any discrete time step through a heating (heat) element, and is affected by an ambient temperature (amb) that is outside the control of the model. We formalise the description of such a system both via a linear loop and via a dynamical model. Since such a system may be software-controlled, Algorithm 1 gives a pseudo-code fragment for the temperature control problem.
We use the read function to represent non-deterministic values between 0 and the maximum value given as its argument. Alternatively, this loop corresponds to a hybrid dynamical model. The discussed over-approximations of the reach sets indicate that the temperature variable intersects the guard set G at iteration n = 32. Considering the pseudo-eigenvalue matrix along these iterations, we use Eq. (13) to find that the corresponding complex pair remains within the given boundaries. The reach tube is calculated by multiplying these abstract matrices with the initial sets of states and inputs, as described in Eq. (25). The negative values represent the lack of restriction in the code on the lower side and correspond to a cooling system (negative heating). The set is displayed in Fig. 12, where for the sake of clarity only 8 of the 16 constraint directions are shown. This results in a rather tight over-approximation which, for comparison's sake, is no looser than the convex hull of all reach sets obtained by [31] using the given directions. Figure 12 further displays the initial set in black, the collection of reach sets in white, the convex hull of all reach sets in dark blue (as computed by [31]), and finally the abstractly accelerated set in light yellow (dashed lines). The outer lines represent the guards in G.
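A sketch of the loop structure in Algorithm 1 follows (the coefficients below are illustrative stand-ins, not those of the benchmark [28]); each randomly sampled trace is one concrete behaviour inside the reach tube that abstract acceleration over-approximates in a single step:

```python
import random

def read(max_value):
    """Non-deterministic input in [0, max_value], here sampled uniformly."""
    return random.uniform(0.0, max_value)

def simulate(steps=100, guard=30.0):
    """Guarded linear loop for the thermostat (illustrative coefficients)."""
    temp = 5.0
    trace = [temp]
    for _ in range(steps):
        set_point = read(25.0)                 # user-chosen set point
        amb = read(10.0)                       # ambient temperature
        heat = 0.5 * (set_point - temp)        # proportional heating element
        temp = 0.9 * temp + heat + 0.1 * amb   # linear update of the state
        if temp > guard:                       # spatial guard: loop exits
            break
        trace.append(temp)
    return trace

random.seed(0)
trace = simulate()
assert all(t <= 30.0 for t in trace)  # this sampled trace stays in the guard
```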

Application of Abstraction-Refinement to Abstract Acceleration
One of the main limitations of abstract acceleration is that, despite being very fast, it leverages an over-approximation of the actual reach tube for verification. In many cases this over-approximation can be too coarse (i.e., imprecise) to prove the safety property of interest. This section deals with methods for refining this over-approximation. Refinements are based on counterexamples, namely vertices of the abstract matrix that lie outside the projection of the safety specification onto the abstract space (calculated using the inverse of the reachability transformations). Our approach can be seen as an instance of the well-known CounterExample-Guided Abstraction Refinement (CEGAR) paradigm [20].

Finding Counterexample Iterations
Because the objective is to refine the abstract dynamics, we need to find the iterations corresponding to the counterexample (i.e., the ones used to calculate the hyperplanes forming the unsafe vertex). This allows us to find an interpolant iteration that will shrink the polyhedron in the right direction. Since the abstract dynamics are built over pairs of eigenvalues, different eigenvalue pairs may provide different results for the counterexample iteration, in which case all of them are used. Let a verification clause explore the solution in a direction v, where s is the corresponding support function and ρ_A(v) ≤ s the safety specification. If the computed support s′ satisfies s′ > s, the specification will not be met and we need a refinement. Let a_v ∈ A be the vertex at which the maximum is found, i.e., a_v · v = s′. The iterations corresponding to this counterexample may be found by analysing the dynamics of each pair of eigenvalues independently, and finding the point closest to the hyperplane whose inequality has been violated. This is done as follows:
1. Conjugate eigenvalues. Since the trajectories along these are circular and centred at the origin, we can find the angle that a_v forms with the axes of the eigenvalues and use it to calculate the right iteration. Let θ_i be the angle of the conjugate eigenvalue pair and θ_{a_v}(i) the angle formed by a_v in the ith plane (equivalent to tan^{−1}((a_v)_{i+1}/(a_v)_i)). The corresponding iteration depends on whether the eigenvalue is convergent or divergent: in the former case it is θ_{a_v}(i)/θ_i, while in the latter it is obtained by counting rotations backwards from the horizon n, using mod, the modulus operation over the reals.
2. Real eigenvalues. In the case of real eigenvalues, finding the iteration relies on the direct relation between the given eigenvalue and the target counterexample.
If the logarithm does not exist, then we presume we cannot further refine using this method.
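For a convergent conjugate pair, the angle-matching step of case 1 reduces to a one-line computation. A sketch (our own; a_v is assumed to be already expressed in the plane of the eigenvalue pair):

```python
import math

def candidate_iteration(a_v, theta_i):
    """Candidate loop iteration for a counterexample vertex a_v = (x, y)
       lying in the plane of a convergent conjugate pair with angle theta_i:
       the iteration whose accumulated rotation best matches a_v's angle."""
    theta_av = math.atan2(a_v[1], a_v[0]) % (2 * math.pi)
    return round(theta_av / theta_i)

# Sanity check: a point obtained by rotating (1, 0) through 7 steps of 0.4 rad
theta = 0.4
p = (math.cos(7 * theta), math.sin(7 * theta))
assert candidate_iteration(p, theta) == 7
```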

Jordan Blocks with Non-unitary Geometric Multiplicity
In the case of larger Jordan blocks we need to examine the nature of the dynamics. Let us look at the equation representing the contribution of a Jordan block to the support. In this case we must use an iterative approximation, as described in Sect. 7.4.1, to find the iteration closest to the unsafe guard. Although this process is more costly than the ones described above, it is also more precise, thus providing a much better refinement. Note that the technique can be applied to the full set of eigenvalues or to any subset of Jordan blocks; this choice is a compromise between precision and speed. We also note that when the refinement process is carried out in the eigenspace, the eigenvector matrix becomes the identity, which makes the problem more tractable.
Since the exclusion of an unsafe vertex from the abstract dynamics does not ensure a sufficiently tight over-approximation, we must perform this step iteratively until either we run out of new refinements or reach a user-defined timeout.
Once the candidate iterations are found, it suffices to add further constraints to the abstract matrix for these iterations as described in Fig. 13. Notice that given the above procedure, it is often faster and more beneficial to begin by performing the refinement over the complex eigenvalues by directly examining the vector of directions v in the corresponding sub-spaces.

Using Bounded Concrete Runs
Owing to the complex interactions in the dynamics, the above procedure may not always find the correct iterations for refinement, or at least not optimal ones. For this reason, a second method is proposed, which in most cases will be more efficient and precise when the dynamics are strictly convergent.
This second approach relies on the direct calculation of the initial k iterations. Since we operate over the eigenvalues and we limit k to a conservative bound, this is a relatively inexpensive calculation. The approach leverages the idea that, for convergent dynamics, counterexamples are often found in the initial runs. The first step is to directly calculate the trajectory of the counterexample for the first k iterations, and its corresponding support function in the direction of v. Once again, because this is a single point over a bounded time horizon, the operation is comparatively inexpensive. The second step consists of finding an upper bound for all subsequent iterations, which we can do by using the norms of the eigenvalues and the peaks of each geometric multiple of a Jordan block (which relate to these norms). By selecting the larger of these two supports, we ensure soundness over the infinite time horizon. Since the result is known to be an upper bound for the support in the direction of v, we can directly add it to the inequalities of A.
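The two steps can be sketched as follows (our own illustration on a single trajectory; the tail bound uses ‖A‖₂ ≤ 1, i.e., contractive dynamics, as a simple stand-in for the eigenvalue-norm argument in the text):

```python
import numpy as np

def refined_support(A, x0, v, k=20):
    """Support of the trajectory {A^n x0 : n >= 0} in direction v:
       the first k iterations are evaluated exactly (step 1), and the tail
       n >= k is bounded via norms (step 2), sound for ||A||_2 <= 1."""
    x, head = x0.copy(), -np.inf
    for _ in range(k):
        head = max(head, float(v @ x))   # exact support over the first k steps
        x = A @ x
    # Tail: |v . A^n x0| <= ||v|| * ||A||^(n-k) * ||A^k x0|| <= ||v|| * ||A^k x0||
    # for all n >= k whenever the spectral norm of A is at most 1.
    tail = float(np.linalg.norm(v) * np.linalg.norm(x))
    return max(head, tail)

A = np.array([[0.8, -0.3],
              [0.3,  0.8]])              # contractive rotation: ||A||_2 < 1
x0 = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
bound = refined_support(A, x0, v)
print(bound)
```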

Case Study
We have taken an industrial benchmark, 'CAFF problem Instance: Building'. 7 The benchmark consists of a continuous model with 48 state variables and one input, together with an initial state corresponding to a 10-dimensional hyper-rectangle. Time is discretised using a 5 ms sample interval to give a discrete-time model for verification. Notice that the choice of sample time has very little effect on abstract acceleration: it mainly affects the required floating-point precision (as very small angles may require higher precision), and may have an effect on counterexample generation, which can either decrease precision or increase runtime (i.e., we may relax the precision of the algorithm to gain speed, or tighten it at the cost of higher computation times). The provided model requires an analysis of the 25th variable, with a safety specification requiring it to remain below 0.005. The problem has been verified using SpaceEx 8 for bounded time (20 s) in under three seconds. Axelerator was run on this benchmark using different parameters, on an Intel 2.6 GHz i7 processor with 8 GB of RAM running Linux. Although the algorithm lends itself to concurrency, the tool currently supports only single threading. The process itself uses 82 MB on this particular benchmark. The results are summarised in Table 1. Since many tools in this area use unsound arithmetic, we present results for both sound and unsound arithmetic using abstract acceleration, to quantify the cost of soundness. It is worth noting that for precisions under 1024 bits the tool returns soundness violation errors when using sound arithmetic. We note in these results that the performance of Axelerator depends largely on the required level of refinement. State-of-the-art tools can do bounded model checking faster than Axelerator, largely owing to implementation optimisations (we expect a better simplex engine would make us more competitive in this regard).
This advantage disappears as soon as we require a larger time horizon for verification.

Experimental Results
The overall Abstract Acceleration procedure has been implemented in C++ using the Eigen algebra package (v3.2) with multiple-precision floating-point arithmetic, and has been tested on a 1.6 GHz Core 2 Duo computer. Unless otherwise specified, we use the sound version of Abstract Acceleration without abstraction refinement (as per Sect. 8). The tool, called Axelerator, and the corresponding benchmarks used to test it (including specifications of the initial states, input ranges and guard sets) are available at http://www.cprover.org/LTI/. Since many tools require the specification of a set of directions in which to perform verification, we have chosen to use an octahedral template set, that is, the set of vectors that run either along an axis or at a 45-degree angle between two axes (equivalent to a set of inequalities ±x_i ≤ bound_k or ±x_i ± x_j ≤ bound_k). We also make use of interval hulls, defined as the smallest hyper-boxes that enclose a set (equivalent to ±x_i ≤ bound_k).
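For instance, the octahedral template in R^p consists of the 2p axis directions plus 4·C(p,2) diagonal directions, 2p² directions in total. A small generator (our own sketch):

```python
import itertools

def octahedral_directions(p):
    """Octahedral template for R^p: +/- e_i together with the
       +/- e_i +/- e_j diagonals (the 45-degree directions)."""
    dirs = []
    for i in range(p):                       # axis directions +/- e_i
        for s in (1.0, -1.0):
            d = [0.0] * p
            d[i] = s
            dirs.append(tuple(d))
    for i, j in itertools.combinations(range(p), 2):   # diagonal directions
        for si, sj in itertools.product((1.0, -1.0), repeat=2):
            d = [0.0] * p
            d[i], d[j] = si, sj
            dirs.append(tuple(d))
    return dirs

# In R^2: 4 axis directions + 4 diagonals = 8 = 2 * 2**2 template directions.
assert len(octahedral_directions(2)) == 8
assert len(octahedral_directions(3)) == 18   # 2 * 3**2 in R^3
```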

Comparison with Other Unbounded-Time Approaches
In a first experiment we have benchmarked our implementation against the tools InterProc [43] and Sting [22]. We have tested these tools on different discrete-time models, including guarded/unguarded ones, stable/unstable ones, and models with complex/real loops with inputs (details in Table 2). In the first instance, we can see that Axelerator is more successful in finding tight over-approximations that remain very close to the actual reach set. InterProc and Sting are each unable to find nearly 40% of the bounds (i.e., supports), meaning they report an infinite reach-space in the corresponding directions. In these instances, InterProc (owing to limitations related to widening) and Sting (owing to the absence of tight polyhedral, inductive invariants) are unable to infer finite bounds at all. The precision provided by our implementation comes at a reasonable price in terms of computational effort: Axelerator is approximately 10 times slower than InterProc, and in most cases faster than Sting. Table 3 presents a comparison of our implementation, at different levels of precision (long double, 256-bit, and 1024-bit floating-point arithmetic), with the core procedure for abstract acceleration of linear loops without inputs (J) [44] (we use input-free versions of the parabola, cubic and convoy models). This shows that our implementation gives tighter over-approximations on most benchmarks (column 'improved'). While on a limited number of instances the current implementation is less precise (the lower-right portion of Fig. 4, where the dotted red line crosses into the inside of our support, gives a hint as to why this happens), the overall increased precision results from mitigating the limitation on chosen directions caused by the use of logahedral abstractions (as done in previous work [44]).
At the same time, our implementation is faster than [44], partly owing to the use of numeric eigendecomposition (as opposed to symbolic), but mostly in view of the cost of increasingly large rational representations in [44] needed to maintain precision when evaluating higher order systems. The fact that many bounds have improved with the new approach, while speed has increased by several orders of magnitude, provides evidence of the advantages of the new approach.
The speed-up is due to the faster Jordan form computation, which takes between 2 and 65 seconds for [44] (using the ATLAS package), whereas our implementation requires at most one second. For the last two benchmarks, the polyhedral computations blow up in [44], whereas our support-function approach shows only moderately increasing runtimes. The tables also report the number of bounds newly detected by Axelerator (mpfr) over the existing tools (IProc, Sting); Axelerator detects all bounds, therefore there are no lost bounds with respect to the existing tools.
Here IProc is [43], Sting is [22], and mpfr is Axelerator (this work) using a 256-bit mantissa; e is the accumulated error of the dynamical system, and jcf the time taken to compute the Jordan form. The increase in speed is due to multiple factors, as detailed in Table 4. The difference in precision between long double and multiple-precision floating-point arithmetic is negligible, as all results in the given examples match exactly to 9 decimal places.

Comparison with Bounded-Time Approaches
In a third experiment, we compare our method with the LGG algorithm [51] used by SpaceEx [31]. Since LGG provides the tightest supports for a given set of directions, this study indicates the quality of our over-approximation. In order to set up a fair comparison, we have implemented the native algorithm of [51] within our software. We have run both methods on the convoyCar example [44] with inputs, which presents an unguarded, scalable (the dimension can be increased by adding more cars), stable loop with complex dynamics, and we have focused on octahedral templates. For convex reach tubes, the approximations computed by Abstract Acceleration are reasonably tight in comparison to those computed by the LGG algorithm (see Table 5). However, by storing finite disjunctions of convex polyhedra, the LGG algorithm is able to generate non-convex reach tubes, which are arguably tighter in the case of oscillating or spiralling dynamics. Still, in many applications abstract acceleration can provide a tight over-approximation of the convex hull of (possibly non-convex) reach tubes. Table 5 provides the quantitative outcomes of this comparison. For simplicity, we present only the projection of the bounds along the variables of interest on a lower-dimensional model. As expected, the LGG algorithm performs better in terms of tightness; however, unlike our approach, its runtime distinctly increases with the number of iterations. Our implementation of LGG [51] using convex polyhedra with octahedral templates becomes slower than the abstractly accelerated version even for small time horizons (it requires ∼4 ms for each iteration on a 6-dimensional problem with octahedral templates). Faster implementations of LGG would change the point at which the speed trade-off happens, but abstract acceleration will always be faster for long enough time horizons.
The evident advantage of abstract acceleration is its speed over finite horizons without much loss in precision, and of course the ability to prove properties over unbounded-time horizons. Table 5 reports the comparison on the convoyCar2 benchmark [44] between this work and the LGG algorithm from [51], and Table 6 the comparison on the same benchmark between this work and the approach in [49]. As can be seen from Table 6, our approach is not only faster but much more precise. The reasons for this are many-fold. In terms of speed, the fact that [49] uses a dynamical model that is double the dimension of the one presented here, and that the algorithmic complexity of manipulating it is O(n³) (n being the model dimension), results in slower computations, even when using sparse matrices.

Comparison with Alternative Abstract Acceleration Techniques
In terms of precision, creating an over-approximation around the centre of the input set, as opposed to the origin, makes a considerable difference, which is increased by the fact that circular over-approximations are contained within the interval hulls used in [49], and are therefore up to (4/3)^n times smaller in volume. This last consideration is only relevant for rotating dynamics, whereas positive real eigenvalues exhibit the same precision in both approaches. Finally, notice that in the convoyCar2 example, where all original eigenvalues are convergent, the interval hull approach [49] creates one divergent eigenvalue, which causes a critical change in the behaviour of the dynamics that leads to unbounded results.
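The intuition behind the volume gap can be checked with a short calculation. The sketch below, with our own helper name, computes the exact ratio between the volume of an n-cube and that of its inscribed n-ball; it illustrates that box-shaped (interval-hull) enclosures of rotating dynamics are looser than ball-shaped ones, and that the gap grows exponentially with dimension (it is not the specific bound claimed above, only the underlying geometric fact).

```python
import math

def box_to_ball_volume_ratio(n, r=1.0):
    """Volume of the n-cube of half-width r divided by the volume of the
    inscribed n-ball of radius r:
    (2r)^n / (pi^(n/2) r^n / Gamma(n/2 + 1))."""
    ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n
    return (2 * r) ** n / ball
```

In two dimensions the ratio is 4/π ≈ 1.27, and it increases strictly with n, which is why interval hulls become progressively more pessimistic for higher-dimensional rotating dynamics.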

Scalability
Finally, in terms of scalability, we have an expected O(p^3) complexity with respect to the number of variables p, which derives from the simplex algorithm and the matrix multiplications in Eq. (25). We have parameterised the number of cars in the convoyCar example [44] (also seen in Table 3), and experimented with up to 33 cars (each car after the first requires three state variables, so that for example we obtain (33 − 1) × 3 = 96 variables for the 33-car case), using the same initial states for all cars. We report an average obtained from 10 runs for each configuration. These results demonstrate that our method can scale to industrial-size problems (Table 7).
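The scaling model above can be made explicit with a back-of-envelope predictor. The fragment below (our own helper names, not part of the tool) encodes the variable count of the convoyCar benchmark and the runtime ratio predicted by the O(p^3) model relative to a base configuration:

```python
def convoy_vars(cars):
    """State variables in the convoyCar benchmark: each car after the
    first contributes three state variables."""
    return (cars - 1) * 3

def relative_cost(cars, base_cars=2):
    """Runtime ratio predicted by the O(p^3) model, relative to the
    base configuration (default: the 2-car case with 3 variables)."""
    return (convoy_vars(cars) / convoy_vars(base_cars)) ** 3
```

For the 33-car case this gives 96 variables and a predicted slowdown of 32^3 = 32768 over the 2-car case, which is the kind of growth the averaged timings in Table 7 should be read against.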

Related Work
There are numerous approaches that tackle the safety verification problem for dynamical models. Time-bounded analysis is unsound for unbounded-time safety, since it cannot draw conclusions beyond its horizon (a proof that a given horizon is a completeness threshold [21] would restore such soundness, but many tools do not attempt to find such a proof). Unbounded-time solutions are therefore preferred when soundness is required, although they are often either less precise or slower than bounded counterparts.

Time-Bounded Reachability Analysis
The first approach surrenders exhaustive analysis over the infinite time horizon, and restricts the exploration of model dynamics up to some given finite time bound. Decision procedures for bounded-time reachability problems have made much progress in the past decade, and computational algorithms with related software tools have been developed. Representatives are STRONG [25], HySon [12], CORA [2], HYLAA [5], Ariadne [7] and SpaceEx [31]. Set-based simulation methods generalise guaranteed integration of ODEs [11,53] from enclosing intervals to relational domains. They use precise abstractions with low computational cost to over-approximate sets of reachable states up to a given time horizon. Early tools employed polyhedral sets (HyTech [41] and PHAVer [30]), polyhedral flow-pipes [18], ellipsoids [10] and zonotopes [35]. A breakthrough has been achieved by [36,51], with the representation of convex sets using template polyhedra and support functions. This method is implemented in the tool SpaceEx [31], which can handle dynamical systems with hundreds of variables. It performs computations using floating-point numbers, which is a deliberate choice to boost performance. The downside of this choice is that its implementation is numerically unsound and therefore does not provide genuine formal guarantees, which are at the core of this work. More generally, most tools using eigendecomposition over a large number of variables (more than 10) happen to be numerically unsound owing to the use of unchecked floating-point arithmetic. Some tools (e.g., C2E2 [26]) add a robustness check to the reachability computations in order to restore soundness by over-approximating simulation-based reachability analysis. Another breakthrough in performance has been achieved by HYLAA [5], which is the first tool to solve high-order problems, including several with hundreds of state variables.
[7,17] deal with non-linear dynamics (which are beyond the scope of this work), and the latter does so soundly. Other approaches focus on bounded model checking of hybrid automata, and use specialised constraint solvers (iSAT [27], HySAT [29]), or SMT encodings [19,39].

Unbounded Reachability Analysis
The second approach, epitomised by static analysis methods [40], explores unbounded-time horizons. It employs conservative over-approximations to achieve relative completeness (this ensures that we always find a fixpoint) and decidability over infinite time horizons.
Unbounded techniques attempt to infer or compute a loop invariant, i.e., an inductive set that includes all reachable states. If the computed invariant is disjoint from the set of bad states, this proves that the latter are unreachable and hence that the loop is safe. However, analysers frequently struggle to obtain an invariant that is precise enough, with acceptable computational costs. The problem is evidently exacerbated by non-determinism in the loop, which corresponds to the case of open systems (i.e., with inputs). Prominent representatives of this analysis approach include Passel [45], Sting [22] and abstract interpreters such as Astrée [8] and InterProc [43].
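The invariant-based argument sketched above can be illustrated in a few lines. The fragment below, with our own helper name, checks the two obligations for the simplest possible setting: a box candidate invariant for diagonal dynamics (so that the image of a box is again a box). If the box maps into itself and is disjoint from the bad box, the bad states are unreachable from any initial set contained in it; this is a toy instance of the check, not the domain an actual analyser would use.

```python
import numpy as np

def is_inductive_box_invariant(A, lo, hi, bad_lo, bad_hi):
    """For diagonal A, check that the box [lo, hi] is inductive
    (A maps it into itself) and disjoint from the bad box
    [bad_lo, bad_hi]; together these prove safety for all time."""
    lam = np.diag(A)
    # Image of the box under x -> A x: elementwise min/max of the
    # scaled corner coordinates (handles negative eigenvalues).
    img = np.sort(np.stack([lam * lo, lam * hi]), axis=0)
    inductive = bool(np.all(img[0] >= lo) and np.all(img[1] <= hi))
    # Two boxes are disjoint iff they are separated in some dimension.
    disjoint = bool(np.any(bad_hi < lo) or np.any(bad_lo > hi))
    return inductive and disjoint
```

The difficulty the text alludes to is precisely that of finding such an invariant automatically, tightly enough to separate it from the bad states, in the presence of inputs.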
Early work in this area has used implementations of abstract interpretation and widening [23], which still represent the underpinnings of many modern tools. The work in [40] uses abstract interpretation with convex polyhedra over piecewise differential inclusions. A more recent approach uses a CEGAR loop to improve the precision of polyhedral abstractions [9] following an inductive sequence of half-space interpolants. Dang and Gawlitza [24] employ optimisation-based techniques (max-strategy iteration) with linear templates for hybrid systems with linear dynamics. Relational abstractions [56] use ad-hoc "loop summarisation" of flow relations, while abstract acceleration focuses on linear relations analysis [37,38], which is common in program analysis.

Abstract Acceleration
Abstract acceleration [37,38,44] captures the effect of an arbitrary number of loop iterations with a single, non-iterative transfer function that is applied to the entry state of the loop (i.e., to the set of initial conditions of the linear dynamics). Abstract acceleration has been extended from its original version to encompass inputs over reactive systems [58] but restricted to subclasses of linear loops, and later to general linear loops but without inputs [44].
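The defining idea, replacing the loop over k by a single transfer function, can be shown in a deliberately minimal setting. The sketch below (our own helper name) assumes diagonal dynamics with real eigenvalues in (0, 1] and a box of initial states: the set of multipliers {λ^k : k ≥ 0} lies in [0, 1], so one interval multiplication over-approximates the union of all iterates at once. The full method of course goes much further, handling complex eigenvalues, Jordan blocks, inputs and guards.

```python
import numpy as np

def accelerate_stable_diag(A, lo, hi):
    """Abstract acceleration sketch for x_{k+1} = A x_k with diagonal A,
    eigenvalues in (0, 1], and box initial set [lo, hi]: bound the union
    of ALL iterates A^k [lo, hi] with no loop over k, by applying the
    multiplier interval [0, 1] to each coordinate interval."""
    lam = np.diag(A)
    assert np.all((lam > 0) & (lam <= 1)), "sketch assumes stable dynamics"
    lo_out = np.minimum(0.0, lo)   # interval product [0,1] * [lo, hi]
    hi_out = np.maximum(0.0, hi)
    return lo_out, hi_out
```

For A = diag(0.5, 0.9) and the initial box [1, 3] × [−2, 1], the accelerated enclosure [0, 3] × [−2, 1] contains every iterate, obtained in constant time rather than by stepping through the loop.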
The work presented in this article mitigates these limitations by presenting abstract acceleration for general linear loops with inputs [14,15], developing numeric techniques for scalability and extending the domain to continuous time models.
The work in [50] puts forward improvements on [14], and its outcomes can be compared with the results in [15,16]. Indeed, [50] proposes an alternative approach to Abstract Acceleration with inputs, which relies on the expansion of the dynamical equations to a model that is four times the original size (dimension). Although this model is mathematically more tractable, it is slower and less precise than the one presented in [15]. The reason for this is that while the latter uses a semi-circular over-approximation of the variable inputs, the former uses a rectangular over-approximation of the dynamics, which is in turn expanded by the acceleration. In particular, this rectangular over-approximation of the dynamics may cause a convergent model to become divergent, which leads to unbounded supports. There are cases in which the rectangular over-approximation can give a tighter result, especially if it is implemented using the symmetry properties described in [15], thus a combined approach may be more efficient. Despite the computational overhead, the handling of the guards presented in [50] has potential advantages over our approach: its treatment is formally complete, whereas our procedure is in general not complete, occasionally failing to find reasonably precise bounds. However, in the typical case our procedure works well, and in the experiments we have run it has always found the optimal solution.

Conclusions and Future Work
We have presented a thorough development of the Abstract Acceleration paradigm for guarded LTI models (namely, conditional linear loops) with inputs. We have extended existing work, which dealt only with autonomous models (i.e., models without inputs). We have shown that the new approach outperforms state-of-the-art tools for unbounded-time reachability analysis in both precision and scalability. In particular, on one hand we claim full soundness (both algorithmic and numerical) of the proposed procedure and of its implementation. On the other hand, the new approach is capable of handling general unbounded-time safety analysis for large-scale open models.
Nested loops are outside of the scope of this contribution, but represent relevant models that ought to be considered. Further work is also needed to extend the approach to non-linear dynamics, which we believe can be explored via hybridisation techniques [3]. We also plan to formalise the framework for general hybrid models, namely dynamical models with multiple discrete locations and location-dependent dynamics under multiple guards.