Incrementally Closing Octagons

The octagon abstract domain is a widely used numeric abstract domain expressing relational information between variables whilst being both computationally efficient and simple to implement. Each element of the domain is a system of constraints where each constraint takes the restricted form $\pm x_i \pm x_j \leq d$. A key family of operations for the octagon domain are closure algorithms, which check satisfiability and provide a normal form for octagonal constraint systems. We present new quadratic incremental algorithms for closure, strong closure and integer closure and proofs of their correctness. We highlight the benefits and measure the performance of these new algorithms.


Introduction
The view that simplicity is a virtue in competing scientific theories and that, other things being equal, simpler theories should be preferred to more complex ones, is widely advocated by scientists and engineers. Preferences for simpler theories are thought to have played a role in many episodes in science, and the field of abstract domain design is no exception. Abstract domains that have enduring appeal are typically those that are conceptually simple. Of all the weakly relational domains, for example, octagons [18] are arguably the most popular. One might claim that octagons are more elegant than, say, the two variable per inequality (TVPI) domain [27], and certainly they are easier to understand and implement. Yet one useful operation for this popular domain has remained elusive: incrementally updating an octagonal constraint system.
Inequalities in the octagon domain take the restricted form $\pm x_i \pm x_j \leq c$, where $x_i$ and $x_j$ are variables and $c$ is a numerical constant. Difference bound matrices (DBMs) can be adapted to represent systems of octagonal constraints, for which a key family of operations is closure. Closure, in its various guises, provides normal forms for DBMs, allowing satisfiability to be observed and equality to be checked. Closure also underpins operations such as join and projection (the forget operator), hence the concept of closure is central to the design of the whole domain. Closure uses shortest path algorithms, such as Floyd-Warshall [10,30], to check for satisfiability. However, octagons can encode unary constraints, which require a stronger notion of closure, known as strong closure, to derive a normal form. Moreover, a refinement to strong closure, called integer closure, is required to detect whether an octagon has an integral solution.
A frequent use-case in program analysis is adding a single new octagonal constraint to a closed DBM and then closing the augmented system. This is incremental closure. Incremental closure not only arises when an octagon for one line is adjusted to obtain an octagon for the next: incremental closure also occurs in integer wrapping [26] which involves repeatedly partitioning a space into two (by adding a single constraint), closing and then performing translation. Incremental closure also appears to be useful in access-based localisation [21], which analyses each procedure using abstractions defined over only those variables it accesses. One way to adapt localisation to octagons [5] is to introduce fresh variables, called anchors, that maintain the relationships which hold when a procedure is entered. One anchor is introduced for each variable that is accessed within the procedure. The body of the callee is analysed to capture how a variable changes relative to its anchor, and then this change is propagated into the caller. The abstraction of the callee is amalgamated with that of the caller by replacing the variables in the caller abstraction with their anchors, imposing the constraints from the callee abstraction, and then eliminating the anchors. If there are only a few non-redundant constraints in the callee [2] then incremental closure would be attractive for combining caller and callee abstractions.
In SMT solving, difference logic [20] is widely supported, suggesting that an incremental solver for the theory of octagons [24] would also be useful. Further afield, in constraint solving, relational and mixed integer-real abstract domains show promise for enhancing constraint solvers [22], and octagons have been deployed for solving continuous constraints [23]. In this context, a split operator is used to divide the solution space into two sub-spaces by adding opposing constraints such as $x_i - x_j \leq c$ and $x_j - x_i \leq -c$. Splitting is repeatedly applied until a set of octagons is derived that covers the entire solution space, within a given precision tolerance. Propagation is applied after every split, which suggests using incremental closure, with a scheme in which incremental closure is applied whenever a propagator updates a variable.
Closing an augmented DBM is less general than closing an arbitrary DBM, and therefore one would expect incremental closure to be both efficient and conceptually simple. However, the running time of the algorithm originally proposed for incremental closure [17, Section 4.3.4] is cubic in the number of variables (see Section 4.1 for an explanation of the impact of checks). A quadratic algorithm has been proposed [8], but that algorithm missed a form of propagation and therefore did not always close the DBM (see Section 4.2 for a discussion).
The algorithms presented in this paper stem from the desire to provide simple correctness proofs. The act of restructuring and simplifying the proofs for [8] exposed a degenerate form of propagation and suggested new algorithmic solutions. These new algorithms yield three different points in the design-space continuum: ranging from a short incremental closure algorithm that performs strengthening as a separate post-processing step, to a longer queue-based algorithm that performs strengthening on-the-fly. All three algorithms significantly outperform the incremental algorithm of Miné [17, Section 4.3.4], whilst entirely recovering closure.

Contributions
We summarise the contributions of our work as follows:
- Using new insights, we present new incremental algorithms for closure, strong closure and integer closure (Section 4, Section 5 and Section 6 respectively).
- We prove our algorithms correct and show how proofs for existing closure algorithms can be simplified.
- We give detailed proofs for in-place versions of our algorithms (Section 7).
- We implement these new algorithms, which show significant performance improvements over existing closure algorithms (Section 8).
The paper is structured as follows: Section 2 contextualises this study and Section 3 provides the necessary preliminaries. Section 4 critiques the incremental algorithm of Miné and introduces a new incremental quadratic algorithm. Section 5 shows how to recover strong closure incrementally and do so, again, in a single DBM pass. Section 6 explains how to extend incrementality to integer closure. Section 7 suggests various optimisations to the incremental algorithms, including in-place update. Experimental results are presented in Section 8 and Section 9 concludes.

Related Work
Since the thesis of Miné [17] and his subsequent magnum opus [18], algorithms for manipulating octagons, and even their representations, have continued to evolve. Early improvements showed how strengthening, the act of combining pairs of unary octagon constraints to improve binary octagon constraints, need not be applied repeatedly, but instead can be left to a single post-processing step [2]. This result, which was justified by an inventive correctness argument, led to a significant performance improvement of approximately 20% [2]. Showing that integer octagonal constraints admit polynomial satisfiability represented another significant advance [1], especially since dropping either the two variable or the unary coefficient property makes the problem NP-complete [15]. Octagonal representations have come under recent scrutiny [14, Chapter 8]. In Coq, it is natural to realise DBMs as a map from pairs of indices (represented as bit sequences) to matrix entries. Look-up becomes logarithmic in the dimension of the DBM, but the DBM itself can be sparse. Strengthening, which combines bounds on different variables, can populate a DBM with entries for binary constraints. Dropping strengthening thus improves sparsity, albeit at the cost of sacrificing a canonical representation. Join precision can be recovered by combining bounds during the join itself, in effect, strengthening on-the-fly. Quite independently, sparse representations have recently been developed for differences [11]. Further afield, $O(mn)$ decision procedures have been proposed for unit two variable per inequality (UTVPI) constraints [16], where $m$ and $n$ are the number of constraints and variables respectively. Subsequently an incremental version was proposed for UTVPI [25] with time complexity $O(m + n\log(n) + p)$, where $p$ is the number of constraints tightened by the additional inequality.
Certifying algorithms have also been devised for UTVPI constraints [29], supported by a graphical representation of these constraints, which aids the extraction of a certificate for validating unsatisfiability. DBMs, however, offer additional support for other operations that arise in program analysis, such as join and projection. Moreover, there is no reason why each DBM entry could not be augmented with a pair of row and column coordinates which records how it was updated, allowing a proof of unsatisfiability to be extracted from a negative diagonal entry.
Recent work [28] has proposed factoring octagons into independent sub-systems, which reduces the size of the DBM. Domain operations are applied point-wise to the independent sub-matrices of the DBM, echoing [12]. The work also shows how the regular access patterns of DBMs enable vectorisation, the step beyond which is harnessing general purpose GPUs [3]. Packs [7] have also been proposed as a factoring device in which the set of program variables is covered by sets of variables called packs (or clusters). An octagon is computed for each pack to abstract the DBM as a set of low-dimensional DBMs. Recent work has explored how packs can be introduced automatically using pre-analysis and machine learning [13].
The alternative to simplifying the DBM representation is to assume that the DBM satisfies some prerequisites, so that a domain operation need not be applied in full generality. Miné [17] showed that an incremental version of closure can be derived by observing that a new constraint is independent of the first c variables of the DBM. This paper stems from an earlier work [8] that adapts an incremental algorithm for disjunctive spatial constraints [4] to DBMs. The present work was motivated by the desire to augment [8] with conceptually simple correctness proofs; constructing these proofs revealed a deficiency in [8], which prompted a more thorough study of incrementality.

Preliminaries
This section serves as a self-contained introduction to the definitions and concepts required in subsequent sections. For more details, we invite the reader to consult both the seminal [17,18] and subsequent [2] works on the octagon abstract domain.

The Octagon Domain and its Representation
An octagonal constraint is a two variable inequality of the form $\pm x_i \pm x_j \leq d$, where $x_i$ and $x_j$ are variables and $d$ is a constant. An octagon is a set of points satisfying a system of octagonal constraints. The octagon domain is the set of all octagons that can be defined over the variables $x_0, \ldots, x_{n-1}$.
Implementations of the octagon domain reuse the machinery developed for solving difference constraints of the form $x_i - x_j \leq d$. Miné [18] showed how to translate octagonal constraints to difference constraints over an extended set of variables $x'_0, \ldots, x'_{2n-1}$, where $x'_{2i}$ represents $x_i$ and $x'_{2i+1}$ represents $-x_i$; a single octagonal constraint translates into a conjunction of one or more difference constraints. A common representation for difference constraints is a difference bound matrix (DBM), which is a square matrix of dimension $n \times n$, where $n$ is the number of variables in the difference system. The entry $d = m_{i,j}$ represents the constant $d$ of the inequality $x_i - x_j \leq d$, where the indices $i, j \in \{0, \ldots, n-1\}$. An octagonal constraint system over $n$ variables translates to a difference constraint system over $2n$ variables, hence a DBM representing an octagon has dimension $2n \times 2n$.
Example 1 Figure 1 serves as an example of how an octagon translates to a system of differences. The entries of the upper DBM correspond to the constants in the difference constraints. Note how differences which are (syntactically) absent from the system lead to entries which take the symbolic value $\infty$. Observe too that the DBM is an adjacency matrix for the illustrated graph, where the weight of a directed edge abuts its arrow.
The interpretation of a DBM representing an octagon is different to that of a DBM representing difference constraints. Consequently there are two concretisations for DBMs: one for interpreting differences and another for interpreting octagons, although the latter is defined in terms of the former: Definition 3.1 (Concretisation for rational ($\mathbb{Q}^n$) solutions) $\gamma_{\mathrm{diff}}(\mathbf{m}) = \{\langle v_0, \ldots, v_{2n-1} \rangle \in \mathbb{Q}^{2n} \mid \forall i, j.\ v_i - v_j \leq m_{i,j}\}$ and $\gamma_{\mathrm{oct}}(\mathbf{m}) = \{\langle v_0, \ldots, v_{n-1} \rangle \in \mathbb{Q}^{n} \mid \langle v_0, -v_0, \ldots, v_{n-1}, -v_{n-1} \rangle \in \gamma_{\mathrm{diff}}(\mathbf{m})\}$, where the concretisation for integer ($\mathbb{Z}^n$) solutions can be defined analogously. Example 2 Since octagonal inequalities are modelled as two related differences, the upper DBM contains duplicated entries, for instance, $m_{1,2} = m_{3,0}$.
Operations on a DBM representing an octagon must maintain equality between the two entries that share the same constant of an octagonal inequality. This requirement leads to the definition of coherence: Definition 3.2 (Coherence) A DBM $\mathbf{m}$ is coherent iff $\forall i, j.\ m_{i,j} = m_{\bar{\jmath},\bar{\imath}}$, where $\bar{\imath} = i + 1$ if $i$ is even and $i - 1$ otherwise. Example 3 For the upper DBM observe $m_{0,3} = 6 = m_{2,1} = m_{\bar{3},\bar{0}}$. Coherence holds in a degenerate way for unary inequalities; note $m_{2,3} = 4 = m_{2,3} = m_{\bar{3},\bar{2}}$.
The bar operation can be realised without a branch using $\bar{\imath} = i \;\mathrm{xor}\; 1$ [17, Section 4.2.2]. Care should be taken to preserve coherence when manipulating DBMs, either by carefully designing algorithms or by using a data structure that enforces coherence [17, Section 4.5]. For clarity, we abstract away from the question of how to represent a DBM by presenting all algorithms for square matrices, rather than the triangular matrices introduced in [17, Section 4.5]. One final property is necessary for satisfiability: consistency, which requires every diagonal entry of the DBM to be non-negative.
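To make the translation and the coherence invariant concrete, the following Python sketch (function names are our own, not taken from any implementation accompanying the paper) builds a $2n \times 2n$ DBM and records an octagonal constraint $s_i x_i + s_j x_j \leq d$ as the two coherent difference entries it induces, using the convention that $x'_{2i}$ encodes $+x_i$ and $x'_{2i+1}$ encodes $-x_i$:

```python
INF = float('inf')

def bar(i):
    # branch-free partner index: x'_{2k} <-> x'_{2k+1}
    return i ^ 1

def new_dbm(n):
    # 2n x 2n matrix: +inf means "no constraint", 0 on the diagonal
    m = [[INF] * (2 * n) for _ in range(2 * n)]
    for i in range(2 * n):
        m[i][i] = 0
    return m

def add_octagonal(m, si, i, sj, j, d):
    """Record si*x_i + sj*x_j <= d (si, sj in {+1,-1}, i != j) as the
    two coherent difference entries it induces."""
    p = 2 * i + (0 if si > 0 else 1)   # row encodes the si*x_i term
    q = 2 * j + (1 if sj > 0 else 0)   # column encodes the sj*x_j term
    m[p][q] = min(m[p][q], d)
    m[bar(q)][bar(p)] = min(m[bar(q)][bar(p)], d)  # preserve coherence
```

With this encoding, adding $x_0 + x_1 \leq 6$ to a 2-variable DBM sets $m_{0,3}$ and its coherent twin $m_{2,1}$ to 6, matching the entries discussed in Example 3.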

Definitions of Closure
Closure properties define canonical representations of DBMs, and can decide satisfiability and support operations such as join and projection. Bellman [6] showed that the satisfiability of a difference system can be decided using shortest path algorithms on a graph representing the differences. If the graph contains a negative cycle (a cycle whose edge weights sum to a negative value) then the difference system is unsatisfiable. The same applies to DBMs representing octagons. Closure propagates all the implicit (entailed) constraints in a system, leaving each entry in the DBM with the sharpest possible constraint entailed between the variables. Closure is formally defined as follows: a DBM $\mathbf{m}$ is closed iff $\forall i.\ m_{i,i} = 0$ and $\forall i, j, k.\ m_{i,j} \leq m_{i,k} + m_{k,j}$. Example 4 The top right DBM in Figure 1 is not closed. Running an all-pairs shortest path algorithm yields a closed DBM whose diagonal entries are non-negative, implying that the constraint system is satisfiable. Running shortest path closure algorithms propagates all constraints and makes explicit all constraints implied by the original system. Once satisfiability has been established, we can set the diagonal values to zero to satisfy the definition of closure.
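The closure computation and the subsequent consistency check can be sketched in Python as a direct Floyd-Warshall pass over the matrix (names are ours):

```python
def close(m):
    # Floyd-Warshall: propagate all entailed constraints so that each
    # entry holds a shortest-path length, i.e. the sharpest constant
    N = len(m)
    for k in range(N):
        for i in range(N):
            for j in range(N):
                m[i][j] = min(m[i][j], m[i][k] + m[k][j])
    return m

def check_consistent(m):
    # a strictly negative diagonal entry witnesses a negative cycle,
    # i.e. an unsatisfiable system; otherwise reset the diagonal to 0
    for i in range(len(m)):
        if m[i][i] < 0:
            return False
        m[i][i] = 0
    return True
```

For instance, the two differences $x_0 - x_1 \leq 3$ and $x_1 - x_0 \leq -5$ form a negative cycle, which surfaces as a negative diagonal entry after closing.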
Closure is not enough to provide a canonical form for DBMs representing octagons. Miné defined the notion of strong closure in [17,18] to do so: a DBM $\mathbf{m}$ is strongly closed iff it is closed and $\forall i, j.\ m_{i,j} \leq (m_{i,\bar{\imath}} + m_{\bar{\jmath},j})/2$. The strong closure of a DBM $\mathbf{m}$ can be computed by Str($\mathbf{m}$), the code for which is given in Figure 4. The algorithm propagates the constraint between $x'_i$ and $x'_j$ implied by the two unary constraints encoded by $m_{i,\bar{\imath}}$ and $m_{\bar{\jmath},j}$. Note that this constraint propagation is not guaranteed to occur with a shortest path algorithm, since there is not necessarily a path between the edges corresponding to $m_{i,\bar{\imath}}$ and $m_{\bar{\jmath},j}$. Figure 2 shows such a situation: the two graphs represent the same octagon, but a shortest path algorithm will not propagate constraints on the left graph; hence strengthening is needed to bring the two graphs to the same normal form. Strong closure yields a canonical representation: there is a unique strongly closed DBM for any (non-empty) octagon [18]. Thus any semantically equivalent octagonal constraint systems are represented by the same strongly closed DBM. Strengthening is the act of computing strong closure. Example 5 The lower right DBM in Figure 1 gives the strong closure of the upper right DBM. Strengthening is performed after the shortest path algorithm.
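Strengthening can be sketched as a single quadratic pass (our own Python rendering of the Str update; the in-place single pass is valid because the key entries $m_{i,\bar{\imath}}$ are fixpoints of the update):

```python
def strengthen(m):
    # m[i][j] <- min(m[i][j], (m[i][bar i] + m[bar j][j]) / 2):
    # combine the unary bounds on x'_i and x'_j to sharpen m[i][j]
    N = len(m)
    for i in range(N):
        for j in range(N):
            m[i][j] = min(m[i][j], (m[i][i ^ 1] + m[j ^ 1][j]) / 2)
    return m
```

For example, from $x_0 \leq 1$ and $x_1 \geq 1$ strengthening derives the binary constraint $x_0 - x_1 \leq 0$, which no shortest path through the graph yields.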
For octagonal constraints over integers, the strong closure property may result in non-integer values due to the division by two. The definition of strong closure for integer octagonal constraints thus needs to be refined. If $x_i$ is integral then $x_i \leq c$ tightens to $x_i \leq \lfloor c \rfloor$. Since $x_i \leq c$ translates to the difference $x'_{2i} - x'_{2i+1} \leq 2c$, tightening the unary constraint is achieved by rounding the entry down to the nearest even integer, that is, replacing $m_{2i,2i+1}$ with $2\lfloor m_{2i,2i+1}/2 \rfloor$. Definition 3.6 (Tight closure) A DBM $\mathbf{m}$ is tightly closed iff:
- $\mathbf{m}$ is strongly closed;
- $\forall i.\ m_{i,\bar{\imath}}$ is even.
For the integer case, a tightening step is required before strengthening. Tightening a closed DBM results in a weaker form of closure, called weak closure. Strong closure can be recovered from weak closure by strengthening [1]. Note, however, that we introduce this property only for completeness, because our formalisation and proofs do not make use of this notion. Figure 3 gives a high-level overview of closure calculation. First a closure algorithm is applied to a DBM. Next, consistency is checked by observing that the diagonal has non-negative entries, indicating that the octagon is satisfiable. If satisfiable, then the DBM is strengthened, resulting in a strongly closed DBM. Note that consistency does not need to be checked again after strengthening. The dashed lines in the figure show the alternative path taken for integer problems: to ensure that the DBM entries are integral, a tightening step is applied, which is then followed by an integer consistency check and strengthening. Figure 4 shows how this architecture can be instantiated with algorithms for non-incremental strong closure. A Floyd-Warshall all-pairs shortest path algorithm [10,30] can be applied to a DBM to compute closure, which is cubic in $n$. The check for consistency involves a pass over the matrix diagonal to check for a strictly negative entry, as illustrated in the figure.
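For the integer case, the tightening pass and the integer consistency check can be sketched as follows (a Python rendering under our naming; the sufficiency of checking only $m_{i,\bar{\imath}} + m_{\bar{\imath},i}$ is the result cited from [2]):

```python
import math

INF = float('inf')

def tighten(m):
    # round each key entry m[i][bar i] down to the nearest even integer,
    # since it encodes 2c for a unary bound x <= c with x integral
    for i in range(len(m)):
        v = m[i][i ^ 1]
        if v != INF:
            m[i][i ^ 1] = 2 * math.floor(v / 2)
    return m

def check_z_consistent(m):
    # after tightening a closed DBM it suffices to check that
    # m[i][bar i] + m[bar i][i] is non-negative for every i
    for i in range(len(m)):
        a, b = m[i][i ^ 1], m[i ^ 1][i]
        if a != INF and b != INF and a + b < 0:
            return False
    return True
```

For instance, the bounds $x_0 \leq 1/2$ and $x_0 \geq 1$ are rationally satisfiable once relaxed, but tightening exposes their integer infeasibility.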
(Note that CheckConsistent resets a strictly positive diagonal entry to zero as in [2,18], but the incremental algorithms presented in this paper never relax a zero diagonal entry to a strictly positive value. Hence the reset is actually redundant for the incremental algorithms that follow.) The consistency check is linear in $n$. Strong closure can additionally be obtained by following closure with a single call to Str, the code for which is also listed in the figure; this is quadratic in $n$.

High-level Overview
1: function Close(m)
2: for $k \in \{0, \ldots, 2n-1\}$ do
3:   for $i \in \{0, \ldots, 2n-1\}$ do
4:     for $j \in \{0, \ldots, 2n-1\}$ do
5:       $m_{i,j} \leftarrow \min(m_{i,j}, m_{i,k} + m_{k,j})$
6:     end for
7:   end for
8: end for
9: return $\mathbf{m}$
10: end function

1: function Str(m)
2: for $i \in \{0, \ldots, 2n-1\}$ do
3:   for $j \in \{0, \ldots, 2n-1\}$ do
4:     $m_{i,j} \leftarrow \min(m_{i,j}, (m_{i,\bar{\imath}} + m_{\bar{\jmath},j})/2)$
5:   end for
6: end for
7: return $\mathbf{m}$
8: end function

1: function CheckConsistent(m)
2: for $i \in \{0, \ldots, 2n-1\}$ do
3:   if $m_{i,i} < 0$ then
4:     return false
5:   else
6:     $m_{i,i} \leftarrow 0$
7:   end if
8: end for
9: return true
10: end function

We are interested in the specific use case of adding a new octagonal constraint to an existing octagon. Miné designed an incremental algorithm for this very task, which can be refactored into computing closure and then separately strengthening, as depicted in Figure 3. His incremental algorithm, and a refinement, are discussed in Section 4.1. Section 4.2 presents our new incremental algorithm with improved performance. Miné's algorithm adds a new constraint $x'_a - x'_b \leq d$ to a closed DBM. The algorithm relies on the observation that updating $m_{a,b}$ and $m_{\bar{b},\bar{a}}$ will only (initially) mutate the rows and columns for the variables $x'_a$, $x'_{\bar{a}}$, $x'_b$ and $x'_{\bar{b}}$. Since $\mathbf{m}$ was closed, despite the updates, it still follows that $m_{i,j} \leq m_{i,k} + m_{k,j}$ if $\{i,j,k\} \cap \{a,\bar{a},b,\bar{b}\} = \emptyset$. To restore closure it only remains to enforce $m_{i,j} \leq m_{i,k} + m_{k,j}$ for $\{i,j,k\} \cap \{a,\bar{a},b,\bar{b}\} \neq \emptyset$. The incremental algorithm thus applies Floyd-Warshall closure but only updates an entry $m_{i,j}$ when $\{i,j,k\} \cap \{a,\bar{a},b,\bar{b}\} \neq \emptyset$ (lines 7 and 8).
Note that the check $\{i,j,k\} \cap \{a,\bar{a},b,\bar{b}\} \neq \emptyset$ can be decomposed into three separate checks: $i \in \{a,\bar{a},b,\bar{b}\}$, $j \in \{a,\bar{a},b,\bar{b}\}$ or $k \in \{a,\bar{a},b,\bar{b}\}$. Then $k \in \{a,\bar{a},b,\bar{b}\}$ can be hoisted outside the two inner loops, and likewise $i \in \{a,\bar{a},b,\bar{b}\}$ can be hoisted outside the inner loop:

1: function IncrementalClose(m, a, b, d)
2: $m_{a,b} \leftarrow \min(m_{a,b}, d)$
3: $m_{\bar{b},\bar{a}} \leftarrow \min(m_{\bar{b},\bar{a}}, d)$
4: for $k \in \{0, \ldots, 2n-1\}$ do
5:   for $i \in \{0, \ldots, 2n-1\}$ do
6:     for $j \in \{0, \ldots, 2n-1\}$ do
7:       if $\{i,j,k\} \cap \{a,\bar{a},b,\bar{b}\} \neq \emptyset$ then
8:         $m_{i,j} \leftarrow \min(m_{i,j}, m_{i,k} + m_{k,j})$
9:       end if
10:     end for
11:   end for
12: end for
13: return $\mathbf{m}$
14: end function

Furthermore, $i \in \{a,\bar{a},b,\bar{b}\}$ can be reduced to four constant-time equality checks $(i = a) \lor (i = \bar{a}) \lor (i = b) \lor (i = \bar{b})$. These strength reductions mitigate the overhead of the check $\{i,j,k\} \cap \{a,\bar{a},b,\bar{b}\} \neq \emptyset$ itself.
This guard reduces the number of min operations from $8n^3$ to $8n^3 - (2n-4)^3 = 48n^2 - 96n + 64$ (notwithstanding those in Str), but at the overhead of $8n^3$ checks. Thus the incremental algorithm is quadratic in the number of min operations but cubic in the number of checks (even with code hoisting).
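Miné's incremental scheme can be sketched in Python as follows (a rendering with the hoisted membership checks; the function name is ours):

```python
INF = float('inf')

def inc_close_mine(m, a, b, d):
    # add x'_a - x'_b <= d (and its coherent twin) to a closed DBM m,
    # then rerun Floyd-Warshall restricted to triples (i, j, k) that
    # meet the index set {a, bar a, b, bar b}
    N = len(m)
    touched = {a, a ^ 1, b, b ^ 1}
    m[a][b] = min(m[a][b], d)
    m[b ^ 1][a ^ 1] = min(m[b ^ 1][a ^ 1], d)
    for k in range(N):
        k_hit = k in touched                 # hoisted out of both inner loops
        for i in range(N):
            ik_hit = k_hit or i in touched   # hoisted out of the inner loop
            for j in range(N):
                if ik_hit or j in touched:
                    m[i][j] = min(m[i][j], m[i][k] + m[k][j])
    return m
```

Even with the hoisting, every triple still incurs a check, which is the cubic overhead discussed above.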

Improved Incremental Closure
To give the intuition behind our new incremental closure algorithm, consider adding the constraint $x'_a - x'_b \leq d$, and hence its coherent twin $x'_{\bar{b}} - x'_{\bar{a}} \leq d$, to a closed DBM. A shortest path from $x'_i$ to $x'_j$ can now additionally pass through one or both of the new edges. Note that we cannot have the two paths from $x'_i$ to $x'_a$ and from $x'_b$ to $x'_j$ both shortened: at most one of them can change. The same holds for the two paths from $x'_i$ to $x'_{\bar{b}}$ and from $x'_{\bar{a}}$ to $x'_j$. These extra paths (Figure 6 depicts the four ways to reduce the distance between $x'_i$ and $x'_j$) lead to the following strategy for updating $m'_{i,j}$:
$m'_{i,j} = \min(m_{i,j},\; m_{i,a} + d + m_{b,j},\; m_{i,\bar{b}} + d + m_{\bar{a},j},\; m_{i,a} + d + m_{b,\bar{b}} + d + m_{\bar{a},j},\; m_{i,\bar{b}} + d + m_{\bar{a},a} + d + m_{b,j})$
This leads to the incremental closure algorithm listed at the top of Figure 7. The quintic min can be realised as four binary min operations, hence the total number of binary min operations required for IncClose is $16n^2$, which is quadratic in $n$.
The listing at the bottom of the figure shows how commonality can be factored out so that each iteration of the inner loop requires a single ternary min to be computed:

1: function IncClose(m, a, b, d)
2: for $i \in \{0, \ldots, 2n-1\}$ do
3:   for $j \in \{0, \ldots, 2n-1\}$ do
4:     $m'_{i,j} \leftarrow \min(m_{i,j}, m_{i,a} + d + m_{b,j}, m_{i,\bar{b}} + d + m_{\bar{a},j}, m_{i,a} + d + m_{b,\bar{b}} + d + m_{\bar{a},j}, m_{i,\bar{b}} + d + m_{\bar{a},a} + d + m_{b,j})$
5:   end for
6: end for
7: if CheckConsistent($\mathbf{m}'$) then
8:   return $\mathbf{m}'$
9: else
10:   return false
11: end if
12: end function

1: function IncCloseHoist(m, a, b, d)
2: for $i \in \{0, \ldots, 2n-1\}$ do
3:   $p \leftarrow \min(m_{i,a} + d, m_{i,\bar{b}} + d + m_{\bar{a},a} + d)$
4:   $q \leftarrow \min(m_{i,\bar{b}} + d, m_{i,a} + d + m_{b,\bar{b}} + d)$
5:   for $j \in \{0, \ldots, 2n-1\}$ do
6:     $m'_{i,j} \leftarrow \min(m_{i,j}, p + m_{b,j}, q + m_{\bar{a},j})$
7:   end for
8:   if $m'_{i,i} < 0$ then
9:     return false
10:   end if
11: end for
12: return $\mathbf{m}'$
13: end function

Factorisation reduces the number of binary min operations to $2n(4n + 2) = 8n^2 + 4n$ in IncCloseHoist. Moreover, this form of code hoisting is also applicable to the algorithms that follow (though this optimisation is not elaborated in the sequel). Furthermore, like IncClose, IncCloseHoist is not sensitive to the specific traversal order of the DBM, hence has potential for parallelisation. In addition, both IncClose and IncCloseHoist do not incur any checks. Example 6 To illustrate how the incremental closure algorithm of [8], from which the above is derived, omits a form of propagation, consider adding $x_0 - x_1 \leq 0$, or equivalently $x'_0 - x'_2 \leq 0$, to the system on the left, whose DBM $\mathbf{m}$ is given on the right. The system is illustrated spatially on the left hand side of Figure 8; the right hand side of the same figure shows the effect of adding the constraint $x_0 - x_1 \leq 0$. Adding $x_0 - x_1 \leq 0$ using the incremental closure algorithm from [8] gives the DBM $\mathbf{m}^1$; IncClose gives the DBM $\mathbf{m}^2$. The DBM $\mathbf{m}^1$ represents the constraint $x_0 \leq \frac{7}{2}$ but $\mathbf{m}^2$ encodes the tighter constraint $x_0 \leq 0$. The reason for the discrepancy between the entries $m^1_{0,1}$ and $m^2_{0,1}$ is that $m^1_{0,1}$ is calculated using $m_{2,1}$, but this entry will itself reduce to 0; $m^1_{0,1}$ must take into account the change that occurs to $m_{2,1}$.
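The update strategy can be sketched in Python as follows, with the quintic min realised as a five-argument min (names are ours; this is a sketch of the scheme, not the paper's implementation):

```python
def inc_close(m, a, b, d):
    # quadratic incremental closure: assumes m is closed and coherent;
    # adds x'_a - x'_b <= d together with its coherent twin
    N = len(m)
    A, B = a ^ 1, b ^ 1          # bar a, bar b
    out = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            out[i][j] = min(
                m[i][j],
                m[i][a] + d + m[b][j],                  # through the new edge
                m[i][B] + d + m[A][j],                  # through its twin
                m[i][a] + d + m[b][B] + d + m[A][j],    # through both
                m[i][B] + d + m[A][a] + d + m[b][j])    # both, other order
    if any(out[i][i] < 0 for i in range(N)):
        return None              # unsatisfiable
    return out
```

Because `m[a][a] == 0`, the new edge itself is installed by the same min, so no separate update of $m_{a,b}$ and $m_{\bar{b},\bar{a}}$ is needed.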
More generally, when calculating $m'_{i,j}$, the min expression of [8] overlooks how the added constraint can shorten the paths that feed into it. The new incremental algorithm is justified by Theorem 4.1 which, in turn, is supported by Lemma 4.1. The proof proceeds by case analysis: there are 5 cases for $m'_{i,k}$ and 5 for $m'_{k,j}$, giving 25 in total, each of which is discharged because $\mathbf{m}$ is closed and by Lemma 4.1. A representative case is $m'_{i,k} = m_{i,\bar{b}} + d + m_{\bar{a},a} + d + m_{b,k}$ and $m'_{k,j} = m_{k,a} + d + m_{b,\bar{b}} + d + m_{\bar{a},j}$. Note that unsatisfiability can be detected without applying any min operations at all, although we omit this from our algorithms; this is justified by a corollary of Lemma 4.1.

Properties of Incremental Closure
By design IncClose recovers closure, but it is also natural for the algorithm to preserve and enforce other properties too. These properties are not just interesting in themselves; they provide scaffolding for the results that follow.

Idempotence
An important property of IncClose is idempotence: it formalises the idea that an octagon should not change shape if it is repeatedly intersected with the same inequality. If idempotence did not hold then there would exist $\mathbf{m}^1 = \mathrm{IncClose}(\mathbf{m}, o)$ and $\mathbf{m}^2 = \mathrm{IncClose}(\mathbf{m}^1, o)$ for which $\mathbf{m}^1 \neq \mathbf{m}^2$. This would suggest that IncClose did not properly tighten $\mathbf{m}$ using the inequality $o$, but overlooked some propagation, which is the form of suboptimal behaviour we aim to avoid.

Monotonicity
Another key property of our incremental closure is monotonicity under the pointwise ordering, that is, $\mathbf{m} \leq \mathbf{m}'$ iff $m_{i,j} \leq m'_{i,j}$ for all $0 \leq i, j < 2n$: if $\mathbf{m} \leq \mathbf{m}'$ then $\mathrm{IncClose}(\mathbf{m}, o) \leq \mathrm{IncClose}(\mathbf{m}', o)$.

Coherence
The following proposition shows that IncClose preserves coherence. The force of this result is that it is not necessary to enforce coherence as a post-processing step, or even on-the-fly as incremental closure is applied.

Proof
Similar to the previous case.

Incremental Strong Closure
We now turn our attention from recovering closure to recovering strong closure, which generates a canonical representation for any (non-empty) octagon.

Classical Strong Closure
The classical strong closure algorithm of Miné repeatedly invokes Str within the main Floyd-Warshall loop, but it was later shown by Bagnara et al. [2] that this is equivalent to applying Str just once after the main loop. The following theorem [2, Theorem 3] justifies this tactic, though the proofs we present have been revisited and streamlined: Theorem 5.1 Suppose $\mathbf{m}$ is a closed, coherent DBM and $\mathbf{m}' = \mathrm{Str}(\mathbf{m})$. Then $\mathbf{m}'$ is a strongly closed DBM.
Proof Observe that $m'_{i,\bar{\imath}} = \min(m_{i,\bar{\imath}}, (m_{i,\bar{\imath}} + m_{i,\bar{\imath}})/2) = m_{i,\bar{\imath}}$ and likewise $m'_{\bar{\jmath},j} = m_{\bar{\jmath},j}$. Because $\mathbf{m}$ is closed, $0 = m_{i,i} \leq m_{i,\bar{\imath}} + m_{\bar{\imath},i}$. To show $m'_{i,j} \leq m'_{i,k} + m'_{k,j}$ we proceed by case analysis on whether $m'_{i,k} = m_{i,k}$ and whether $m'_{k,j} = m_{k,j}$; each case follows because $\mathbf{m}$ is closed and coherent.

Properties of Strong Closure
We establish a number of properties of Str which will be useful when we prove the in-place versions of our incremental strong (and tight) closure algorithms. Proof Let $\mathbf{m}'' = \mathrm{Str}(\mathbf{m}')$. Observe $m'_{i,\bar{\imath}} = \min(m_{i,\bar{\imath}}, (m_{i,\bar{\imath}} + m_{i,\bar{\imath}})/2) = m_{i,\bar{\imath}}$ and likewise $m'_{\bar{\jmath},j} = m_{\bar{\jmath},j}$.

Incremental Strong Closure
Theorem 5.1 states that a strongly closed DBM can be obtained by calculating closure and then strengthening. This is realised by calling IncClose, from Figure 7, followed by a call to Str. Although this is conventional wisdom, it incurs two passes over the DBM: one by IncClose and the other by Str. The two passes can be unified by observing that strengthening $\mathbf{m}'$ critically depends on the entries $m'_{i,\bar{\imath}}$ where $i \in \{0, \ldots, 2n-1\}$. Furthermore, these entries, henceforth called key entries, are themselves not changed by strengthening, because $\min(m'_{i,\bar{\imath}}, (m'_{i,\bar{\imath}} + m'_{i,\bar{\imath}})/2) = m'_{i,\bar{\imath}}$. This suggests precomputing the key entries up front and then using them in the main loop of IncClose to strengthen on-the-fly. This insight leads to the algorithm listed in Figure 9. Line 3 generates the key entries, which are closed by construction and unchanged by strengthening. Once the key entries are computed, the algorithm iterates over the rest of the DBM, closing and simultaneously strengthening each entry $m_{i,j}$ at line 8. The total number of binary min operations required for IncStrongClose is $8n + 10n(2n-1) = 20n^2 - 2n$, which improves on following IncClose with Str, which requires $16n^2 + 4n^2 = 20n^2$. Furthermore, since $\mathbf{m}$ is coherent, $m_{i,a} + d + m_{b,\bar{\imath}} = m_{\bar{a},\bar{\imath}} + d + m_{i,\bar{b}}$, so that the quintic min on line 4 becomes quartic, reducing the min count for IncStrongClose to $20n^2 - 4n$. Furthermore, the key entries $m'_{i,\bar{\imath}}$ can be cached in a linear array $a$ of dimension $2n$, and the expression $(m'_{i,\bar{\imath}} + m'_{\bar{\jmath},j})/2$ in line 8 can be replaced with $(a_i + a_{\bar{\jmath}})/2$, thereby avoiding two lookups in a two-dimensional matrix. We omit the algorithm using array caching for space reasons, as it is a simple change to Figure 9.
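The one-pass scheme can be sketched in Python as follows (key entries precomputed, then each entry closed and strengthened in the same pass; names are ours):

```python
def inc_strong_close(m, a, b, d):
    # single pass: precompute the closed key entries via the quintic
    # min, then close and strengthen every entry simultaneously
    N = len(m)
    A, B = a ^ 1, b ^ 1          # bar a, bar b
    def five(i, j):
        return min(m[i][j],
                   m[i][a] + d + m[b][j],
                   m[i][B] + d + m[A][j],
                   m[i][a] + d + m[b][B] + d + m[A][j],
                   m[i][B] + d + m[A][a] + d + m[b][j])
    key = [five(i, i ^ 1) for i in range(N)]   # closed key entries
    out = [[min(five(i, j), (key[i] + key[j ^ 1]) / 2) for j in range(N)]
           for i in range(N)]
    if any(out[i][i] < 0 for i in range(N)):
        return None                            # unsatisfiable
    return out
```

Note that the key entries are fixpoints of the strengthening step, so computing them first and reusing them inside the main loop is sound.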
The following theorem justifies the correctness of the new incremental strong closure algorithm: Proof We prove that $\forall i, j.\ m'_{i,j} = m^{*}_{i,j}$. Pick some $i, j$.
-Suppose j "ī. Then m˚i ,ī " minpm : The strong closure algorithms previously presented have to be modified to support integer octagonal constraints. If x i is integral then x i ď c can be tightened to x i ď tcu. Since x i ď c is represented as the difference x 1 2i´x 1 2i`1 ď 2c, tightening is achieved by sharpening the difference to x 1 2i´x 1 2i`1 ď 2tc{2u, so that the constant 2tc{2u is even. This is achieved by applying Tightenpmq, the code for which is given in Figure 10. As suggested by Figure 3, closure does not need to be reapplied after tightening to check for consistency; it is sufficient to check that m i,ī`mī,i ă 0 [2], which is the role of CheckZConsistentpmq. One subtlety that is worthy of note is that after running tightenpmq on a closed DBM m, the resulting DBM will not necessarily be closed but will instead satisfy a weaker property, namely weak closure. Strong closure can be recovered from weak closure, however, by strengthening [2]. However, we do not use this approach in the sequel: instead we use tightening and strengthening together to avoid having to work with weakly closed DBMs . First we prove that tightening followed by strengthening will return a closed DBM when the resulting system is satisfiable: Lemma 6.1 Suppose m is a closed, coherent integer DBM. Let m 1 be defined as follows: Then m 1 is either closed or it is not consistent.
Proof (sketch) There are four cases, depending on whether $m'_{i,k} = m_{i,k}$ and whether $m'_{k,j} = m_{k,j}$. Suppose $m'_{i,k} \neq m_{i,k}$ and $m'_{k,j} = m_{k,j}$. If $m_{\bar{k},k}$ is even, the result follows because $\mathbf{m}$ is closed and coherent. Otherwise, if $m_{\bar{k},k} + 2m_{k,j} > m_{\bar{\jmath},j}$ then, since the entries are integral, $(m_{\bar{k},k} - 1) + 2m_{k,j} \geq m_{\bar{\jmath},j}$. The case $m'_{i,k} = m_{i,k}$ and $m'_{k,j} \neq m_{k,j}$ is symmetric to the previous case. Finally, suppose $m'_{i,k} \neq m_{i,k}$ and $m'_{k,j} \neq m_{k,j}$; then the result follows since $\mathbf{m}$ is closed and $\mathbf{m}'$ is consistent. Using the proof that tightening and strengthening gives a closed DBM, we can now show that the resulting DBM is also tightly closed: Theorem 6.1 Suppose $\mathbf{m}$ is a closed, coherent integer DBM and let $\mathbf{m}' = \mathrm{Str}(\mathrm{Tighten}(\mathbf{m}))$. Then $\mathbf{m}'$ is either tightly closed or it is not consistent.
Proof Suppose $\mathbf{m}'$ is consistent. By Lemma 6.1 we know that $\mathbf{m}'$ is closed.
We will now show that $\mathbf{m}'$ is strongly closed, that is, $\forall i, j.\ m'_{i,j} \leq (m'_{i,\bar{\imath}} + m'_{\bar{\jmath},j})/2$. Suppose $m'_{i,\bar{\imath}} = m_{i,\bar{\imath}}$ and $m'_{\bar{\jmath},j} = m_{\bar{\jmath},j}$; the remaining cases are analogous. Thus, if $\mathbf{m}'$ is consistent, it is strongly closed. It remains to show that $\forall i.\ m'_{i,\bar{\imath}}$ is even, which follows since tightening rounds each key entry down to an even constant and strengthening leaves the key entries unchanged. Notice that the proof of tight closure does not use the concept of weak closure as advocated in [2]. The above proof goes directly from a closed DBM to a tightly closed DBM, relying only on simple algebra; it is not based on showing that tightening gives a weakly closed (intermediate) DBM which can subsequently be strengthened to give a tightly closed DBM (see Figure 3).
Tight closure requires the key entries, and only these, to be tightened. This suggests tightening the key entries on-the-fly, immediately after they have been computed by closure. This leads to the algorithm given in Figure 11, which coincides with IncStrongClose(m) except in one crucial detail: line 4 tightens the key entries as they are computed. Moreover, the key entries are strengthened, along with the other entries of the DBM, in the main loop in tandem with the closure calculation, thereby ensuring strong closure. Thus tightening can be accommodated, almost effortlessly, within incremental strong closure.
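The tightening-on-the-fly variant can be sketched by adjusting only the key-entry computation of the previous sketch (assumes integer inputs; names are ours):

```python
import math

INF = float('inf')

def inc_tight_close(m, a, b, d):
    # like incremental strong closure, but each key entry is rounded
    # down to the nearest even integer as soon as it is computed
    N = len(m)
    A, B = a ^ 1, b ^ 1
    def five(i, j):
        return min(m[i][j],
                   m[i][a] + d + m[b][j],
                   m[i][B] + d + m[A][j],
                   m[i][a] + d + m[b][B] + d + m[A][j],
                   m[i][B] + d + m[A][a] + d + m[b][j])
    key = []
    for i in range(N):
        v = five(i, i ^ 1)
        key.append(v if v == INF else 2 * math.floor(v / 2))
    for i in range(0, N, 2):     # integer consistency on tightened keys
        if key[i] + key[i ^ 1] < 0:
            return None
    out = [[min(five(i, j), (key[i] + key[j ^ 1]) / 2) for j in range(N)]
           for i in range(N)]
    if any(out[i][i] < 0 for i in range(N)):
        return None
    return out
```

The strengthening in the main loop then propagates the tightened unary bounds into the binary entries, as in the rational case.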
Proof We prove that $\forall i,j.\; m_{i,j} = m'_{i,j}$. Pick some $i$, $j$.

Properties of Tight Closure
We prove a number of properties about Tighten which will be useful when we justify the in-place versions of our incremental tight closure algorithm.

Proof Let $m'' = \text{Tighten}(m')$.

Coherence
Proposition 6.4 Let $m$ be a coherent DBM and $m' = \text{Tighten}(m)$. Then $m'$ is coherent.
Proof
- Suppose $j = \bar{i}$. Then $m'_{i,\bar{i}} = 2\lfloor m_{i,\bar{i}}/2 \rfloor$ … ∎

In-place Update

Closure algorithms are traditionally formulated in a way that is simple to reason about mathematically (see [17, Def 3.3.2]), typically using a series of intermediate DBMs, and then the algorithm itself is presented using in-place update (see [17, Def 3.3.3]). The question of equivalence between the mathematical formulation and the practical in-place implementation is arguably not given the space it deserves. Miné, in his magnum opus [17], merely states that equivalence can be shown by adapting an argument for the Floyd-Warshall algorithm [9, Section 26.2]. However, that in-place argument is itself informal. Later editions of the book do not help, leaving the proof as an exercise for the reader. The question of equivalence is more subtle again for incremental closure. Correctness is therefore argued for in-place incremental closure in Section 7.1, in-place incremental strong closure in Section 7.2 and in-place incremental tight closure in Section 7.3, one correctness argument extending another.
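For concreteness, the in-place formulation can be sketched as a plain Floyd-Warshall-style closure over the $2n \times 2n$ matrix (our code; the mathematical formulation would instead build a fresh intermediate DBM for each $k$):

```python
INF = float("inf")

def close_in_place(m):
    # In-place closure: each relaxation overwrites m[i][j] directly,
    # so later iterations read already-updated entries.
    n2 = len(m)
    for k in range(n2):
        for i in range(n2):
            for j in range(n2):
                via = m[i][k] + m[k][j]
                if via < m[i][j]:
                    m[i][j] = via
    # A negative diagonal entry signals a negative cycle, hence
    # an inconsistent constraint system.
    for i in range(n2):
        if m[i][i] < 0:
            return False
    return m
```

The subtlety the text alludes to is precisely that the relaxation of $m[i][j]$ may read entries that an earlier iteration has already mutated; equivalence with the intermediate-DBM formulation is what has to be argued.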
7.1 In-place Incremental Closure

Figure 12 gives an in-place version of the IncClose algorithm listed in Figure 7. At first glance one might expect that mutating the entries $m_{i,a}$, $m_{b,\bar{i}}$, $m_{i,b}$, $m_{\bar{a},\bar{i}}$, $m_{\bar{a},a}$ or $m_{b,b}$ could potentially perturb those entries of $m$ which are updated later. The following theorem asserts that this is not so. Correctness follows from Corollary 7.1, which is stated below:

Corollary 7.1 Suppose $m' = \text{IncClose}(m, o)$ where $o = (\pm x_a \pm x_b \leq d)$ and $m'$ is consistent. Then the following hold: …

Proof By Proposition 4.1 it follows $m' = \text{IncClose}(m', o)$. The result then follows from Theorem 4.1.
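As a sketch of the quadratic update that such an in-place incremental closure performs, assume (our encoding and names, not necessarily those of Figure 12) that the new constraint contributes an arc $p \to q$ of weight $d$ together with its coherent mirror $\overline{q} \to \overline{p}$ of the same weight; every entry is then relaxed along the paths that pass through one, the other, or both of the new arcs:

```python
INF = float("inf")

def bar(i):
    # Index of the negated occurrence of the variable behind index i.
    return i ^ 1

def inplace_inc_close(m, p, q, d):
    # Relax each entry with the candidate paths routed through the
    # new arc p -> q (weight d) and its coherent mirror
    # bar(q) -> bar(p) (also weight d), updating m in place.
    n2 = len(m)
    for i in range(n2):
        for j in range(n2):
            m[i][j] = min(m[i][j],
                          m[i][p] + d + m[q][j],
                          m[i][bar(q)] + d + m[bar(p)][j],
                          m[i][p] + d + m[q][bar(q)] + d + m[bar(p)][j])
        if m[i][i] < 0:
            return False          # rapid inconsistency detection
    return m
```

Only two nested loops are needed, which is the source of the quadratic bound; the point of the correctness argument is that reading mutated entries such as $m[i][p]$ does not change the result.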
The following theorem asserts that in-place update does not compromise correctness. It is telling that the correctness argument does not refer to the entries $m_{i,a}$, $m_{b,\bar{i}}$, $m_{i,b}$, $m_{\bar{a},\bar{i}}$, $m_{\bar{a},a}$ or $m_{b,b}$ at all. This is because the corollary on which the theorem is founded follows from the high-level property of idempotence. Notice too that the theorem is parameterised by the traversal order over $m$ and is therefore independent of it.

In-place Incremental Strong Closure
The in-place version of the incremental strong closure algorithm is presented in Figure 13. The following lemma shows that running incremental closure followed by strengthening refines the entries in the DBM to their tightest possible value with respect to the new octagonal constraint.

Proof
- … $\leq (\ldots + m_{\bar{j},j})/2$. Symmetric to the previous case.
- … Analogous to the previous case.
- Since $m^2_{\bar{a},a} = m'_{\bar{a},a}$ and because $m'$ is consistent, by Corollary 7.1 it follows that … and $0 \leq d + m'_{b,a}$. Therefore … Symmetric to the previous case.
- … Analogous to the previous case.

It therefore follows that $m^3 = m^2$. Now suppose $m'$ is not consistent. Hence $m^2$ is not consistent, thus $m^3$ is not consistent. ∎
Now we move on to the theorem showing the correctness of InplaceIncStrongClose. We show that the in-place version of the algorithm produces the same DBM as the non-in-place version. A bijective map $\rho$ is used in the proof to process key entries before non-key entries: the condition $\forall 0 \leq i < 2n.\; \rho(i,\bar{i}) < 2n$ ensures this property. Note that this is the only caveat on the order produced by the map: the order in which the key entries themselves are processed is irrelevant, and similarly for the non-key entries.

Theorem Suppose $\rho$ is a bijective map with $\forall 0 \leq i < 2n.\; \rho(i,\bar{i}) < 2n$, $m^0 = m$ and … Then either $m'$ is consistent and …

Proof Suppose $m'$ is consistent. Let $k = 0$. It vacuously follows that $\forall 0 \leq \ell < k.\; m^k_{\rho^{-1}(\ell)} = m^2_{\rho^{-1}(\ell)}$. Moreover $\forall k \leq \ell < 4n^2.\; m^k_{\rho^{-1}(\ell)} = m_{\rho^{-1}(\ell)}$ since $m^0 = m$. Suppose $0 < k$ and $\rho(i,j) = k$. Now suppose $j = \bar{i}$. Then … Hence $\forall 0 \leq \ell < k.\; m^k_{\rho^{-1}(\ell)} = m^2_{\rho^{-1}(\ell)}$. Moreover $\forall k+1 \leq \ell < 4n^2.\; m^{k+1}_{\rho^{-1}(\ell)} = m_{\rho^{-1}(\ell)}$ follows from the inductive hypothesis and the definition of $m^{k+1}_{i,j}$. Now suppose that $j \neq \bar{i}$. Then $2n \leq \rho(i,j)$ and consider … since $\rho(i,\bar{i}) < 2n \leq \rho(i,j) = k$ and $\rho(\bar{j},j) < \rho(i,j) = k$. Hence it follows that $\forall 0 \leq \ell < k+1.\; m^{k+1}_{\rho^{-1}(\ell)} = m^2_{\rho^{-1}(\ell)}$. Note $\forall k+1 \leq \ell < 4n^2.\; m^{k+1}_{\rho^{-1}(\ell)} = m_{\rho^{-1}(\ell)}$ follows from the inductive hypothesis and the definition of $m^{k+1}_{i,j}$. Now suppose $m'$ is inconsistent, hence $m'_{i,i} < 0$ for some $i$. Put $k = \rho(i,i)$. But $m^k \leq m$ and by Proposition 4.2 $m^{4n^2}_{i,i} < 0$ as required. ∎
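One admissible choice of $\rho$, phrased as an enumeration of index pairs (our construction; the result permits any order that lists the key pairs first):

```python
def rho_order(n2):
    # Enumerate the index pairs of a 2n x 2n DBM (n2 = 2n) so that
    # every key pair (i, bar(i)) receives an index below 2n.  The
    # internal order of key pairs, and of non-key pairs, is left
    # arbitrary -- here, simple row order.
    keys = [(i, i ^ 1) for i in range(n2)]
    others = [(i, j) for i in range(n2)
                     for j in range(n2) if j != i ^ 1]
    return keys + others
```

The position of a pair in the returned list is its $\rho$-index, so $\rho(i,\bar{i}) < 2n$ holds by construction and the map is bijective.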

In-place Incremental Tight Closure
The in-place version of the incremental tight closure algorithm is presented in Figure 14; the only difference with incremental strong closure is that for the key entries we also run a tightening step (line 3). As in the previous section, we have a helper lemma for the main theorem, showing that incremental closure followed by tightening and strengthening refines the entries in the DBM to the tightest value with respect to the new octagonal constraint. … $\leq d$. By Proposition 4.3 $m'$ is coherent, hence $m^3$ is closed by Lemma 5.1.
By Proposition 4.1 it follows that $m^* = m^3$.

Experiments
To evaluate and compare algorithms we developed OCaml implementations of the non-incremental and incremental closure algorithms, both for strong and tight closure. With an eye towards retaining clarity of the code, we implemented and tuned in-place versions of the following strong closure algorithms:

- NIC: Close followed by CheckConsistent and Str;
- MIC: MinéIncClose followed by CheckConsistent and Str;
- MICH: MinéIncClose followed by CheckConsistent and Str, but with loop-invariant code motion [19, Section 13.2] applied to hoist constant DBM expressions to minimise reads and writes to the DBM;
- ICH: IncCloseHoist (with its own consistency check) followed by Str, with an additional check for rapidly detecting unsatisfiability using Corollary 4.1;
- ISC: IncStrongClose (which again includes a consistency check) augmented with the rapid unsatisfiability check, again with hoisting.

Key entries were calculated and looked up naively, rather than specialised as suggested in Section 5.3, so as to preserve code clarity.
In addition, the above algorithms were extended with tightening to give:

- NITC: Close, CheckConsistent, Tighten, CheckZConsistent and then Str, as outlined in the architecture diagram of Figure 3;
- MICT: MinéIncClose followed by CheckConsistent, Tighten, CheckZConsistent and then Str;

For the experiments we randomly generated a feasible octagon with a specific number of variables and constraints, added a single randomly generated constraint, and then calculated strong closure or tight closure. Note that the resulting DBM may be infeasible, short-circuiting a call to Str. The DBM entries were IEEE 754 standard precision floats, using integer rounding to simulate integer entries for the tightening experiments. The resulting DBMs were then all checked for equality against Close (using a tolerance threshold to handle the floats). We repeated the experiment for each problem instance (number of variables and number of constraints) and averaged the timings. The results of the strong closure experiments are summarised in Figure 15 and Figure 16. The labels on the horizontal axis give the number of variables n and the number of constraints m for each experiment, abbreviated to n-m under each block of 5 columns. The vertical axis gives the average time, in seconds, taken for each experiment. A log scale is used on the vertical axis so that the timings for the new incremental algorithms are discernible.
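The equality check against the reference Close results can be sketched as follows (our code; the tolerance value is illustrative, and infinite entries are required to match exactly):

```python
INF = float("inf")

def dbms_equal(m1, m2, tol=1e-9):
    # Entry-wise comparison of two DBMs with a tolerance for rounding
    # error in the IEEE 754 entries.  The exact-equality test fires
    # first, which also handles matching infinite entries (for which
    # the difference would be NaN).
    return all(a == b or abs(a - b) <= tol
               for r1, r2 in zip(m1, m2)
               for a, b in zip(r1, r2))
```

Testing `a == b` before the tolerance test is the design point: `abs(inf - inf)` is NaN, so infinities must be compared for identity rather than by difference.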
MIC is faster than NIC, and thus the additional overhead of checking the guard at line 7 of Figure 5 does not negate the saving gained in the min operations. However, the key difference between MIC and MICH is that the guard in MICH is decomposed into three separate checks to permit loop-invariant code motion. This suggests that the incremental algorithm of Miné is sensitive to how the check at line 7 of Figure 5 is realised, no doubt because it is applied $O(n^3)$ times. ICH is $O(n^2)$ and is uniformly faster than MICH. ISC is faster again, but the speedup is more modest since both ISC and ICH reside in $O(n^2)$. However, even for small n, the speedup between MICH and ISC is at least one order of magnitude. Interestingly, none of the algorithms seem to be very sensitive to m. The running time increases with m for small m, as the likelihood of writing a DBM entry increases as the DBM becomes more populated. However, once the DBM is densely populated, which happens when m is large, the running times stabilise, demonstrating that the key parameter is n rather than m.
The tight closure experiments are summarised in Figure 17 and Figure 18, mirroring the speedups reported for strong closure in Figure 15 and Figure 16 respectively. To summarise, the four figures present a consistent message: that our new incremental closure algorithms for both strong and tight closure are uniformly and significantly faster than the existing algorithms on all sizes of problem.

Concluding Discussion
The octagon domain is used in many applications due to its expressiveness and its ease of implementation, relative to other relational abstract domains. Yet the elegance of its domain operations is at odds with the subtlety of the underlying ideas, and with the reasoning needed to justify refinements that appear to be straightforward, such as tightening and in-place update. This paper has presented novel algorithms to incrementally update an octagonal constraint system. More specifically, we have developed new incremental algorithms for closure, strong closure and integer closure, and their in-place variants. Experimental results with a prototype implementation demonstrate significant speedups over existing closure algorithms. We leave as future work the application of our incremental algorithms to modelling machine arithmetic [26] in binary analysis which, incidentally, was the problem that motivated this study.