Decision Tree Learning in CEGIS-Based Termination Analysis

We present a novel decision tree-based synthesis algorithm for ranking functions to verify program termination. Our algorithm is integrated into the workflow of CounterExample Guided Inductive Synthesis (CEGIS). CEGIS is an iterative learning model in which, at each iteration, (1) a synthesizer synthesizes a candidate solution from the current examples, and (2) a validator accepts the candidate solution if it is correct, or rejects it, providing counterexamples as part of the next examples. Our main novelty is in the design of the synthesizer: building on top of a usual decision tree learning algorithm, our algorithm detects cycles in a set of example transitions and uses them for refining decision trees. We have implemented the proposed method and obtained promising experimental results on existing benchmark sets of (non-)termination verification problems that require the synthesis of piecewise-defined lexicographic affine ranking functions.


Introduction
Termination Verification by Ranking Functions and CEGIS. Termination verification is a fundamental but challenging problem in program analysis. It usually involves some well-foundedness argument. Among the available methods are those which synthesize ranking functions [16]: a ranking function assigns a natural number (or an ordinal, more generally) to each program state, in such a way that the assigned values strictly decrease along transitions. The existence of such a ranking function witnesses termination; here, well-foundedness of the set of natural numbers (or ordinals) is crucially used.
We study synthesis of ranking functions by CounterExample Guided Inductive Synthesis (CEGIS) [28]. CEGIS is an iterative learning model in which a synthesizer and a validator interact to find solutions for given constraints. At each iteration, (1) a synthesizer tries to find a candidate solution from the current examples, and (2) a validator accepts the candidate solution if it is correct, or rejects it, providing counterexamples. These counterexamples are then used as part of the next examples (Fig. 1).
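The loop just described can be sketched as follows. This is a hypothetical illustration, not our implementation: the `cegis` driver and the injected `synthesize`/`validate` callables are assumed names.

```python
# A minimal sketch of the CEGIS loop (hypothetical helpers, not the paper's code).
def cegis(synthesize, validate, max_iters=100):
    """Iterate synthesizer and validator until a candidate is accepted.

    `synthesize(examples)` returns a candidate (or None if the examples are
    unsatisfiable); `validate(candidate)` returns (True, None) on acceptance
    or (False, counterexample) on rejection.
    """
    examples = set()
    for _ in range(max_iters):
        candidate = synthesize(examples)
        if candidate is None:
            return ("no solution", examples)  # examples witness unsatisfiability
        ok, cex = validate(candidate)
        if ok:
            return ("solution", candidate)
        examples.add(cex)                     # counterexample feeds the next round
    return ("unknown", None)
```

A toy instance: a synthesizer that proposes the largest example seen so far, and a validator that accepts any candidate of at least 3, rejecting smaller ones with a counterexample one larger, converges to a solution in a few rounds.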

Fig. 1: the CEGIS architecture

CEGIS has been applied not only to program verification tasks (synthesis of inductive invariants [17,18,24,25], that of ranking functions [19], etc.) but also to constraint solving (for CHC [12,14,27,35], for pwCSP(T) [29,30], etc.). The success of CEGIS is attributed to the degree of freedom that synthesizers enjoy. In CEGIS, synthesizers receive a set of individual examples that they can use in various creative and speculative manners (such as machine learning). In contrast, in other methods such as [5-8,23,26], synthesizers receive logical constraints that are much more binding.

Segmented Synthesis in CEGIS-Based Termination Analysis
The choice of a candidate space for candidate solutions σ is important in CEGIS. A candidate space should be expressive: by limiting a candidate space, the CEGIS architecture may miss a genuine solution. At the same time, its complexity should be low: a larger candidate space tends to be more expensive for synthesizers to handle.
This tradeoff also appears in the choice of the type of examples: with an expressive example type, a small number of examples can prune a large portion of the candidate space; however, finding such expressive examples tends to be expensive.
In this paper, we use piecewise affine functions as our candidate space for ranking functions. Piecewise affine functions are functions of the form

f(x) = f_i(x)  if x ∈ L_i  (for i = 1, . . ., n),  with each f_i affine,
where {L_1, . . ., L_n} is a partition of the domain of f(x) such that each L_i is a polyhedron (i.e. a conjunction of linear inequalities). We say segmented synthesis to emphasize that our synthesis targets are piecewise affine functions with case distinction. Piecewise affine functions strike a good balance between expressiveness and complexity: the tasks of synthesizers and validators can be reduced to linear programming (LP); at the same time, case distinction allows them to model a variety of situations, especially where there are discontinuities in the function values and/or derivatives. We use transition examples as our example type (Table 1). Transition examples are pairs of program states that represent transitions; they are much cheaper to handle compared to trace examples (finite traces of executions until termination) used e.g. in [15,32]. The current work is the first to pursue segmented synthesis of ranking functions with transition examples (see Table 1). In fact, decision tree learning in the CEGIS architecture has already been actively pursued, for the synthesis of invariants as opposed to ranking functions [12,14,18,22,35]. It is therefore a natural idea to adapt the decision tree learning algorithms used there, from invariants to ranking functions. However, we find that a naive adaptation of those algorithms for invariants does not suffice: they are good at handling the state examples that appear in CEGIS for invariants, but they are not good at handling transition examples.
More specifically, when decision tree learning is applied to invariant synthesis (Fig. 2a), examples are given in the form of program states labeled as positive or negative. Decision trees are then built by iteratively selecting the best halfspaces, where "best" is in terms of some quality measure, until each leaf contains examples with the same label. One common quality measure used here is the information-theoretic notion of information gain.
We extend this from invariant synthesis to ranking function synthesis, where examples are given by transitions instead of states (Fig. 2b). In this case, a major challenge is to cope with examples that cross a border of the current segmentation, such as the transition e_4 crossing the border h_1 in Fig. 2b. Our decision tree learning algorithm should handle such crossing examples, taking into account the constraints imposed on the leaf labels affected by those examples (the affected leaf labels are f_1(x) and f_3(x) in the case of e_4).

Our Algorithm: Cycle-Based Decision Tree Learning for Transition Examples
We use what we call the cycle detection theorem (Theorem 17) as a theoretical tool to handle such crossing examples. The theorem claims the following: if there is no piecewise affine ranking function with the current segmentation of the domain (such as the one in Fig. 2b given by h_1 and h_2), then this must be caused by a certain type of cycle of constraints, which we call an implicit cycle.
In our decision tree learning algorithm, when we do not find a piecewise affine ranking function with the current segmentation, we find an implicit cycle and refine the segmentation to break the cycle. Once all the implicit cycles are gone, the cycle detection theorem guarantees the existence of a candidate piecewise affine ranking function with the segmentation.
We integrate this decision tree learning algorithm in the CEGIS architecture (Fig. 1) and use it as a synthesizer. Our implementation of this framework gives promising experimental results on existing benchmark sets.
Contribution Our contribution is summarized as follows.
- We provide a decision tree-based synthesizer for ranking functions, integrated into the CEGIS architecture. Our synthesizer uses transition examples to find candidate piecewise affine ranking functions. A major challenge here, namely handling the constraints arising from crossing examples, is addressed by our theoretical observation, the cycle detection theorem.
- We implement our synthesizer for ranking functions in MuVal and report the experience of using MuVal for termination and non-termination analysis. The experimental results show that MuVal's performance is comparable to state-of-the-art termination analyzers [7,10,13,21] from Termination Competition 2020, and that MuVal can prove (non-)termination of some benchmarks with which other analyzers struggle.
Organization.

Preview by Examples
We present a preview of our method using concrete examples. We start with an overview of the general CEGIS architecture, after which we proceed to our main contribution, namely a decision tree learning algorithm for transition examples.

Termination Verification by CEGIS
Our method follows the usual workflow of termination verification by CEGIS. It works as follows: given a program, we encode the termination problem into a constraint solving problem, and then use the CEGIS architecture to solve that problem.
Encoding the termination problem. The first step of our method is to encode the termination problem as a set C of constraints.
Example 1. As a running example, consider the following C program.
The termination problem is encoded as the following constraints.
Here, R is a predicate variable representing a well-founded relation, and the term variables x, x′ are universally quantified implicitly.
The set C of constraints claims that the transition relation of the given program is subsumed by a well-founded relation. So, verifying termination is now rephrased as the existence of a solution for C. Note that we omitted constraints for invariants for simplicity in this example (see Sect. 3 for the full encoding). In the CEGIS architecture, a synthesizer and a validator iteratively exchange a set E of examples and a candidate solution R(x, x′) for C. At the moment, we present a rough sketch of CEGIS, leaving the details of our implementation to Sect. 2.2.
Example 2. Fig. 3 shows how the CEGIS architecture solves the set C of constraints shown in (2) and (3). Fig. 3 consists of three pairs of interactions (i)-(vi) between a synthesizer and a validator.
(i) The synthesizer takes E = ∅ as a set of examples and returns a candidate solution R(x, x′) = ⊥ synthesized from E. In general, candidate solutions are required to satisfy all constraints in E, but the requirement is vacuously true in this case.
(ii) The validator receives the candidate solution and finds out that it is not a genuine solution. The validator finds that the assignment x = 1, x′ = 0 is a counterexample for (3), and thus adds R(1, 0) to E to prevent the same candidate solution in the next iteration.

Handling Cycles in Decision Tree Learning
Fig. 4: An example of a decision tree that represents a piecewise affine ranking function f(x, y)

We explain the importance of handling cycles in our decision tree-based synthesizer of piecewise affine ranking functions.
In what follows, we deal with decision trees as shown in Fig. 4: their internal nodes have affine inequalities (i.e. halfspaces); their leaves have affine functions; and overall, such a decision tree expresses a piecewise affine function. When we remove the leaf labels from such a decision tree, we obtain a template of piecewise functions where the condition guards are given but the function bodies are not. We shall call the latter a segmentation.
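Such a tree can be encoded directly. The `Leaf`/`Node` classes below are a hypothetical representation (leaves hold affine functions, internal nodes hold halfspaces), shown here only to make the evaluation of a piecewise affine function concrete.

```python
# Hypothetical encoding of a decision tree for a piecewise affine function.
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    coeffs: tuple   # affine function  a1*x1 + ... + an*xn + const
    const: float

@dataclass
class Node:
    h_coeffs: tuple  # halfspace  h(x) >= 0  with  h(x) = c1*x1 + ... + cn*xn + const
    h_const: float
    then_: "Tree"    # branch taken when h(x) >= 0
    else_: "Tree"    # branch taken when h(x) < 0

Tree = Union[Leaf, Node]

def evaluate(t: Tree, x):
    """Evaluate the piecewise affine function represented by tree t at point x."""
    if isinstance(t, Leaf):
        return sum(a * xi for a, xi in zip(t.coeffs, x)) + t.const
    h = sum(c * xi for c, xi in zip(t.h_coeffs, x)) + t.h_const
    return evaluate(t.then_ if h >= 0 else t.else_, x)
```

For instance, the tree `Node((1,), 0, Leaf((1,), 0), Leaf((-1,), 0))` encodes f(x) = if x ≥ 0 then x else −x.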
Input and output of our synthesizer. The input of our synthesizer is a set of transition examples over program states, where x is a sequence of variables; its output is a piecewise affine function f(x), which is represented by a decision tree (Fig. 4). Therefore our synthesizer aims at learning a suitable decision tree.
Refining segmentations and handling cycles. Roughly speaking, our synthesizer learns decision trees in the following steps.
1. Generate a set H of halfspaces from the given set E of examples. This H serves as the vocabulary for internal nodes. Set the initial segmentation to be the one-node tree (i.e. the trivial segmentation).
2. Try to synthesize a piecewise affine ranking function f for E with the current segmentation, that is, try to find suitable leaf labels. If found, then return this f as a candidate.
3. Otherwise, refine the current segmentation with some halfspace in H, and go to Step 2.
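The steps above can be sketched in a drastically simplified one-dimensional form. This is a hypothetical toy, not the actual algorithm: cells are plain lists of transitions, the leaf vocabulary is fixed to the two functions f(x) = x and f(x) = −x, and halfspaces are thresholds c meaning x ≥ c.

```python
# Toy 1-D sketch of Steps 1-3 (hypothetical simplification of the method).
def fit_leaves(segmentation):
    """Try to label each cell with f(x) = x or f(x) = -x; None if some cell fails.

    f is a ranking function for a cell iff f(x) >= 0 and f(x) - f(x') >= 1
    for every transition (x, x') in the cell.
    """
    labels = []
    for cell in segmentation:
        if all(x >= 0 and x - x2 >= 1 for (x, x2) in cell):
            labels.append("x")
        elif all(-x >= 0 and -x + x2 >= 1 for (x, x2) in cell):
            labels.append("-x")
        else:
            return None
    return labels

def refine(segmentation, c):
    """Step 3: split every cell by the halfspace x >= c, dropping empty parts."""
    out = []
    for cell in segmentation:
        hi = [t for t in cell if t[0] >= c]
        lo = [t for t in cell if t[0] < c]
        out += [p for p in (hi, lo) if p]
    return out

def synthesize(examples, halfspaces):
    segmentation = [list(examples)]        # trivial one-node segmentation
    for c in halfspaces:                   # Step 2: fit; Step 3: refine and retry
        labels = fit_leaves(segmentation)
        if labels is not None:
            return segmentation, labels
        segmentation = refine(segmentation, c)
    return segmentation, fit_leaves(segmentation)
```

On E = {(1, 0), (−2, −1)} with the single halfspace x ≥ 0, no single label fits the trivial segmentation, one refinement splits the cell, and the two leaves get labels x and −x, mirroring the piecewise function if x ≥ 0 then x else −x.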
The key step of our synthesizer is Step 3. We show a few examples.
Example 3. Suppose we are given E = {R(1, 0), R(−2, −1)} as a set of examples. Our synthesizer proceeds as follows: (1) our synthesizer generates the set H of halfspaces from E; (2) since no single affine function is a ranking function for E, the trivial segmentation is refined with a halfspace in H that "looks good". How can we decide which halfspace in H "looks good"? We use a quality measure, a value representing the quality of each halfspace, and select the halfspace with the maximum quality measure. Fig. 5 compares the quality of x ≥ 0 and x ≥ −2 in this example. Intuitively, x ≥ 0 is better than x ≥ −2 because with x ≥ 0 we obtain a simple ranking function (if x ≥ 0 then x else −x) (Fig. 5a), while with x ≥ −2 we need further refinement of the segmentation (Fig. 5b). In Sect. 5, we introduce a quality measure for halfspaces following this intuition.
Our synthesizer iteratively refines segmentations following this quality measure, until the examples contained in each leaf of the decision tree admit an affine ranking function. This approach is inspired by the use of information gain in decision tree learning for invariant synthesis.
Example 3 showed a natural extension of a decision tree learning method for invariant synthesis. However, this is not enough for transition examples, because of explicit and implicit cycles, which the following examples illustrate.
(1) Our synthesizer generates the set H of halfspaces. (2) Our synthesizer tries to find a ranking function of the form f(x) = ax + b (with the trivial segmentation), but there is no such function.
(3) Our synthesizer refines the current segmentation with (x ≥ 1) ∈ H because x ≥ 1 "looks good" (i.e. is the best with respect to a quality measure).
We have reached the point where the naive extension of decision tree learning explained in Example 3 no longer works: although the constraints contained in each leaf of the decision tree admit an affine ranking function, there is no piecewise affine ranking function for E with this segmentation. More specifically, in this example, the leaf representing x ≥ 1 contains R(2, 3), and the other leaf representing ¬(x ≥ 1) contains R(−1, −2). The example R(2, 3) admits an affine ranking function f_1(x) = −x + 2, and R(−1, −2) admits f_2(x) = x + 1. However, the combination f(x) = if x ≥ 1 then f_1(x) else f_2(x) is not a ranking function for E. Moreover, there is no ranking function for E of this form, whatever affine f_1 and f_2 we choose. It is clear that this failure is caused by the crossing examples R(−1, 1) and R(1, 0). Not every crossing example is harmful; however, in this case, the set {R(−1, 1), R(1, 0)} forms a cycle between the leaf for x ≥ 1 and the leaf for ¬(x ≥ 1) (see Fig. 6). This "cycle" among leaves, in contrast to explicit cycles such as {R(1, 0), R(0, 1)} in Example 4, is called an implicit cycle.
Once an implicit cycle is found, our synthesizer cuts it by refining the current segmentation. Our synthesizer continues the above steps (1-3) of decision tree learning as follows. (4) Our synthesizer selects (x ≥ 0) ∈ H and cuts the implicit cycle {R(−1, 1), R(1, 0)} by refining the segmentation. (5) Using the refined segmentation, our synthesizer obtains f(x) = if x ≥ 1 then −x + 2 else if x ≥ 0 then 0 else x + 3 as a ranking function for E.
As explained in Examples 4 and 5, handling (explicit and implicit) cycles is crucial in decision tree learning for transition examples. Moreover, our cycle detection theorem (Theorem 17) claims that if there is no explicit or implicit cycle, then one can find a ranking function for E without further refinement of the segmentation.

(Non-)Termination Verification as Constraint Solving
We explain how to encode (non-)termination verification into constraint solving.
Following [30], we formalize our target class pwCSP of predicate constraint satisfaction problems, parametrized by a first-order theory T.

Definition 6. Given a formula φ, let ftv(φ) be the set of free term variables and fpv(φ) the set of free predicate variables in φ.

Definition 7. A pwCSP is defined as a pair (C, R) where C is a finite set of clauses of the form

φ ∨ X_1(t_1) ∨ · · · ∨ X_ℓ(t_ℓ) ∨ ¬X_{ℓ+1}(t_{ℓ+1}) ∨ · · · ∨ ¬X_m(t_m)

and R ⊆ fpv(C) is a set of predicate variables that are required to denote well-founded relations. Here, 0 ≤ ℓ ≤ m. Meta-variables t and φ range over T-terms and T-formulas, respectively, such that fpv(φ) = ∅. Meta-variables x and X range over term and predicate variables, respectively.
A pwCSP (C, R) is called CHCs (constrained Horn clauses, [9]) if R = ∅ and ℓ ≤ 1 for all clauses c ∈ C. The class of CHCs has been widely studied in the verification community [12,14,27,35].

Definition 8. A predicate substitution σ is a finite map from predicate variables X to closed predicates of the form λx_1, . . ., x_{ar(X)}. φ. We write dom(σ) for the domain of σ and σ(C) for the application of σ to C.

Definition 9. A predicate substitution σ is a (genuine) solution for (C, R) if (1) fpv(C) ⊆ dom(σ); (2) |= σ(C) holds; and (3) for all X ∈ R, σ(X) represents a well-founded relation, that is, sort(σ(X)) = (s, s) → • for some sequence s of sorts and there is no infinite sequence v_1, v_2, . . . of sequences v_i of values of the sorts s such that |= σ(X)(v_i, v_{i+1}) for all i ≥ 1.
Encoding termination. Given a set of initial states ι(x) and a transition relation τ(x, x′), the termination verification problem is expressed by the pwCSP (C, R) where R = {R} and C consists of the following clauses.
We use φ =⇒ ψ as syntactic sugar for ¬φ ∨ ψ, so this is a pwCSP. The well-founded relation R asserts that τ is terminating. We also consider an invariant I for τ to avoid synthesizing ranking functions on unreachable program states.
Encoding non-termination. We can also encode the problem of non-termination verification into a pwCSP via recurrent sets [20]. For simplicity, we explain the encoding for the case of only one program variable x. We consider a recurrent set R satisfying the following conditions.
To remove ∃ from (6), we use the following constraint, which is equivalent to (6).
The intuition is as follows. Given x in the recurrent set R, the relation E(x, x′) searches for a witness of the existential quantifier in (6). The search starts from x′ = 0 in (7), and x′ is nondeterministically incremented or decremented in (8). The well-founded relation S asserts that the search finishes within finitely many steps. As a result, we obtain a pwCSP for non-termination defined by (C, R) where R = {S} and C is given by (5), (7), and (the disjunctive normal form of) (8).

CounterExample-Guided Inductive Synthesis (CEGIS)
We explain how CounterExample-Guided Inductive Synthesis (CEGIS) [28] works for a given pwCSP (C, R), following [30]. Then, we add the extraction of positive/negative examples to the CEGIS architecture, which enables our decision tree-based synthesizer to use a simplified form of examples. CEGIS proceeds through the iterative interaction between a synthesizer and a validator (Fig. 1), in which they exchange examples and candidate solutions.
Synthesizer. The input for a synthesizer is a set E of examples of C collected from previous CEGIS iterations. The synthesizer tries to find a candidate solution σ consistent with E instead of a genuine solution for (C, R). If a candidate solution σ is found, then σ is passed to the validator. If E is unsatisfiable, then E witnesses the unsatisfiability of (C, R). Details of our synthesizer are described in Sect. 5.
Validator. A validator checks whether the candidate solution σ from the synthesizer is a genuine solution of (C, R) by using SMT solvers. That is, the validator checks the satisfiability of the negation of each clause in σ(C); a satisfying assignment, if any, gives a counterexample. To prevent this counterexample from being found again in the next CEGIS iteration, the validator adds the following example to E.
The CEGIS architecture repeats this interaction between the synthesizer and the validator until a genuine solution for (C, R) is found or E witnesses unsatisfiability of (C, R).

Extraction of positive/negative examples.
The examples obtained in the above explanation are somewhat complex to handle in our decision tree-based synthesizer: each example in E is a disjunction (9) of literals, which may contain multiple predicate variables.
To simplify the form of examples, we extract from E the sets E+_X and E−_X of positive examples (i.e., examples of the form X(v)) and negative examples (i.e., examples of the form ¬X(v)) for each X ∈ fpv(E). This allows us to synthesize a predicate σ(X) for each predicate variable X ∈ fpv(E) separately. The extraction is done as follows. We first substitute a boolean variable b_X(v) for each predicate variable application X(v) in E to obtain a SAT problem SAT(E). Then, we use SAT solvers to obtain an assignment η that is a solution for SAT(E). If a solution η exists, then we construct positive/negative examples from η; otherwise, E is unsatisfiable.

Definition 12. Let η be a solution for SAT(E). For each predicate variable X ∈ fpv(E), we define the set E+_X of positive examples and the set E−_X of negative examples under the assignment η by collecting those v with η(b_X(v)) = true and those with η(b_X(v)) = false, respectively.

Note that some predicate variable applications X(v) may be assigned neither true nor false because they do not affect the evaluation of SAT(E). Such predicate variable applications are discarded from {(E+_X, E−_X)}_{X ∈ fpv(E)}. Our method uses the extraction of positive and negative examples when the validator passes examples to the synthesizer. If X ∈ fpv(E) ∩ R, then we apply our ranking function synthesizer to (E+_X, E−_X). If X ∈ fpv(E) \ R, then we apply an invariant synthesizer.
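The extraction step can be illustrated with a small stand-in. The paper uses a SAT solver; the hypothetical `extract` below brute-forces an assignment over the boolean atoms instead, and assigns every atom (so the "discarded unassigned atoms" subtlety does not arise here).

```python
# Brute-force sketch of the extraction of positive/negative examples
# (exhaustive search stands in for the SAT solver; names are hypothetical).
from itertools import product

def extract(clauses, atoms):
    """clauses: list of clauses, each a list of (atom, polarity) literals.

    Returns (positives, negatives) for the first satisfying assignment,
    or None if the clause set is unsatisfiable.
    """
    for bits in product([False, True], repeat=len(atoms)):
        eta = dict(zip(atoms, bits))
        # a clause is satisfied if some literal's polarity matches eta
        if all(any(eta[a] == pol for a, pol in cl) for cl in clauses):
            pos = {a for a in atoms if eta[a]}
            return pos, set(atoms) - pos
    return None  # E is unsatisfiable
```

For instance, with atoms R(1,0) and I(2) and the clauses {R(1,0)} and {I(2) ∨ ¬R(1,0)}, the atom R(1,0) must be extracted as a positive example.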
We say a candidate solution σ is consistent with {(E+_X, E−_X)}_{X ∈ fpv(E)} if |= σ(X)(v) for each v ∈ E+_X and |= ¬σ(X)(v) for each v ∈ E−_X. Note that the unsatisfiability of {(E+_X, E−_X)}_{X ∈ fpv(E)} depends on the choice of the assignment η. Therefore, the CEGIS architecture needs to be modified: if the synthesizers find {(E+_X, E−_X)}_{X ∈ fpv(E)} unsatisfiable, then we add the negation of an unsatisfiable core to E to prevent using the same assignment η again.
Note that some restricted forms of (9) have also been considered in previous work, called implication examples in [17] and implication/negation constraints in [12]. Our extraction of positive and negative examples is applicable to the general form of (9).

Ranking Function Synthesis
In this section, we describe one of our main contributions, namely our decision tree-based synthesizer, which synthesizes a candidate well-founded relation σ(R) from a finite set E+_R of examples. We assume that only positive examples are given because well-founded relations occur only positively in the pwCSP for termination analysis (see Sect. 3). The aim of our synthesizer is to find a piecewise affine lexicographic ranking function f(x) for the given set E+_R of examples. Below, we fix a predicate variable R ∈ R and omit the subscript, writing E+ for E+_R.

Basic Definitions
To represent piecewise affine lexicographic ranking functions, we use decision trees like the one in Fig. 4. Let x = (x_1, . . ., x_n) be the program variables, where each x_i ranges over Z.
Definition 13. A decision tree D is defined by D := g(x) | if h(x) ≥ 0 then D else D, where g(x) = (g_k(x), . . ., g_0(x)) is a tuple of affine functions and h(x) is an affine function. A segmentation tree S is defined as a decision tree with undefined leaves ⊥: that is, S := ⊥ | if h(x) ≥ 0 then S else S. For each decision tree D, we can canonically assign a segmentation tree by replacing the label of each leaf with ⊥; this is denoted by S(D). For each decision tree D, we denote the corresponding piecewise affine function by f_D(x): it maps each point to the value of the tuple labeling the leaf whose region contains that point.

Each leaf in a segmentation tree S corresponds to a polyhedron. We often identify the segmentation tree S with the set of leaves of S, and a leaf with the polyhedron corresponding to it. For example, we say something like "for each L ∈ S, v ∈ L is a point in the polyhedron L". Suppose we are given a segmentation tree S and a set E+ of examples.
Definition 14. For each L_1, L_2 ∈ S, we denote the set of example transitions from L_1 to L_2 by E+_{L_1,L_2} := {(v, v′) ∈ E+ | v ∈ L_1, v′ ∈ L_2}.

Definition 15. We define the dependency graph G(S, E+) for S and E+ as the graph (V, E) where the vertices V = S are leaves and the edges are E := {(L_1, L_2) | L_1 ≠ L_2, E+_{L_1,L_2} ≠ ∅}. We denote the set of start points v and end points v′ of the examples (v, v′) ∈ E+ by St(E+).
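Definitions 14 and 15 can be sketched concretely. In the hypothetical representation below, a segmentation is a list of membership predicates (one per leaf/polyhedron) and examples are transition pairs; the edge set of the dependency graph is then a direct transcription of Definition 15.

```python
# Sketch of Definitions 14-15 (hypothetical representation: one membership
# predicate per leaf; examples are pairs of states).
def cell_of(segmentation, v):
    """Index of the leaf/polyhedron containing point v."""
    for i, contains in enumerate(segmentation):
        if contains(v):
            return i
    raise ValueError("point outside all cells")

def dependency_graph(segmentation, examples):
    """Edges (L1, L2) between distinct leaves with some transition from L1 to L2."""
    edges = set()
    for v, v2 in examples:
        l1, l2 = cell_of(segmentation, v), cell_of(segmentation, v2)
        if l1 != l2:
            edges.add((l1, l2))
    return edges
```

With the segmentation {x ≥ 1, x < 1} and the examples of Example 5, the two crossing examples produce the edge pair {(0, 1), (1, 0)}, i.e. an implicit cycle between the two leaves.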

Segmentation and (Explicit and Implicit) Cycles:
One-Dimensional Case
For simplicity, we first consider the one-dimensional case, where f(x) is given by a single component (k = 0). If our ranking function synthesizer finds such a ranking function f(x), then a candidate well-founded relation is given by R_f(x, x′) := f(x) ≥ 0 ∧ f(x) − f(x′) ≥ 1. Our synthesizer builds a decision tree D to find a ranking function f_D(x) for E+. The main question in doing so is "when and how should we refine the partitions of decision trees?" To answer this question, we consider the case where there is no ranking function f_D(x) for E+ with a fixed segmentation S, and classify the reasons for this into three cases as follows.
Case 1: explicit cycles in examples. We define an explicit cycle in E+ as a cycle in the graph (Z^n, E+). An explicit cycle witnesses that there is no ranking function for E+ (see e.g. Example 4).
Case 2: lack of affine ranking functions within a leaf. Case 2 is the case where, for some leaf L ∈ S, there is no affine ranking function for the examples E+_{L,L} inside L.
Case 3: implicit cycles in the dependency graph. We define an implicit cycle as a cycle in the dependency graph G(S, E+). Case 3 is the case where an implicit cycle prohibits the existence of piecewise affine ranking functions for E+ with the segmentation S (e.g., Example 5). If Case 1 and Case 2 do not hold but no piecewise affine ranking function for E+ with the segmentation S exists, then there must be an implicit cycle, by (the contraposition of) the following proposition.
Proposition 16. Assume E+ is a set of examples that does not contain explicit cycles (i.e. Case 1 does not hold). Let S be a segmentation tree and assume that for each L ∈ S, there exists an affine ranking function f_L(x) for E+_{L,L} (i.e. Case 2 does not hold). If the dependency graph G(S, E+) is acyclic, then there exists a decision tree D with the segmentation S(D) = S such that f_D(x) is a ranking function for E+.
Proof. By induction on the height (i.e. the length of a longest path from a vertex) of vertices in G(S, E+). We construct a decision tree D as follows. If the height of L ∈ S is 0, then we assign the affine ranking function f_L(x) to the leaf L. If the height of L is n > 0, then we assign f_L(x) shifted by a constant large enough that every example crossing from L into a cell of height less than n also decreases.
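The height used in this induction is easy to compute. The sketch below (hypothetical helper, assuming the dependency graph is acyclic as in the proposition) derives each vertex's height as the length of its longest outgoing path, which determines the order in which leaves receive their shifted functions.

```python
# Sketch: heights of vertices in an acyclic dependency graph
# (height = length of the longest path starting at the vertex).
import functools

def heights(vertices, edges):
    succ = {v: [w for (u, w) in edges if u == v] for v in vertices}

    @functools.lru_cache(maxsize=None)
    def h(v):
        # sinks of the DAG get height 0; otherwise 1 + max over successors
        return 1 + max((h(w) for w in succ[v]), default=-1)

    return {v: h(v) for v in vertices}
```

Leaves of height 0 keep their local ranking functions as-is; leaves of larger height are processed afterwards with increasingly large constant shifts.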
Note that the converse of Proposition 16 does not hold: the existence of implicit cycles in G(S, E + ) does not necessarily imply that no piecewise affine ranking function exists with the segmentation S.

Segmentation and (Explicit and Implicit) Cycles: Multi-Dimensional Lexicographic Case
We consider the more general case where f(x) = (f_k(x), . . ., f_0(x)) is a multi-dimensional lexicographic ranking function and k is a fixed nonnegative integer. Given a function f(x), we consider the well-founded relation R_f(x, x′) defined inductively as follows:

R_()(x, x′) := ⊥,
R_(f_k, . . ., f_0)(x, x′) := (f_k(x) ≥ 0 ∧ f_k(x) − f_k(x′) ≥ 1) ∨ (f_k(x) = f_k(x′) ∧ R_(f_{k−1}, . . ., f_0)(x, x′)).   (10)
Our aim here is to find a lexicographic ranking function f(x) for E+, i.e. a function f(x) such that R_f(v, v′) holds for each (v, v′) ∈ E+. Our synthesizer does so by building a decision tree. The same argument as in the one-dimensional case holds for lexicographic ranking functions.
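Checking whether one transition satisfies the lexicographic relation mirrors the inductive definition above; the hypothetical `lex_decreases` below walks the components from f_k down to f_0, succeeding when some component decreases while nonnegative and all earlier components are tied.

```python
# Sketch of checking R_f(v, v') for a tuple fs = (f_k, ..., f_0) of functions.
def lex_decreases(fs, v, v2):
    for f in fs:                              # from f_k down to f_0
        if f(v) >= 0 and f(v) - f(v2) >= 1:
            return True                       # this component decreases
        if f(v) != f(v2):
            return False                      # neither decreases nor ties
    return False                              # all components tied: R_() is false
```

For example, with fs = (v ↦ v[0], v ↦ v[1]), the transition (2, 5) → (2, 3) is accepted (the first component ties, the second decreases) while (1, 0) → (2, 0) is rejected.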

Theorem 17 (cycle detection)
Assume E+ is a set of examples that does not contain explicit cycles. Let S be a segmentation tree and assume that for each L ∈ S, there exists an affine lexicographic ranking function f_L(x) for E+_{L,L}. If the dependency graph G(S, E+) is acyclic, then there exists a decision tree D with the segmentation S(D) = S such that f_D(x) is a lexicographic ranking function for E+.

Proof. The proof is almost the same as that of Proposition 16. Here, note that if f′(x) = f(x) + c, where c is a tuple of nonnegative integer constants, then R_{f′}(x, x′) subsumes R_f(x, x′).
Algorithm 1 Building decision trees.
1: if E+ contains an explicit cycle then
2: return unsatisfiable
3: end if
4: D := ResolveCase2(E+)
5: while true do
6: C := GetConstraints(D, E+)
7: O := SumAbsParams(D)
8: ρ := a solution of C minimizing O (undefined if C is unsatisfiable)
9: if ρ is defined then
10: f := the piecewise affine function obtained by instantiating D with ρ
11: return R_f
12: else
13: get an unsat core in C
14: find an implicit cycle (v_1, v′_1), . . ., (v_l, v′_l) in the unsat core
15: find a cell C and two distinct points v′_i, v_{i+1} ∈ C in the implicit cycle
16: add a halfspace to separate v′_i and v_{i+1} and update D
17: end if
18: end while

Our Decision Tree Learning Algorithm
We design a concrete algorithm based on Theorem 17. It is shown in Algorithm 1 and consists of three phases, which we describe one by one.
Phase 1. Phase 1 (Lines 1-3) detects explicit cycles in E+ to exclude Case 1. Here, we use a cycle detection algorithm for directed graphs.
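A standard three-color depth-first search suffices for this check; the sketch below (a generic helper, not the paper's code) treats the examples as directed edges between states.

```python
# Sketch of Phase 1: detect a cycle in the directed graph given by example edges.
def has_cycle(edges):
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)
        succ.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {v: WHITE for v in succ}

    def visit(u):
        color[u] = GRAY
        for w in succ[u]:
            if color[w] == GRAY or (color[w] == WHITE and visit(w)):
                return True               # back edge found: cycle
        color[u] = BLACK
        return False

    return any(color[v] == WHITE and visit(v) for v in succ)
```

On the explicit cycle {R(1, 0), R(0, 1)} of Example 4 this returns true, so Algorithm 1 answers "unsatisfiable" immediately.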
Phase 2. Phase 2 (Line 4) detects and resolves Case 2 by using ResolveCase2 (Algorithm 2), a function that grows a decision tree recursively. ResolveCase2 takes the non-crossing examples in a leaf, divides the leaf, and returns a template tree that is fine enough to avoid Case 2. Here, template trees are decision trees whose leaves are labeled with affine templates.
Algorithm 2 shows the details of ResolveCase2. ResolveCase2 builds a template tree recursively, starting from the trivial segmentation S = ⊥ and all given examples. In each polyhedron, ResolveCase2 checks whether the set C of constraints imposed by the non-crossing examples can be satisfied by an affine lexicographic ranking function on the polyhedron (Lines 2-3). If the set C of constraints is not satisfiable, then ResolveCase2 chooses a halfspace h(x) ≥ 0 (Line 6) and divides the current polyhedron by the halfspace.
There is a certain amount of freedom in the choice of halfspaces. To guarantee termination of the whole algorithm, we require that the halfspace h chosen by ChooseQualifier (Line 6 of Algorithm 2) separates at least one pair of example points in the current polyhedron:

Assumption 18. If a halfspace h(x) ≥ 0 is chosen in ResolveCase2, then there exist example points v, u in the current polyhedron such that h(v) ≥ 0 and h(u) < 0.

Here, GetConstraints(D, E+) in Algorithm 2 returns the set of constraints claiming that f_D decreases along each example, where f_D is the tuple of piecewise affine functions corresponding to D, and Algorithm 3 gives a criterion (QualityMeasure) for eager qualifier selection. We explain two strategies (eager and lazy) for choosing halfspaces that can be used to implement ChooseQualifier. Both of them are guaranteed to terminate and, moreover, are intended to yield simple decision trees.
Eager strategy. In the eager strategy, we eagerly generate a finite set H of halfspaces from the set E+ of all examples beforehand, and choose the best one from H with respect to a certain quality measure. To satisfy Assumption 18, H is generated so that any two distinct points appearing in E+ can be separated by some halfspace (h(x) ≥ 0) ∈ H.
For example, we can use intervals (halfspaces of the form ±x_i + c ≥ 0) or octagons (halfspaces of the form ±x_i ± x_j + c ≥ 0), where the constants c are determined by the coordinates of the points appearing in E+. For the halfspaces chosen in ResolveCase2, intervals and octagons can separate any two distinct points, so Assumption 18 is satisfied by choosing the best halfspace with respect to the quality measure from H.
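One plausible reading of the interval case can be sketched as follows (hypothetical encoding: a halfspace ±x_i − c ≥ 0 is a triple of axis, sign, and constant, with constants drawn from the coordinates of the example points).

```python
# Hypothetical eager generation of interval halfspaces from example points.
def interval_halfspaces(points):
    """Halfspaces of the form  s*x_i >= c  as triples (i, s, c), s in {+1, -1}."""
    hs = set()
    for p in points:
        for i, c in enumerate(p):
            hs.add((i, +1, c))    #  x_i >= c
            hs.add((i, -1, -c))   # -x_i >= -c, i.e. x_i <= c
    return hs

def separates(h, u, v):
    """True iff u and v fall on different sides of the halfspace h."""
    i, s, c = h
    return (s * u[i] >= c) != (s * v[i] >= c)
```

Since every coordinate of every point contributes a threshold, any two distinct points are separated by some generated halfspace, which is the separation property Assumption 18 needs.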
For each halfspace (h(x) ≥ 0) ∈ H, we calculate QualityMeasure in Algorithm 3 and choose one that maximizes QualityMeasure(h, E+). QualityMeasure(h, E+) calculates the sum, over the two leaves obtained by dividing with h(x) ≥ 0, of the maximum number of satisfiable constraints in each leaf, plus an additional term.

Lazy strategy. In the lazy strategy, we generate halfspaces lazily. We divide the current polyhedron so that the non-crossing examples in the cell point in almost the same direction.
First, we label the states that occur in E+_{C,C}: we find a direction that most examples in C point to by solving a MAX-SMT problem, and label the states accordingly. Then we apply weighted C-SVM to generate a hyperplane that separates most of the positive and negative points. To guarantee termination of Algorithm 1, we avoid "useless" hyperplanes that classify all the points with the same label. If we obtain such a useless hyperplane, then we undersample a majority class and apply C-SVM again. By undersampling suitably, we eventually obtain linearly separable data with at least one positive point and one negative point.
Note that since the coefficients of hyperplanes extracted from C-SVM are floating point numbers, we have to approximate them by hyperplanes with rational coefficients. This is done by truncating the continued fraction expansions of the coefficients at a suitable length.
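In Python this truncation is available directly: `Fraction.limit_denominator` returns the best rational approximation with a bounded denominator, computed via the continued fraction expansion. The wrapper name below is our own.

```python
# Rounding floating-point hyperplane coefficients to rationals by truncating
# their continued fraction expansions (via Fraction.limit_denominator).
from fractions import Fraction

def rationalize(coeffs, max_den=100):
    """Best rational approximations with denominators bounded by max_den."""
    return [Fraction(c).limit_denominator(max_den) for c in coeffs]
```

For example, 0.3333333333333333 is recovered as 1/3 even with a small denominator bound.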
Phase 3. In Lines 5-18 of Algorithm 1, we further refine the segmentation S(D) to resolve Case 3. Once Case 2 is resolved by ResolveCase2, it never holds again, even after refining S(D) further. This enables us to separate Phases 2 and 3.
Given a template tree D, we consider the set C of constraints on the parameters in D claiming that f_D(x) is a ranking function for E+ (Line 6).
If C is satisfiable, we use an SMT solver to obtain a solution of C (i.e. an assignment ρ of integers to parameters), minimizing the sum of the absolute values of the unknown parameters in D at the same time (Line 8). This minimization is intended to give a simple candidate ranking function. The solution ρ is used to instantiate the template tree D (Line 11).
If C cannot be satisfied, there must be an implicit cycle in the dependency graph G(S(D), E+) by Theorem 17. The implicit cycle can be found in an unsatisfiable core of C. We refine the segmentation of D to cut the implicit cycle in Line 16. To guarantee termination, we choose a halfspace satisfying the following assumption, which is similar to Assumption 18.
Assumption 19. If a halfspace h(x) ≥ 0 is chosen in Line 16 of Algorithm 1, then there exist v, u ∈ E+ such that h(v) ≥ 0 and h(u) < 0.
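The implicit-cycle structure can also be illustrated without the SMT encoding: viewing leaves as nodes and crossing example transitions as edges, a cycle in this graph is what blocks a leaf-wise affine ranking function. A minimal sketch (leaf names hypothetical; detection by plain depth-first search rather than unsatisfiable cores):

```python
def find_cycle(edges):
    """Find a cycle in a directed graph given as a list of edges,
    returning a list of nodes on the cycle, or None."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    stack = []

    def dfs(u):
        color[u] = GRAY
        stack.append(u)
        for v in graph[u]:
            if color[v] == GRAY:          # back edge closes a cycle
                return stack[stack.index(v):]
            if color[v] == WHITE:
                cycle = dfs(v)
                if cycle:
                    return cycle
        color[u] = BLACK
        stack.pop()
        return None

    for n in graph:
        if color[n] == WHITE:
            cycle = dfs(n)
            if cycle:
                return cycle
    return None

# Example transitions crossing between leaves L0 -> L1 -> L2 -> L0.
print(find_cycle([("L0", "L1"), ("L1", "L2"), ("L2", "L0")]))  # -> ['L0', 'L1', 'L2']
```

Cutting such a cycle with a new halfspace splits one of the nodes, which is exactly what the refinement in Line 16 does.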
We have two strategies (eager and lazy) to refine the segmentation of D.
In the eager strategy, we choose a halfspace (h(x) ≥ 0) ∈ H that separates two distinct points v_i and v_{i+1} in the implicit cycle. In doing so, we want to reduce the number of implicit cycles in G(S(D), E+), but adding a new halfspace may introduce new implicit cycles if there exists an example (v, v′) ∈ E+_{C,C} that crosses the new border from the side of v_i to the side of v_{i+1}. Therefore, we choose a hyperplane that minimizes the number of new crossing examples.
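A minimal sketch of this "minimize new crossing examples" heuristic (one-dimensional, hypothetical halfspaces and examples; the constraint that the chosen halfspace must also separate v_i and v_{i+1} is omitted for brevity):

```python
def crossing_count(h, examples):
    """Number of example transitions (v, v') that cross the border of
    the halfspace h(x) >= 0, i.e. whose endpoints get different signs."""
    return sum((h(v) >= 0) != (h(v2) >= 0) for v, v2 in examples)

def pick_halfspace(halfspaces, examples):
    """Eager strategy sketch: among candidate halfspaces, pick one that
    crosses the fewest example transitions."""
    return min(halfspaces, key=lambda h: crossing_count(h[1], examples))

# Candidate borders x >= 0 and x >= 2 on one-dimensional examples.
halfspaces = [("x >= 0", lambda x: x), ("x >= 2", lambda x: x - 2)]
examples = [(3, 2), (2, 1), (1, 0)]   # hypothetical transitions
best = pick_halfspace(halfspaces, examples)
print(best[0])  # -> x >= 0
```

Here x ≥ 2 is penalized because the transition (2, 1) would cross its border, while x ≥ 0 introduces no crossing examples at all.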
In the lazy strategy, we use an SMT solver to find a hyperplane h(x) ∈ H that separates v_i and v_{i+1} while minimizing the number of new crossing examples.
Termination. Assumptions 18 and 19 guarantee that every leaf in S(D) contains at least one point of the finite set E+. Because the number of leaves in S(D) strictly increases with each iteration of Phases 2 and 3, in the worst case we eventually reach a segmentation S(D) in which each L ∈ S(D) contains exactly one point of E+. Since Case 1 was excluded at the beginning, Theorem 17 then guarantees the existence of a ranking function with segmentation S(D). Therefore, the algorithm terminates within |E+| refinement steps.
Theorem 20. If Assumptions 18 and 19 hold, then Algorithm 1 terminates. If Algorithm 1 returns a piecewise affine lexicographic function f(x), then f satisfies R_f(x, x′) for each (x, x′) ∈ E+, where E+ is the input of the algorithm.

Improvement by Degenerating Negative Values
There is another way to define a well-founded relation from the tuple f(x) = (f_k(x), …, f_0(x)) of functions, namely the well-founded relation R′_f(x, x′) defined inductively by R′_()(x, x′) := ⊥ and

R′_{(f_k,…,f_0)}(x, x′) := (f_k(x) ≥ 0 ∧ f_k(x) − f_k(x′) ≥ 1) ∨ (((f_k(x) ≥ 0 ∧ f_k(x′) = f_k(x)) ∨ (f_k(x) < 0 ∧ f_k(x′) < 0)) ∧ R′_{(f_{k−1},…,f_0)}(x, x′)).

In this definition, we loosen the equality f_i(x) = f_i(x′) (where i = 1, …, k) of the usual lexicographic ordering (10): once f_i(x) becomes negative, f_i(x′) must stay negative, but the value does not have to be the same. This helps the synthesizer avoid complex candidate lexicographic ranking functions and thus improves performance.
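The relaxed relation can be read directly as a recursive check (a sketch; function and variable names are illustrative): the head component either strictly decreases from a nonnegative value, or it is preserved (equal, or negative staying negative) and the tail decreases.

```python
def relaxed_lex(fs, x, x2):
    """Relaxed lexicographic descent for a tuple of ranking functions
    fs = (f_k, ..., f_0): either the head strictly decreases from a
    nonnegative value, or the head stays equal (or stays negative)
    and the tail decreases."""
    if not fs:
        return False
    head, tail = fs[0], fs[1:]
    if head(x) >= 0 and head(x) - head(x2) >= 1:
        return True
    keeps = (head(x) >= 0 and head(x2) == head(x)) or \
            (head(x) < 0 and head(x2) < 0)
    return keeps and relaxed_lex(tail, x, x2)

# Two components over states (a, b); hypothetical transitions.
f1 = lambda s: s[0]   # may go negative and merely stay negative
f0 = lambda s: s[1]
print(relaxed_lex((f1, f0), (-1, 5), (-3, 4)))  # -> True: f1 stays negative, f0 decreases
print(relaxed_lex((f1, f0), (-1, 5), (2, 4)))   # -> False: f1 leaves the negatives
```

The first transition is rejected by the usual lexicographic ordering (f1 changes value) but accepted here, which is exactly the extra slack the synthesizer exploits.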
However, if we use this well-founded relation R′_f(x, x′) instead of R_f(x, x′) in (10), then Theorem 17 fails, because R′_f(x, x′) is not necessarily subsumed by R′_{f+c}, where c = (c_k, …, c_0) is a tuple of nonnegative constants (see the proofs of Proposition 16 and Theorem 17). As a result, it can happen that no implicit cycle is found in Line 14 of Algorithm 1. Therefore, when we use R′_f(x, x′), we modify Algorithm 1 so that, if no implicit cycle can be found in Line 14, we fall back on the original definition of R_f(x, x′) and restart Algorithm 1.

Implementation and Evaluation
Implementation. We implemented a constraint solver MuVal that supports invariant synthesis and ranking function synthesis. For invariant synthesis, we apply ordinary decision tree learning (see [12,14,18,22,35] for existing techniques). For ranking function synthesis, we implemented the algorithm in Sect. 5 with both the eager and lazy strategies for halfspace selection. Our synthesizer uses the well-founded relation explained in Sect. 5.5. Given a benchmark, we run our solver for termination and non-termination verification in parallel; when one of the two returns an answer, we stop the other and use that answer. MuVal is written in OCaml and uses Z3 as its backend SMT solver. We used clang and llvm2kittel [1] to convert C benchmarks to T2 [3] format files, which are then translated to pwCSP by MuVal.
MuVal was able to solve more benchmarks than Ultimate Automizer. Compared to iRankFinder, MuVal solved slightly fewer benchmarks but was faster on a large number of them: 265 benchmarks were solved faster by MuVal, 68 by iRankFinder, and 2 were solved by neither tool within 300 seconds (here, we regard U (unknown) as 300 seconds). Compared to AProVE, MuVal solved fewer benchmarks. However, there are several benchmarks that MuVal could solve but AProVE could not. Among them is "TelAviv-Amir-Minimum true-termination.c", which does require a piecewise affine ranking function: MuVal found the ranking function f(x, y) = if x − y ≥ 0 then y else x, while AProVE timed out.
We also observed that CEGIS with transition examples showed its strength even on benchmarks that do not require piecewise affine ranking functions. Notably, there are three benchmarks that MuVal could solve but the other tools could not; they are examples that do not require segmentation. Further analysis of these benchmarks indicates the following strengths of our framework: (1) the ability to handle nonlinear constraints (to some extent), thanks to example-based synthesis and recent developments in SMT solvers; and (2) the ability to find a long lasso-shaped non-terminating trace assembled from multiple transition examples. See Appendix A for details.

Related Work
Many works synthesize ranking functions via constraint solving. Among them are counterexample-guided methods such as CEGIS [28]. CEGIS is sound but not guaranteed to be complete in general: even if a given constraint has a solution, CEGIS may fail to find it. A complete method for ranking function synthesis is proposed in [19]: it collects only extremal counterexamples, instead of arbitrary transition examples, to avoid infinitely many examples. A limitation of that method is that the search space is limited to (lexicographic) affine ranking functions. Another counterexample-guided method is proposed in [32] and implemented in SeaHorn. This method can synthesize piecewise affine functions, but its approach is quite different from ours: given a program, it constructs a safety property stating that the number of loop iterations does not exceed the value of a candidate ranking function. The safety property is checked by a verifier; if it is violated, a trace is obtained as a counterexample and the candidate ranking function is updated using that counterexample. The main difference from our method is that their method uses trace examples, while our method uses transition examples (which are less expensive to handle). FreqTerm [15] also uses the connection to safety properties, but exploits syntax-guided synthesis for synthesizing ranking functions.
However, when we apply this technique to piecewise affine ranking functions, we get nonlinear constraints [23].
Abstract interpretation has also been applied to segmented synthesis of ranking functions and is implemented in FuncTion [31,33,34]. In this series of work, a decision tree representation of ranking functions is used in [34] for better handling of disjunctions. Compared to their work, we believe that our method is more easily extensible to theories other than linear integer arithmetic, as long as those theories are supported by SMT solvers (although such extensions are out of the scope of this paper).
Other state-of-the-art termination verifiers include the following. Ultimate Automizer [21] implements an automata-based method: it repeatedly finds a trace and computes a termination argument that covers the trace, until the termination arguments cover the set of all traces; Büchi automata are used to handle such traces. AProVE [10,13] is based on term rewriting systems.

Conclusions and Future Work
In this paper, we proposed a novel decision tree-based synthesizer for ranking functions, integrated into the CEGIS architecture. The key observation was that we need to cope with explicit and implicit cycles contained in the given examples. We designed a decision tree learning algorithm based on the resulting theoretical observation, the cycle detection theorem. We implemented the framework and observed that its performance is comparable to state-of-the-art termination analyzers. In particular, it solved three benchmarks that no other tool solved, a result that demonstrates the potential of combining CEGIS, segmented synthesis, and transition examples.
We plan to extend our ranking function synthesizer to a synthesizer of piecewise affine ranking supermartingales. Ranking supermartingales [11] are a probabilistic version of ranking functions and are used to verify almost-sure termination of probabilistic programs.
We also plan to implement a mechanism that automatically selects a suitable set of halfspaces with which decision trees are built. In our ranking function synthesizer, intervals/octagons/octahedra/polyhedra can be used as the set of halfspaces. However, selecting an overly expressive set of halfspaces may cause overfitting [24] and result in poor performance. Therefore, heuristics that adjust the expressiveness of halfspaces based on the current examples may improve the performance of our tool.

Fig. 3: An example of CEGIS iterations. (iii) The synthesizer receives the updated set E = {R(1, 0)} of examples, finds a ranking function f(x) = x for E (i.e., for the transition from x = 1 to x = 0), and returns a candidate solution R(x, x′) = x > x′ ∧ x ≥ 0. (iv) The validator checks the candidate solution, finds a counterexample x = −2, x′ = −1 for (2), and adds R(−2, −1) to E. (v) The synthesizer finds a ranking function f(x) = |x| for E and returns R(x, x′) = |x| > |x′| ∧ |x| ≥ 0 as a candidate solution. Note that the synthesizer has to synthesize a piecewise affine function here, but details are deferred to Sect. 2.2. (vi) The validator accepts the candidate solution because it is a genuine solution for C.

Fig. 5: Selecting halfspaces. Transition examples are shown as red arrows; boundaries of halfspaces are shown as dashed lines.
Our synthesizer tries to find a ranking function of the form f(x) = ax + b (with the trivial segmentation), but no such ranking function exists. (3) Our synthesizer refines the current segmentation with (x ≥ 0) ∈ H, because x ≥ 0 "looks good". (4) Our synthesizer tries to find a ranking function of the form f(x) = if x ≥ 0 then ax + b else cx + d, using the current segmentation. It obtains f(x) = if x ≥ 0 then x else −x and uses this f(x) as a candidate solution.
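The candidate obtained in step (4) can be checked directly against the transition examples R(1, 0) and R(−2, −1) from Fig. 3; the snippet below is a small sanity check, not the implementation.

```python
def f(x):
    # Piecewise affine candidate: segmentation by the halfspace x >= 0.
    return x if x >= 0 else -x

def is_ranking_for(examples):
    """f is a ranking function for the examples if it is nonnegative at
    the source and strictly decreases along each transition."""
    return all(f(x) >= 0 and f(x) - f(x2) >= 1 for x, x2 in examples)

print(is_ranking_for([(1, 0), (-2, -1)]))  # -> True
```

No single affine f(x) = ax + b can decrease along both transitions, which is why the segmentation by x ≥ 0 is needed.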

Example 10. Consider the following C program: while (x > 0) { x = -2 * x + 9; } The non-termination problem is encoded as the pwCSP (C, R), where R = {S} and C consists of

… then σ is a genuine solution of the original pwCSP (C, R), so the validator accepts it. Otherwise, the validator adds new examples to the set E of examples, and the synthesizer is invoked again with the updated set E. If ¬σ(C) is satisfiable, new examples are constructed as follows. Using SMT solvers, the validator obtains an assignment θ to term variables such that |= ¬θ(ψ) holds for some ψ ∈ σ(C). By (4), |= ¬θ(ψ) is a clause of the form |= ¬θ

Case 2: non-crossing examples are unsatisfiable. The second case is when there is a leaf L ∈ S such that no affine (not piecewise affine) ranking function exists for the set E+_{L,L} of non-crossing examples. This prohibits the existence of a piecewise affine function f_D(x) for E+ with segmentation S = S(D), because the restriction of f_D(x) to L ∈ S must be an affine ranking function for E+_{L,L}.

Fig. 7: Scatter plots of runtime. Ultimate Automizer and AProVE sometimes gave up before the time limit; such cases are regarded as 300s.

Table 1: Ranking function synthesis by CEGIS

Table 2: Numbers of solved benchmarks