1 Introduction

Checking satisfiability of quantified formulae modulo the theory of linear (integer or real) arithmetic (LA) has applications to a broad class of problems (e.g., program verification and synthesis). Satisfiability modulo theories (SMT) solvers excel at deciding satisfiability of the ground (quantifier-free) fragment of first-order theories (e.g., LA). Other techniques, such as first-order theorem provers, work well for quantified formulae but have limited support for theories. Typically, SMT solvers either perform quantifier elimination, which is often computationally expensive, or heuristically instantiate quantifiers, which is sound but incomplete for deciding satisfiability [19]. Recently, decision procedures have been developed to check satisfiability of quantified LA formulae directly [4, 5, 8, 18]. Notably, both Bjørner and Janota [4]’s and Farzan and Kincaid [8]’s decision procedures are based on the game semantics of first-order logic.

The game semantics of first-order logic gives meaning to a formula as a two-player game [12]. Every (LA) formula induces a game between two players, SAT and UNSAT. SAT tries to prove the formula satisfiable, while UNSAT tries to prove it unsatisfiable. The players take turns instantiating quantifiers or choosing a sub-formula of a Boolean connective. SAT controls existential quantifiers and disjunctions, while UNSAT controls universal quantifiers and conjunctions. SAT wins the game if the valuation built up by the quantifier instantiations satisfies the atomic sub-formula that the play reaches; otherwise, UNSAT wins. A (LA) formula is satisfiable exactly when SAT has a winning strategy for the induced game—a function determining how SAT should instantiate existential quantifiers and choose disjuncts to prove the formula satisfiable.

The game-theoretic view of formulae suggests a variation of the satisfiability problem, in which the goal is not (just) to check satisfiability of a formula, but to synthesize a winning strategy for one of the two players. Strategy synthesis can be used as a decision procedure, but it can also be used for tasks where a simple yes or no answer is insufficient (e.g., program synthesis, angelic symbolic execution, or invariant generation).

While the game semantics of first-order logic gives meaning to both quantifiers and connectives, both Bjørner and Janota [4]’s and Farzan and Kincaid [8]’s decision procedures only make use of the game semantics of quantifiers. To do so, both techniques require the input formula to be in prenex normal form—a sequence of quantifiers followed by a quantifier-free formula. While any formula may be converted into prenex normal form, doing so is undesirable for two reasons: (1) conversion to prenex normal form may increase the number of quantifier alternations within the formula, and (2) conversion to prenex normal form may change the game semantics of the formula. Since prenex conversion does not preserve game semantics, it cannot be used in applications that rely on strategy synthesis rather than a yes/no answer.

Existing techniques for checking satisfiability of LA formulas are incapable of producing strategies for both quantifiers and Boolean connectives [4, 5, 8, 18]. While both Bjørner and Janota [4] and Farzan and Kincaid [8, 9] use the game semantics of LA formulas, they limit their scope to quantifiers via conversion to prenex normal form. Furthermore, the procedure of Bjørner and Janota does not produce an explicit term used to instantiate quantifiers. On the other hand, while the techniques of both Reynolds et al. [18] and Bonacina et al. [5] exploit the fine-grained (quantifier and Boolean connective) structure of formulas, they do not produce a winning strategy.

This paper presents a decision procedure for checking satisfiability of quantified LA formulae that exploits the fine-grained structure of a formula to produce a winning strategy for SAT or UNSAT covering both quantifiers and Boolean connectives. Our technique, fine-grained strategy improvement, uses the fine-grained structure of LA formulas to formulate a recursive procedure that iteratively improves a candidate strategy by computing winning strategies for induced sub-games. We generalize the notion of strategies and counter-strategy computation from Farzan and Kincaid [8] to handle quantifiers as well as connectives, and to allow computing counter-strategies with a fixed prefix (to enable the recursive nature of fine-grained strategy improvement). Fine-grained strategy improvement improves upon existing techniques by (1) avoiding conversion to prenex normal form and (2) allowing extraction of a proof object (a winning strategy) that determines exactly how the formula is proven to be (un)satisfiable.

For simplicity, the remainder of this paper provides details for linear rational arithmetic (LRA); however, the algorithmic details and game semantics provided in this paper are directly applicable to any theory that admits an appropriate term-selection function (cf. Sect. 5.1), including linear integer arithmetic (LIA). In Sect. 2, we review the game semantics for linear arithmetic [12] and its relation to LRA satisfiability. Sections 3, 4, and 5 present the procedure to compute winning strategy skeletons, whose existence proves or disproves LRA satisfiability. Section 6 shows how to compute a winning strategy from a strategy skeleton. Sections 7 and 8 compare this work to others. The extended version of this paper contains implementation details, proofs, and extended experimental results.

2 Fine-Grained Game Semantics for LRA Satisfiability

This section reviews the syntax (Sect. 2.1) of Linear Rational Arithmetic (LRA) and its game semantics (Sect. 2.2).

2.1 Linear Rational Arithmetic

The syntax of LRA is formed from two sets—Terms and Formulas. The grammar for terms and formulae parameterized over a set of variables X is as follows:

$$\begin{aligned} s,t \in {{\textbf {Term}}}(X) &::= c \in \mathbb {Q}~|~x\in X~|~s + t~|~c \cdot t\\ \varphi ,\psi \in {{\textbf {Formula}}}(X) &::= t < 0~|~t = 0~|~\varphi \wedge \psi ~|~\varphi \vee \psi ~|~\forall x.~\varphi ~|~\exists x.~\varphi \end{aligned}$$

Without loss of generality, this paper considers negation-free formulae and assumes that the variables bound by quantifiers within a formula are pairwise distinct. For a formula \(\varphi \), \( FV (\varphi )\) denotes its free variables. Similarly, \( FV (t)\) denotes the free variables of term t. A sentence is a LRA formula with no free variables. A ground formula is a quantifier-free formula, which may contain free variables.

A valuation, \(M : V \rightarrow \mathbb {Q}\), maps a finite set of variables, \(V \subseteq X\), to the rationals. We use \(\llbracket t \rrbracket ^M\) to denote the value of t within the valuation M—assuming \( FV (t) \subseteq dom (M)\)—with the usual interpretation. \(M \models \varphi \) denotes that M satisfies the formula \(\varphi \) (we say M is a model of \(\varphi \)).

For a valuation M, a variable x, and a rational constant c, \(M\{x \mapsto c\}\) denotes the valuation M except with x mapped to c.

$$M\{x \mapsto c\} \triangleq \lambda y. \text {if}~y = x~\text {then}~c~\text {else}~M(y)$$

For a formula \(\varphi \), variable x, and term t, \(\varphi [x \mapsto t]\) represents the formula obtained by substituting every free occurrence of x with t.
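To make this notation concrete, the following is a minimal sketch (our own encoding on top of Z3's Python API, not the paper's implementation) of valuations, the extension \(M\{x \mapsto c\}\), and the substitution \(\varphi [x \mapsto t]\) over LRA terms and formulas.

```python
# A small sketch (our own encoding, not the paper's implementation) of
# valuations, M{x -> c}, and phi[x -> t] over Z3's representation of LRA.
from z3 import Real, RealVal, substitute, simplify, is_true

x, y, z = Real('x'), Real('y'), Real('z')

def extend(M, var, c):
    """M{var -> c}: the valuation M with var (re)bound to the rational c."""
    M2 = dict(M)
    M2[var] = c
    return M2

def subst(phi, var, term):
    """phi[var -> term]: substitute term for every free occurrence of var."""
    return substitute(phi, (var, term))

def models(M, ground_formula):
    """M |= phi for a ground formula whose free variables are in dom(M)."""
    closed = substitute(ground_formula, *[(v, RealVal(c)) for v, c in M.items()])
    return is_true(simplify(closed))

M = extend(extend({}, x, 0), z, 1)      # {x -> 0, z -> 1}
print(models(M, x < z))                 # True
print(subst(x < y, y, (x + z) / 2))     # x < (x + z)/2
```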

2.2 Fine-Grained Game Semantics

Hintikka [12] provides a thorough introduction to the game semantics of first-order formulae. Every LRA sentence defines a satisfiability game, which is played between two players: SAT and UNSAT. The players take turns choosing instantiations for quantifiers and sub-formulae of connectives. SAT controls the choices for existential quantifiers and disjunctions, while UNSAT controls universal quantifiers and conjunctions.

Formally, a state of a LRA-satisfiability game for a LRA-sentence \(\varphi \) is \(G(\psi ,M)\), where \(\psi \) is a sub-formula of \(\varphi \) and M is a valuation. The initial state of the satisfiability game for \(\varphi \) is \(G(\varphi ,\emptyset )\). The rules of the game are given below, under the assumption that \(FV(\psi ) \subseteq dom (M)\).

$$\begin{aligned} G(t < 0, M) &~\text {SAT} \text { wins if}~M \models t < 0~\text {. Otherwise, }\text {UNSAT} \text { wins}. \\ G(t = 0, M) &~\text {SAT} \text { wins if}~M \models t = 0~\text {. Otherwise, }\text {UNSAT} \text { wins}. \\ G(\varphi \wedge \psi , M) &~\text {UNSAT} \text { chooses to either play}~G(\varphi ,M)~\text {or}~G(\psi , M). \\ G(\varphi \vee \psi , M) &~\text {SAT} \text { chooses to either play}~G(\varphi , M)~\text {or}~G(\psi , M). \\ G(\forall x. \varphi , M) &~\text {UNSAT} \text { picks}~c \in \mathbb {Q}~\text {and then plays}~G(\varphi , M\{x \mapsto c\}). \\ G(\exists x. \varphi , M) &~\text {SAT} \text { picks}~c \in \mathbb {Q}~\text {and then plays}~G(\varphi , M\{x \mapsto c\}). \end{aligned}$$

A strategy for SAT or UNSAT determines that player’s next move as a function of all the moves previously played. In the above definition of a LRA-satisfiability game, the state \(G(\psi , M)\) implicitly represents the moves made so far. This is made explicit by representing a play of the game as a sequence of rational numbers (instantiating quantifiers) and the labels \(L\) and \(R\) (choosing the left or right branch of a disjunction or conjunction). For the formula \(\varphi \) and play \(\pi \), we represent the sub-formula and valuation forming the state of the game after playing \(\pi \) as \(\varphi ^\pi \) and \(M^\pi \), respectively. Both are defined as follows:

$$\begin{aligned}\begin{gathered} \begin{aligned} \varphi ^\epsilon &\triangleq \varphi & (\forall x. \varphi )^{c\cdot \pi } &\triangleq \varphi ^\pi & (\exists x. \varphi )^{c\cdot \pi } &\triangleq \varphi ^\pi \\ M^\epsilon &\triangleq \emptyset & M^{c\cdot \pi } &\triangleq M^\pi \{x \mapsto c\} & M^{c\cdot \pi } &\triangleq M^\pi \{x \mapsto c\} \end{aligned}\\ \begin{aligned} (\varphi \wedge \psi )^{L \cdot \pi } &\triangleq \varphi ^\pi & (\varphi \wedge \psi )^{R \cdot \pi } &\triangleq \psi ^\pi & (\varphi \vee \psi )^{L \cdot \pi } &\triangleq \varphi ^\pi & (\varphi \vee \psi )^{R \cdot \pi } &\triangleq \psi ^\pi \\ M^{L \cdot \pi } &\triangleq M^\pi & M^{R \cdot \pi } &\triangleq M^\pi & M^{L \cdot \pi } &\triangleq M^\pi & M^{R \cdot \pi } &\triangleq M^\pi \\ \end{aligned} \end{gathered}\end{aligned}$$

If \(\varphi ^\pi \) does not evaluate using the above rules, then \(\pi \) is an illegal play and \(\varphi ^\pi \) is undefined. In the remainder of this paper, we use “play” to mean “legal play.” A play \(\pi \) is complete when \(\varphi ^\pi \) is an atom (neither player has any move to make). For any complete play \(\pi \), SAT wins if and only if \(M^\pi \models \varphi ^\pi \). Similarly, UNSAT wins if and only if \(M^\pi \not \models \varphi ^\pi \).
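The following self-contained sketch (using its own toy representation of formulae rather than the paper's data structures) computes \(\varphi ^\pi \) and \(M^\pi \) for a play and decides the winner of a complete play; the formula used is the one that serves as the running example in Sect. 3 (Example 1).

```python
# A self-contained sketch (own toy AST, not the paper's code) of the residual
# formula phi^pi and valuation M^pi after a play pi, and of the winner of a
# complete play.  Atoms are predicates over the valuation; bound variables
# are named by strings.
L, R = 'L', 'R'

def residual(phi, pi, M=None):
    """Return (phi^pi, M^pi); raise ValueError if pi is an illegal play."""
    M = dict(M or {})
    for move in pi:
        kind = phi[0]
        if kind in ('forall', 'exists'):   # move is a rational for the bound variable
            _, var, body = phi
            M[var] = move
            phi = body
        elif kind in ('and', 'or'):        # move is the label L or R
            _, left, right = phi
            phi = left if move == L else right
        else:
            raise ValueError('illegal play')
    return phi, M

def sat_wins(phi, pi, M=None):
    """For a complete play pi, SAT wins iff M^pi |= phi^pi."""
    atom, Mp = residual(phi, pi, M)
    assert atom[0] == 'atom'
    return atom[1](Mp)

# The formula of Example 1 (Sect. 3):
# forall x, z. x = z \/ exists y. (x < y /\ y < z) \/ (z < y /\ y < x)
phi = ('forall', 'x', ('forall', 'z',
        ('or', ('atom', lambda M: M['x'] == M['z']),
               ('exists', 'y',
                 ('or', ('and', ('atom', lambda M: M['x'] < M['y']),
                                ('atom', lambda M: M['y'] < M['z'])),
                        ('and', ('atom', lambda M: M['z'] < M['y']),
                                ('atom', lambda M: M['y'] < M['x'])))))))

print(sat_wins(phi, [0, 1, R, 0.5, L, L]))   # True: the play ends in 0 < 0.5
```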

For any formula \(\varphi \), \(\lnot \varphi \) denotes the negation-free formula equivalent to the negation of \(\varphi \). The sentence \(\lnot \varphi \) induces the dual satisfiability game of \(\varphi \) – a game played in the same manner as the game for \(\varphi \) but with the roles of SAT and UNSAT swapped. This duality is used to define terminology and algorithms explicitly for SAT and implicitly for UNSAT as the corresponding SAT version for \(\lnot \varphi \).

Definition 1

(Strategy). Let \(\mathcal {M} = \mathbb {Q} \cup \{L, R \}\) be the set of all moves, \(f : \mathcal {M}^* \rightarrow \mathcal {M}\) be a partial function from sequences of moves to a move, and \(\pi \) a sequence of moves. The play \(\pi \) conforms to f exactly when \(\pi _i = f(\pi _1, \dots , \pi _{i-1})\) whenever \(f(\pi _1, \dots , \pi _{i-1})\) is defined.

Let \(\varphi \) be a LRA-sentence. A SAT strategy for \(\varphi \) is a partial function \(f : \mathcal {M}^* \rightarrow \mathcal {M}\) with the property that for any play \(\pi \) that conforms to f, (1) if \(\varphi ^\pi \) is \(F \vee G\) then \(f(\pi )\) is defined and \(f(\pi ) \in \{L,R \}\), and (2) if \(\varphi ^\pi \) is \(\exists x.F\) then \(f(\pi )\) is defined and \(f(\pi ) \in \mathbb {Q}\).

The SAT strategy f is winning if every complete play that conforms to f is won by SAT. It is well-known that \(\varphi \) is satisfiable if and only if SAT has a winning strategy.

3 Fine-Grained Strategy Skeletons

This section defines fine-grained SAT strategy skeletons, which form the basis of our fine-grained strategy improvement algorithm (cf. Algorithm 1). A SAT strategy skeleton is an abstraction that represents multiple possible strategies that SAT may choose. Recall that in Sect. 2.2 we defined a strategy to be a function that maps a play of a satisfiability game to the next move of the game. A strategy skeleton similarly maps a play of the satisfiability game to a finite set of possible moves to play next. At a high level, the strategy improvement algorithm iteratively finds better and better strategy skeletons via the computation of counter-strategy skeletons (cf. Sect. 5).

Example 1

To illustrate fine-grained strategy skeletons and the algorithms presented in this paper consider the formula \(\varphi \) which we use as a running example throughout this paper:

$$\begin{aligned} \varphi \triangleq \forall x, z.~(x = z \vee (\exists y.~(x < y \wedge y < z) \vee (z < y \wedge y < x))) \end{aligned}$$
[Figure: the SAT strategy skeleton S for the running example \(\varphi \)]

The formula \(\varphi \) expresses the fact that for any pair of rational numbers x and z, either x and z are equal or there is some value y between x and z. To the right, we display a SAT strategy skeleton for \(\varphi \) which we call S. The two \(\bullet \) symbols act as placeholders for the values chosen by UNSAT for the quantified variables x and z. The skeleton encodes that no matter what values (\(\bar{x}\) and \(\bar{z}\)) UNSAT chooses to instantiate x and z with, SAT chooses to play the left branch of the disjunction leading to the atom \(x = z\)—at the end of the path we display \(\bar{x} = \bar{z}\), which is this atom after substituting UNSAT ’s placeholder values for the universally bound variables.

As seen in Examples 1 and 4, SAT skeletons are tree-like structures that follow the structure of \(\varphi \). Formally, SAT strategy skeletons for a LRA-satisfiability game \(\varphi \) are represented as sets of paths. We use \( SKEL (\varphi , \textit{vars})\) to denote the set of SAT strategy skeletons for \(\varphi \) whose terms may range over the set of variables \(\textit{vars} \). For a sub-skeleton of a sentence, \(\textit{vars} \) represents the set of variables that are in scope in \(\varphi \). The set of strategy skeletons for a sentence is thus \( SKEL (\varphi , \emptyset )\). For a set of paths S, \(\ell \cdot S = \{\ell \cdot \pi : \pi \in S \}\) denotes the set obtained by prepending each path in S with the label \(\ell \). Similarly, we define \(\pi \Downarrow S = \{\pi ' : \pi \cdot \pi ' \in S\}\) to be the set of suffixes of \(\pi \) appearing in S. Formally, a skeleton is a subset of \(({\textbf {Term}}(X) \cup \{\bullet , L, R \})^*\) (whose specific form depends on the formula \(\varphi \)). We define \( SKEL \) as the least solution to the following set of rules:

[Figure: rules defining \( SKEL (\varphi , \textit{vars})\)]

Just as strategies can be thought of as a collection of plays, strategy skeletons can be thought of as a collection of strategies. Similar to strategies and plays, we can determine when a strategy conforms to a strategy skeleton. We say a SAT strategy f conforms to a strategy skeleton S when every complete play \(\pi \) conforming to f conforms to S. A play \(\pi \) conforms to S, if there is some path \(\rho \in S\) such that \(|\pi | = |\rho |\) and for each i we have (1) \(\varphi ^{\pi _0,\dots ,\pi _{i-1}} = \exists x. \psi \) for some \(\psi \) and \(\llbracket x \rrbracket ^{M^{\pi }} = \llbracket \rho _i \rrbracket ^{M^\pi }\), or (2) \(\varphi ^{\pi _0,\dots ,\pi _{i-1}}\) is a disjunctive or conjunctive formula and \(\pi _i = \rho _i\), or (3) \(\varphi ^{\pi _0,\dots ,\pi _{i-1}}\) is a universally quantified formula and \(\rho _i = \bullet \). A strategy skeleton is winning if there is a winning strategy that conforms to it.

In order to develop a decision procedure that produces a winning strategy skeleton, we first turn to the problem of determining if a SAT skeleton S for the LRA satisfiability game \(G(\varphi , M)\) is winning. To determine if S wins the game \(G(\varphi , M)\), we check whether the losing formula \(\text {lose}(S, \varphi )\) is satisfied by M (S wins \(G(\varphi , M)\) if and only if \(M \not \models \text {lose}(S, \varphi )\)). This formulation results in a formula that is existentially quantified and can be easily Skolemized to a quantifier-free formula and checked with an off-the-shelf SMT solver. Furthermore, we show in Sect. 5 that a model of the Skolemized formula can be used to construct an UNSAT strategy skeleton for \(\varphi \) that beats S. We define \(\text {lose}(S,\varphi )\) as follows:

$$\begin{aligned} \text {lose}(\emptyset , \varphi ) \triangleq & \textit{true}\\ \text {lose}(\{\epsilon \}, \varphi ) \triangleq & \lnot \varphi \\ \text {lose}(S, \varphi \vee \psi ) \triangleq & \text {lose}(L \Downarrow S, \varphi ) \wedge \text {lose}(R \Downarrow S, \psi )\\ \text {lose}(S, \varphi \wedge \psi ) \triangleq & \text {lose}(L \Downarrow S, \varphi ) \vee \text {lose}(R \Downarrow S, \psi )\\ \text {lose}(S, \exists x. \varphi ) \triangleq & \bigwedge _{t \cdot \pi \in S} \text {lose}(t \Downarrow S, \varphi )[x \mapsto t]\\ \text {lose}(S, \forall x. \varphi ) \triangleq & \exists x. \text {lose}(\bullet \Downarrow S, \varphi ) \end{aligned}$$
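The definition can be transliterated almost directly; the sketch below is our own encoding on top of Z3 (not the paper's implementation), with formulae as tagged tuples over Z3 atoms, skeletons as sets of move tuples, and the satisfiability check of the losing formula delegated to Z3.

```python
# A sketch of lose(S, phi) (own encoding, not the paper's implementation).
# Formulae are tagged tuples over Z3 atoms and Z3 constants for bound
# variables; skeletons are sets of tuples over {Z3 term, BULLET, 'L', 'R'}.
from z3 import Real, And, Or, Not, Exists, BoolVal, substitute, Solver

BULLET = '*'

def key(move):
    return move if isinstance(move, str) else move.sexpr()

def suffixes(S, move):                      # move ⇓ S
    return {p[1:] for p in S if p and key(p[0]) == key(move)}

def head_terms(S):                          # distinct leading terms of paths in S
    seen, out = set(), []
    for p in S:
        if p and not isinstance(p[0], str) and key(p[0]) not in seen:
            seen.add(key(p[0]))
            out.append(p[0])
    return out

def lose(S, phi):
    if not S:
        return BoolVal(True)
    kind = phi[0]
    if kind == 'atom':                      # S = {epsilon}
        return Not(phi[1])
    if kind == 'or':
        return And(lose(suffixes(S, 'L'), phi[1]), lose(suffixes(S, 'R'), phi[2]))
    if kind == 'and':
        return Or(lose(suffixes(S, 'L'), phi[1]), lose(suffixes(S, 'R'), phi[2]))
    if kind == 'exists':
        _, x, body = phi
        return And([substitute(lose(suffixes(S, t), body), (x, t))
                    for t in head_terms(S)])
    if kind == 'forall':
        _, x, body = phi
        return Exists([x], lose(suffixes(S, BULLET), body))
    raise ValueError(kind)

# The running example with the skeleton S of Example 1 (always play left):
x, z, y = Real('x'), Real('z'), Real('y')
phi = ('forall', x, ('forall', z,
        ('or', ('atom', x == z),
               ('exists', y, ('or',
                 ('and', ('atom', x < y), ('atom', y < z)),
                 ('and', ('atom', z < y), ('atom', y < x)))))))
S = {(BULLET, BULLET, 'L')}
s = Solver()
s.add(lose(S, phi))
print(s.check())                            # sat, so S is not a winning skeleton
```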

If M satisfies the losing formula \(\text {lose}(S, \varphi )\), then S is not a winning strategy skeleton for the game \(G(\varphi , M)\). Intuitively, this implies that UNSAT can beat SAT if SAT plays according to any strategy conforming to S. We use this intuition to formalize when an UNSAT strategy skeleton U beats the SAT skeleton S.

Definition 2

(Counter Strategy). Fix a LRA-satisfiability game \(\varphi \), play \(\pi \) of \(\varphi \), SAT skeleton S for \(\varphi ^\pi \), and UNSAT skeleton U for \(\varphi ^\pi \). U is a counter-strategy of S (U beats S), if there is some strategy g conforming to U such that for every strategy f conforming to S, UNSAT wins every complete play \(\pi \pi '\) such that \(\pi '\) conforms to both f and g.

Crucially, it cannot be the case that both U beats S and S beats U. This asymmetry ensures that the strategy improvement algorithm makes progress towards verifying or falsifying the formula \(\varphi \).

[Figure or algorithm listing omitted]

Example 2

Recall the initial strategy skeleton S from Example 1, in which SAT always chose the branch with the atom \(x = z\) no matter what values UNSAT chose for x and z. The losing formula of S is \( lose (S) \triangleq \bar{x} \ne \bar{z}\), which summarizes the choices of \(\bar{x}\) and \(\bar{z}\) that UNSAT may make to falsify the atom \(x = z\) that SAT chose. The losing formula of S is satisfiable—e.g., with the model \(M = \{\bar{x} \mapsto 0, \bar{z} \mapsto 1\}\). Since the losing formula is satisfiable, there must be some counter-strategy that beats S. One such counter-strategy U is depicted to the right—remember that the UNSAT strategy U is a SAT strategy for the formula \(\lnot \varphi \). As in Example 1, U is annotated with additional labels: terms are labeled with the existential quantifier they instantiate, each \(\bullet \) is annotated with the corresponding Skolem constant from \(\text {lose}(S, \varphi )\), and conjunctions are grouped and highlighted to visually distinguish conjunctive branches from disjunctive branches. Finally, each leaf of the skeleton is labeled with the atomic formula reached after substituting the terms and Skolem constants for each quantified variable.

The skeleton U states that UNSAT will always choose 0 to instantiate x and 1 to instantiate z. If SAT chooses the left branch, then the play is over and UNSAT wins. Otherwise, SAT chooses the right branch and a symbolic value \(\bar{y}\) to instantiate y. Then SAT chooses to play either the left or the right branch of the resulting sub-game. If SAT chooses left, then UNSAT chooses to play the left sub-game and the play ends in the atom \(0 \ge \bar{y}\). Otherwise, when SAT plays the right sub-game, UNSAT chooses to play the resulting left sub-game and the play ends in the atom \(1 \ge \bar{y}\).

Proposition 1

Let S be a SAT strategy skeleton for the game \(G(\varphi , M)\). S is winning if and only if \(M \not \models \text {lose}(S, \varphi )\).

[Figure or algorithm listing omitted]

4 Fine-Grained Strategy Improvement

This section presents an algorithm for deciding LRA satisfiability (Algorithm 1). At a high level, the algorithm produces a winning strategy skeleton via fine-grained strategy improvement. Algorithm 1 iteratively improves the current player (SAT)’s strategy. Each iteration attempts to compute a counter-strategy for the opposing player (UNSAT), fixes the opposing player’s initial moves, and recursively solves the resulting sub-game. If the opposing player wins the sub-game, then they win the game, and a winning strategy can be constructed using the synthesized initial moves and the winning strategy for the sub-game. If the opposing player loses the sub-game, the current player’s winning strategy for the sub-game is used to improve their strategy. The algorithm then proceeds to the next iteration of the current game and repeats until a winning player can be determined.

Algorithm 1 assumes that UNSAT makes the first move in the game \(G(\varphi , M)\). If SAT would instead play first, Algorithm 1 may be applied to \(\lnot \varphi \) and the result negated. The first step of the strategy improvement algorithm initializes a SAT skeleton. Any SAT skeleton \(S \in SKEL (\varphi , \emptyset )\) may be used.

After initialization, the algorithm checks whether a counter-strategy exists (cf. Sect. 5). If there is no counter-strategy, then SAT ’s current skeleton S must be winning. Otherwise, UNSAT has a counter-strategy U that beats S. The auxiliary function \( peel \) uses \(\varphi \) and U to compute \(\pi '\)—the leading universal and conjunctive moves—and \(U'\)—the remaining skeleton (i.e., \(U = \pi ' \cdot U'\)). The algorithm continues by fixing the moves in \(\pi '\) and having the players swap roles while solving the resulting sub-game \(\lnot \varphi ^{\pi '}\). Formally, \( peel \) is defined as follows:

$$\begin{aligned} peel (\forall x. F, U) &\triangleq \left\langle t\cdot \pi , U'\right\rangle {} & {} \text { where } \left\langle \pi , U'\right\rangle = peel (F, U'') \text { and } U = t \cdot U''\\ peel (F \wedge G, U) &\triangleq \left\langle L \cdot \pi , U'\right\rangle {} & {} \text { where } \left\langle \pi , U'\right\rangle = peel (F, U'') \text { and } U = L \cdot U''\\ peel (F \wedge G, U) &\triangleq \left\langle R \cdot \pi , U'\right\rangle {} & {} \text { where } \left\langle \pi , U'\right\rangle = peel (G, U'') \text { and } U = R \cdot U''\\ peel (\varphi , U) &\triangleq \left\langle \epsilon , U\right\rangle {} & {} \text { otherwise} \end{aligned}$$

By construction, the leading UNSAT moves of a counter-strategy form a single path—Algorithm 3 only chooses a single term or conjunct when constructing a counter-strategy—which ensures that \( peel \) is well defined. After peeling off the leading universal and conjunctive moves from U, the algorithm recursively solves the resulting sub-game (from the point of view of UNSAT, by recursing on \(\lnot \varphi ^{\pi '}\) instead of \(\varphi ^{\pi '}\)).
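A direct transliteration of \( peel \) is sketched below (our own skeleton representation; terms are stood in for by plain Python values, and the single-path property of the leading moves is assumed).

```python
# A sketch of peel (own skeleton representation, not the paper's code).
# U is a set of move tuples whose leading universal and conjunctive moves
# form a single path, as guaranteed by Algorithm 3.
def peel(phi, U):
    head = next((p[0] for p in U if p), None)
    if phi[0] == 'forall':                       # U = t . U''
        pi, U2 = peel(phi[2], {p[1:] for p in U})
        return [head] + pi, U2
    if phi[0] == 'and':                          # U = L . U''  or  U = R . U''
        branch = phi[1] if head == 'L' else phi[2]
        pi, U2 = peel(branch, {p[1:] for p in U})
        return [head] + pi, U2
    return [], U

# Shape of the running example, with a counter-strategy whose leading moves
# instantiate x with 0 and z with 1 (the residual paths are left abstract):
phi_shape = ('forall', 'x', ('forall', 'z', ('or', None, None)))
U = {(0, 1, 'L'), (0, 1, 'R')}          # stand-ins for the residual skeleton
print(peel(phi_shape, U))               # ([0, 1], {('L',), ('R',)}), up to set order
```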

After the recursive call, either SAT or UNSAT has a winning skeleton to \(G(\lnot \varphi ^{\pi '}, M^{\pi '})\). If SAT wins \(G(\lnot \varphi ^{\pi '}, M^{\pi '})\) with the skeleton \(U''\), then UNSAT must win \(G(\varphi ^{\pi '}, M^{\pi '})\) with the UNSAT skeleton \(U''\). Since UNSAT controls the initial moves \(\pi '\), we may conclude that UNSAT wins the entire game \(G(\varphi , M)\) and return the winning UNSAT skeleton \(\pi ' \cdot U''\).

Otherwise, UNSAT wins \(G(\lnot \varphi ^{\pi '}, M^{\pi '})\) with the skeleton \(S'\). The sub-skeleton \(S'\) can be extended to counter U by prepending every path of \(S'\) with the “negation” of \(\pi '\), the initial moves of UNSAT. Note that by construction, \(\pi '\) consists only of terms instantiating universal quantifiers or \(L \) or \(R \) denoting a choice of a conjunctive branch. We define the negation of \(\pi '\) as follows: each term in \(\pi '\) is replaced with a \(\bullet \)—i.e., \((\lnot \pi ')_i = \pi '_i\) if \(\pi '_i \in \{L,R \}\), otherwise \((\lnot \pi ')_i = \bullet \). Technically, \((\lnot \pi ') \cdot S'\) is not a skeleton when \(\pi '\) contains conjunctive moves—the resulting set of paths only covers one of the branches, while a SAT skeleton must cover both branches of a conjunction—however, when unioned with S, the initial skeleton for SAT, the final result is a skeleton that counters U.
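Putting these pieces together, the loop can be sketched as follows. This is a high-level reconstruction from the prose of this section rather than the actual listing of Algorithm 1, and the helpers counter_strategy, residual, extend_valuation, negate_formula, negate_prefix, and prepend are assumed rather than defined.

```python
# A high-level sketch of fine-grained strategy improvement, reconstructed from
# the prose above; the real Algorithm 1 may differ in details, and the helpers
# counter_strategy, residual, extend_valuation, negate_formula, negate_prefix,
# and prepend are assumed rather than defined here.
def improve(phi, M, S):
    """Decide the game G(phi, M), assuming UNSAT moves first and S is an
    initial SAT skeleton; returns ('SAT', skeleton) or ('UNSAT', skeleton)."""
    while True:
        U = counter_strategy(phi, M, S)        # Sect. 5 (Algorithms 2 and 3)
        if U is None:                          # no counter-strategy: S is winning
            return ('SAT', S)
        pi, U1 = peel(phi, U)                  # leading universal/conjunctive moves
        # Swap roles and solve the sub-game induced by fixing the prefix pi.
        winner, W = improve(negate_formula(residual(phi, pi)),
                            extend_valuation(M, pi), U1)
        if winner == 'SAT':
            # UNSAT wins the sub-game of phi, hence the whole game.
            return ('UNSAT', prepend(pi, W))
        # Otherwise W beats U1: extend it with the "negation" of pi, add it to
        # S, and try again with the improved skeleton.
        S = S | prepend(negate_prefix(pi), W)
```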

Example 3

 

[Figure: the counter-strategy skeleton U used in Example 3]

Continuing Example 1, let us suppose that we begin Algorithm 1 with the SAT strategy skeleton S depicted in Example 1 (which simply takes the left branch of the disjunction). S is not winning, since the UNSAT player may choose different values for x and z to invalidate the equality—one such counter-strategy U for S appears above. After using \( peel \) to construct \(\pi '\) and \(U'\), Algorithm 1 recursively solves the sub-game \(\lnot \varphi ^{\pi '} \triangleq x \ne z \wedge \forall y.~((x \ge y) \vee (y \ge z)) \wedge ((z \ge y) \vee (y \ge x))\) starting with \(U'\) and the model \(M^{\pi '} = \{x \mapsto 0, z \mapsto 1\}\). The sub-game is played with the roles of the two players switched—the recursive call uses the formula \(\lnot \varphi ^{\pi '}\) rather than \(\varphi ^{\pi '}\)—thus, \(U'\) is a SAT skeleton for the resulting sub-game, and the assumption that the top-level connective of \(\varphi \) is controlled by UNSAT is maintained.

The recursive call returns that UNSAT won the game \(G(\lnot \varphi ^{\pi '}, M^{\pi '})\) with the skeleton \(S'\). The skeleton \(S'\) instantiates y with the average of x and z and chooses the left disjunct \(x< y < z\), which clearly beats \(U'\) when x is 0 and z is 1.

[Figure: skeletons from the subsequent iterations of Example 3]

While \(S_1\) counters U, it is not yet winning. SAT will lose any play where UNSAT instantiates x and z such that \(z < x\). On the next iteration of the game, the algorithm finds \(U_1\), a counter-skeleton to \(S_1\). Just as before, the procedure splits apart \(U_1\) and solves the induced sub-game. The procedure finds that SAT wins the sub-game with the skeleton \(S'_1\). The new skeleton is extended and combined with \(S_1\) to form \(S_2\), and the current game \(\varphi \) continues starting from \(S_2\); however, on the next iteration, the procedure determines that \(S_2\) has no counter-strategy and is thus a winning SAT skeleton for the game \(\varphi \).

Theorem 1

Algorithm 1 is a decision procedure for LRA satisfiability.

[Figure or algorithm listing omitted]

5 Computing Counter-Strategies

When a strategy skeleton is not winning—its losing formula is satisfiable—the opposing player must have a counter-strategy that beats every strategy that conforms to the strategy skeleton. Given a model of the losing formula, this section shows how to construct such a counter-strategy skeleton.

At a high level, Algorithm 1 uses Algorithm 2 to (1) determine if a strategy skeleton S is winning and (2) if S is not winning, return a counter-strategy U that beats S (returning none if S is winning). Given a LRA satisfiability game \(G(\varphi , M)\) and skeleton S, Algorithm 2 computes (a formula equisatisfiable to) \(\text {lose}(S, \varphi )\), then uses Algorithm 3 to synthesize a counter-strategy to S if \(\text {lose}(S, \varphi )\) is satisfied by M.

Algorithm 2 first constructs the losing formula. The first step introduces a fresh Skolem constant for each path to a universal quantifier and a fresh Boolean variable for each path to a conjunct within \(\varphi \). The produced formula is equisatisfiable to the losing formula described in Sect. 3; by existentially quantifying the introduced Boolean variables and Skolem constants, \( lose \) becomes logically equivalent to \( lose (S, \varphi )\). The introduced Boolean variables enable an explicit encoding of UNSAT ’s choice of conjunct within the losing formula. This allows a model of the losing formula (if one exists) to explicitly track which branch UNSAT has a counter-strategy for. The algorithm computes the losing formula on a path-by-path basis. It does so by computing when SAT could win the given path and taking its negation (i.e., \((\lnot win ) \wedge conds \)). We use \( win \) to denote whether the path could be won by SAT—note that for conjunctions SAT must be able to win both of the conjuncts—and \( conds \) to constrain the introduced Boolean variables (i.e., \(H(\pi )\) represents whether the sub-skeleton rooted at \(\pi \) is winning). After constructing \( lose \), Algorithm 2 checks if the formula is satisfiable. If \( lose \) is unsatisfiable, then S is a winning skeleton for the (sub-)game \(\varphi \) and has no counter-strategy. Otherwise, there is a model of \( lose \) that can be used with Algorithm 3 to produce an UNSAT skeleton that beats S.

[Figure or algorithm listing omitted]

Algorithm 3 recursively decomposes S and \(\varphi \) to produce a counter-strategy. Before recursing, a model of the bound variables (\(M^\pi \)) and the path-prefix \(\pi \) is constructed. For universals, the valuation is extended using the model of \( lose \), and for existentials the valuation is extended by evaluating the term instantiating the quantifier using the model of the previously bound variables. To ensure that the recursive call produces a counter-skeleton that beats the sub-skeleton of S, M must be a model of the losing formula of the sub-skeleton. Whenever \(\varphi \) is not conjunctive this is trivially true, as \(\text {lose}(S, \varphi )\) is a conjunction of the losing formulae for the sub-skeletons; the model of the parent formula is therefore also a model of all sub-formulae. In the case when \(\varphi \) is a conjunction, the Boolean variables introduced to construct the losing formula determine which of the sub-formulae are also modeled by M. If the introduced Boolean variable for a given conjunct is false in the model M, then the given conjunct also evaluates to false in the given model. This condition ensures that M is also a model of the losing formula of the sub-skeleton and thus that the recursive call computes a counter-strategy to the given sub-skeleton. The algorithm then constructs a counter-strategy on the way back up. For atomic formulae, there is only one possible strategy, the empty strategy. For conjunctions, the counter-strategy simply extends a counter-strategy for the left or right branch of the conjunct, depending on which branch has a counter-strategy in model M—if both have a counter-strategy in model M, taking either (or both) produces a counter-strategy. For disjunctions, a counter-strategy combines a counter-strategy for both the left and right disjuncts; if the skeleton S only takes one of the two branches, then any skeleton suffices as the counter-strategy for the branch S does not take. For universal quantifiers, we use model-based term selection to select a term t to instantiate x such that t satisfies the same atoms of G within the given model \(M^{\pi \bullet }\) (cf. Sect. 5.1). For existentials, we construct a counter-strategy as the union of a counter-strategy for each choice SAT may make.

5.1 Term Selection

When generating a counter-strategy, Algorithm 3 makes use of the auxiliary function select to select a term t to instantiate x. The function select is a (model-guided) term selection function [8].

Given a formula F, a variable \(x \in FV(F)\) free in F, and a model \(M \models F\) of F, we require select(M, x, F) to return a term t over the free variables of F excluding x (i.e., \(FV(t) \subseteq FV(F) \setminus \{ x \}\)) such that M satisfies F when t is substituted for x (i.e., \(M \models F[x \mapsto t]\)). Furthermore, to ensure that Algorithm 1 is a decision procedure, we require that for any formula F and variable \(x \in FV(F)\), select has finite image (i.e., the set \(\{{\textbf {select}}(M, x, F) : M \models F\}\) is finite).

For LRA, we define select as follows. Without loss of generality, we assume that any atom of F that contains x is written as \(x = s\), \(x < s\), or \(x > s\) for some s. Let EQ(M, x, F) contain the term s if and only if \(x = s\) is an atom of F and \(\llbracket x \rrbracket ^M = \llbracket s \rrbracket ^M\). Similarly, UB(M, x, F) contains the term s if and only if \(x < s\) is an atom of F and \(\llbracket x \rrbracket ^M < \llbracket s \rrbracket ^M\). Finally, let LB(M, x, F) contain the term s if and only if \(x > s\) is an atom of F and \(\llbracket x \rrbracket ^M > \llbracket s \rrbracket ^M\). Furthermore, if EQ(M, x, F) is not empty, let eq(M, x, F) be any \(s \in EQ(M, x, F)\). If UB(M, x, F) is not empty, then let lub(M, x, F) be a term \(s \in UB(M, x, F)\) such that for any other \(s' \in UB(M, x, F)\), \(\llbracket s \rrbracket ^M \le \llbracket s' \rrbracket ^M\). Similarly, if LB(M, x, F) is not empty, then let glb(M, x, F) be a term \(s \in LB(M, x, F)\) such that for any other \(s' \in LB(M, x, F)\), \(\llbracket s \rrbracket ^M \ge \llbracket s' \rrbracket ^M\).

$$\begin{aligned} {\textbf {select}} = {\left\{ \begin{array}{ll} eq(M, x, F) &{} \text { if }EQ(M, x, F) \ne \emptyset \\ \frac{1}{2}(lub(M, x, F) + glb(M, x, F)) &{} \text { if } UB(M, x, F) \ne \emptyset \text { and } LB(M, x, F) \ne \emptyset \\ lub(M, x, F) - 1 &{} \text { if } UB(M, x, F) \ne \emptyset \\ glb(M, x, F) + 1 &{} \text { if } LB(M, x, F) \ne \emptyset \\ 0 &{} \text { otherwise} \end{array}\right. } \end{aligned}$$
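Under the simplifying assumption that the atoms of F mentioning x have already been rewritten into the normal form above and are handed in as (relation, term) pairs, select can be sketched as follows (a sketch, not the paper's implementation).

```python
# A sketch of select(M, x, F) for LRA.  We assume (a simplification, not the
# paper's code) that the atoms of F mentioning x are given as pairs
# ('=', s), ('<', s), ('>', s) over Z3 real terms, and that M is a Z3 model.
from fractions import Fraction
from z3 import Real, RealVal, Q, Solver, sat

def rval(model, term):
    """Evaluate a real-valued term in the model as an exact Fraction."""
    v = model.eval(term, model_completion=True)
    return Fraction(v.numerator_as_long(), v.denominator_as_long())

def select(model, x, atoms):
    xv = rval(model, x)
    EQ = [s for rel, s in atoms if rel == '=' and rval(model, s) == xv]
    UB = [s for rel, s in atoms if rel == '<' and xv < rval(model, s)]
    LB = [s for rel, s in atoms if rel == '>' and xv > rval(model, s)]
    if EQ:
        return EQ[0]
    if UB and LB:
        lub = min(UB, key=lambda s: rval(model, s))
        glb = max(LB, key=lambda s: rval(model, s))
        return (lub + glb) / 2
    if UB:
        return min(UB, key=lambda s: rval(model, s)) - 1
    if LB:
        return max(LB, key=lambda s: rval(model, s)) + 1
    return RealVal(0)

# F = x < y /\ y < z, selecting a term for y under the model {x: 0, y: 1/2, z: 1}
x, y, z = Real('x'), Real('y'), Real('z')
s = Solver()
s.add(x < y, y < z, x == 0, y == Q(1, 2), z == 1)
assert s.check() == sat
print(select(s.model(), y, [('>', x), ('<', z)]))   # a midpoint term, (z + x)/2
```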

For further details on model-guided term selection (including term selection for linear integer arithmetic), we refer the reader to Farzan and Kincaid [8]. While this paper so far has focused on satisfiability of LRA, we note that Algorithm 1 is a decision procedure for any theory that admits a term selection function with finite image. In fact, the only change required is to swap select with an appropriate term selection function for the desired theory.

6 Synthesizing Fine-Grained Strategies

Section 4 presents an algorithm that computes a winning strategy skeleton, which either proves or refutes satisfiability of a LRA sentence. This section shows how to generalize the technique of [8] to compute a winning fine-grained strategy from a winning fine-grained strategy skeleton. As described in Sect. 2, a SAT strategy is a function from plays to either a rational number (for existential quantifiers) or the labels \(L\) and \(R\) (for disjunctions).

Strategies vs Skeletons. In Sects. 3, 4, and 5, our techniques and discussion focused on how to compute a winning strategy skeleton. While computing a winning strategy skeleton is sufficient to determine satisfiability of a formula, it may be insufficient for other tasks, for example program verification and synthesis tasks (e.g., determinizing non-deterministic choices or synthesizing safety conditions). By definition, a strategy skeleton S is winning if some strategy g that conforms to S is winning.

Computing Winning Strategies. This section focuses on how to extract a winning strategy from a winning strategy skeleton S for the game \(G(\varphi , M)\). To do so, we construct a system of constrained Horn clauses (CHCs) whose solution we use to produce a winning strategy from the winning skeleton. The produced CHC rules characterize when the strategy skeleton is losing. Since the strategy skeleton is winning, the rules are satisfiable and a model satisfying the CHCs exists. The process starts by labeling each leaf of S with the atom reached, substituting each of the terms instantiating existential quantifiers. Formally, for any path \(\pi \) of S (from the root to a leaf), we label the leaf (rooted at \(\pi \)) with the formula \( subst _\varphi (\lnot \varphi ^\pi , \pi )\). The function \( subst _{\varphi }\) applies a substitution based on the given path in reverse order of the appearance of each existential quantifier.

$$\begin{aligned} subst _\varphi (G, \epsilon ) \triangleq ~& G&\\ subst _\varphi (G, \pi \cdot L) \triangleq ~& subst _\varphi (G, \pi )&\\ subst _\varphi (G, \pi \cdot R) \triangleq ~& subst _\varphi (G, \pi )&\\ subst _\varphi (G, \pi \bullet ) \triangleq ~& subst _\varphi (G, \pi ) &\\ subst _\varphi (G, \pi \cdot t) \triangleq ~& subst _\varphi (G[x \mapsto t], \pi ) &\text {where}~\varphi ^\pi = \exists x. F\\ \end{aligned}$$

For a strategy skeleton S, define its nodes \(N = \left\{ \pi : \exists \pi '.~ \pi \pi ' \in S \right\} \) to be the prefixes of paths in S. Furthermore, define \(\textit{Succ} : N \rightarrow 2^N\) to map each node to its set of immediate successors in N. For each node \(\pi \) of S, we introduce an uninterpreted relation \(R_\pi (x_1, \dots , x_n)\) where \(x_1, ..., x_n = FV (\varphi ^\pi )\) are the free variables of \(\varphi ^\pi \). We produce the following rules:

$$\begin{aligned} subst _\varphi (\lnot \varphi ^\pi , \pi ) \Rightarrow & R_\pi (\dots ) \,\, \text {if }\varphi ^\pi \text { is atomic}\\ \bigg (\bigvee _{\pi ' \in Succ (\pi )}{R_{\pi '}(\dots )} \bigg ) \Rightarrow & R_\pi (\dots ) \,\,\text {if }\varphi ^\pi \text { is conjunctive}\\ \bigg (\bigwedge _{\pi ' \in Succ (\pi )}{R_{\pi '}(\dots )}\bigg ) \Rightarrow & R_\pi (\dots ) \,\, \,\,\,\, \text {otherwise}\\ R_\epsilon (x_1, \dots , x_n) \Rightarrow & x_1 \ne M(x_1) \vee \dots \vee x_n \ne M(x_n) \end{aligned}$$

For each \(\pi \in N\), \(R_\pi \) represents the set of all \(M'\) such that \(\pi \Downarrow S\) loses the game \(G(\varphi ^\pi , M')\). Of note, the last rule requires that \(R_\epsilon \) does not contain M (i.e., that S must win the game \(G(\varphi , M)\)). Since the overall skeleton is winning, the rules are satisfiable. A solution for each relation \(R_\pi \) may be computed using an off-the-shelf CHC solver. Applying the negated solution as a guard for each path of the skeleton produces a winning strategy. Technically, the guards should be determinized to produce a function; however, any such determinization will result in a winning strategy. Formally, for each node \(\pi \) such that \(\varphi ^\pi \) is an existential or disjunctive formula, we produce the function:

$$\begin{aligned} f_\pi (x_1, ..., x_k) \triangleq ~\text {if}~\lnot R_{\pi '_1}~\text {then}~l_1~\text {elif}~\dots ~\text {else}~l_m \end{aligned}$$

where \(FV(\varphi ^\pi ) \triangleq \left\{ x_1, \dots , x_k \right\} \), \( Succ (\pi ) \triangleq \left\{ \pi '_1, \dots , \pi '_m \right\} \), and where each child \(\pi '_i\) is reached with label \(l_i\) in S. Furthermore, for each path \(\pi \in (\mathbb {Q} \cup \left\{ L, R \right\} )^*\) such that \(\varphi ^\pi \) is an existential or disjunctive formula, define \(f'(\pi )\) to be \(f_\pi (c_1, \dots , c_n)\) where each \(c_i = M^\pi (x_i)\) for each free variable \(x_i \in FV(\varphi ^\pi )\). Finally, define \(f(\pi )\) to be \(f'(\pi )\) if \(f'(\pi ) \in \left\{ L, R \right\} \) and otherwise define \(f(\pi )\) to be \(\llbracket f'(\pi ) \rrbracket ^{M^\pi }\). The function f is a strategy conforming to S that wins the game \(G(\varphi , M)\).
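For the running example, the strategy obtained this way amounts to the following function from plays to moves. This is a hand-written rendering that realizes the same choices as the strategy extracted in Example 4 below; positions 0 and 1 of a play hold UNSAT's values for x and z.

```python
# A concrete winning SAT strategy for the running example, written directly as
# a function from the moves played so far to SAT's next move (cf. Def. 1).
# It realizes the same choices as the strategy extracted in Example 4.
def f(play):
    xv, zv = play[0], play[1]           # UNSAT's instantiations of x and z
    if len(play) == 2:                  # at the top-level disjunction
        return 'L' if xv == zv else 'R'
    if len(play) == 3:                  # at "exists y": play the midpoint
        return (xv + zv) / 2
    if len(play) == 4:                  # at the inner disjunction
        return 'L' if xv < zv else 'R'

print(f([0, 1]), f([0, 1, 'R']), f([0, 1, 'R', 0.5]))   # R 0.5 L
```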

Consider the winning skeleton \(S_2\) from Example 3. The left side of Example 4 shows the set of rules used to label \(S_2\), depicted as a tree (whose shape follows exactly from the shape of \(S_2\)). The graph should be interpreted as saying that a node’s label is implied by the combination of each of its children’s labels. For nodes 5 and 6, the labels of their children should be combined disjunctively; otherwise, the labels of the children should be combined conjunctively. For example, the rule for node 1 should be read as \(R_2(\bar{x},\bar{z}) \wedge R_3(\bar{x}, \bar{z}) \Rightarrow R_1()\), while the rule for node 5 should be read as \(R_7(\bar{x},\bar{z}) \vee R_8(\bar{x},\bar{z}) \Rightarrow R_5(\bar{x},\bar{z})\). The middle column of Example 4 shows a possible solution to the set of rules, and the right-hand side shows the winning strategy extracted from \(S_2\) using the given solution. The strategy \(f_{\bullet \bullet }\) states that, given UNSAT ’s choices of \(\bar{x}\) and \(\bar{z}\) to instantiate x and z, if UNSAT chose equal values for x and z then SAT will choose the left branch—which results in SAT ’s immediate win—otherwise SAT will choose to play the right branch. \(f_{\bullet \bullet R}\) and \(f_{\bullet \bullet R \frac{\bar{x}+\bar{z}}{2}}\) are interpreted similarly.

Example 4

 

[Figure: the CHC rules for \(S_2\), a possible solution, and the extracted winning strategy (Example 4)]
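To illustrate how such a rule system can be discharged in practice, the sketch below encodes the rules for \(S_2\) and hands them to Z3's Horn-clause engine via SolverFor("HORN"). This is our own rendering (not the paper's tool chain); the node numbering follows the description above but is otherwise ours.

```python
# A sketch of the CHC system of Sect. 6 for the winning skeleton S_2 of the
# running example, discharged with Z3's Horn-clause (Spacer) engine.  Node 1
# is the root, nodes 5 and 6 the conjunctions, nodes 7-10 the leaves.
from z3 import (Real, RealSort, BoolSort, Function, ForAll, Implies,
                And, Or, BoolVal, SolverFor)

x, z = Real('x'), Real('z')
t = (x + z) / 2                          # the term S_2 uses to instantiate y

B, Rs = BoolSort(), RealSort()
R1 = Function('R1', B)                   # root: no free variables
R2, R3, R4, R5, R6, R7, R8, R9, R10 = (
    Function('R%d' % i, Rs, Rs, B) for i in range(2, 11))

rules = [
    # Leaves: the negated atoms, with y already replaced by t = (x+z)/2.
    ForAll([x, z], Implies(x != z,  R2(x, z))),
    ForAll([x, z], Implies(x >= t,  R7(x, z))),
    ForAll([x, z], Implies(t >= z,  R8(x, z))),
    ForAll([x, z], Implies(z >= t,  R9(x, z))),
    ForAll([x, z], Implies(t >= x,  R10(x, z))),
    # Conjunctive nodes: children combined disjunctively.
    ForAll([x, z], Implies(Or(R7(x, z), R8(x, z)),   R5(x, z))),
    ForAll([x, z], Implies(Or(R9(x, z), R10(x, z)),  R6(x, z))),
    # Disjunctive and quantifier nodes: children combined conjunctively.
    ForAll([x, z], Implies(And(R5(x, z), R6(x, z)),  R4(x, z))),
    ForAll([x, z], Implies(R4(x, z),                 R3(x, z))),
    ForAll([x, z], Implies(And(R2(x, z), R3(x, z)),  R1())),
    # phi is a sentence and M is empty, so R1 (= R_epsilon) must be empty.
    Implies(R1(), BoolVal(False)),
]

s = SolverFor('HORN')
s.add(rules)
print(s.check())     # expected: sat, since S_2 is winning
# s.model() then gives an interpretation of each R_i; its negation is the
# guard used to define f_pi in the strategy above.
```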

7 Experimental Evaluation

We extend the tool SimSat—a prototype implementation of the coarse-grained strategy improvement algorithm from Farzan and Kincaid [9]—with the fine-grained strategy improvement procedure. SimSat is implemented in OCaml and uses Z3 [7] to handle ground formulas.

Our experiments aim to answer the following questions: (1) is fine-grained SimSat competitive with state-of-the-art SMT solvers? (2) how much of the difference between coarse-grained SimSat and fine-grained SimSat is driven by considering non-prenex normal form formulas and how much is due to the new strategy improvement algorithm? (3) what is the overhead of computing a winning fine-grained strategy after checking satisfiability of a formula?

We compare fine-grained SimSat to coarse-grained SimSat as well as to Z3 (version 4.11.2) [7], CVC5 (version 1.0.0) [1], and YicesQS [11]. Z3 implements the procedure from Bjørner and Janota [4], CVC5 implements the procedure from Reynolds et al. [18], and YicesQS implements the procedure from Bonacina et al. [5].

We evaluate each tool on three suites of benchmarks: SMT-LIB2, Termination, and Simulation. Each suite is described in detail below. All experiments were conducted on a desktop running Ubuntu 18.04 LTS equipped with a 4-core Intel(R) Xeon(R) processor at 3.2 GHz and 12 GB of memory. Each experiment was allotted a maximum of five minutes to complete.

To answer (1), fine-grained SimSat is compared to coarse-grained SimSat, CVC5, YicesQS, and Z3. This section does not consider other solvers and methods (e.g. quantifier elimination) as Reynolds et al. [18], Bjørner and Janota [4], and Bonacina et al. [5] show that their methods outperform other existing solvers and methods for quantified LIA and LRA formulas. To answer (2) we consider three variants of fine-grained SimSat. The first variant, “prenex,” first converts the input formula to prenex-normal form before running the decision procedure. The second variant, “miniscope,” miniscopes (reduces the scope of quantifiers) the formula before running the decision procedure, and the final variant, “fine,” applies the decision procedure without modifying the input formula. Finally, to answer (3) we wish to measure the efficacy of our algorithm for strategy synthesis, but we know of no other algorithm capable of synthesizing strategies for fine-grained games with which to establish a baseline. Instead, we measure the overhead of synthesizing a strategy on top of synthesizing a strategy skeleton.

SMT-LIB2. This suite of benchmarks consists of 2419 LRA and 616 LIA benchmarks. All benchmarks come from SMT-LIB2 [3]. All LIA benchmarks come from industrial problems. The LRA benchmarks consist of 1800 randomly generated formulas in prenex normal form with varying quantifier depth (see Monniaux [15] for detailed descriptions) and 619 industrial benchmarks.

Termination. This suite of benchmarks consists of 200 LIA formulas. The formulas are derived from Zhu and Kincaid’s [21] method for proving termination of programs (see Sects. 5 and 6 of [21] for details on how the formulas are constructed). Each formula encodes a sufficient condition for a program to be terminating—the program terminates if the formula is valid. The suite consists of one formula for each program in the “polybench” and “termination” benchmarks from Zhu and Kincaid’s evaluation [21].

Simulation. This suite of benchmarks consists of 2060 LIA formulas. The formulas express that the states of two integer message-passing programs are weakly similar for the next n instructions. For complete details see Chap. 4 of [16], which uses fine-grained strategy synthesis to determinize angelic choice when proving weak simulation between two integer message-passing programs.

Fig. 1. (a) A cactus plot showing x instances solved within y seconds per solver. (b) Log-scale plot of strategy synthesis time (y-axis) vs. satisfiability time (x-axis).

Results. Table 1 and Figs. 1a and 1b summarize the results of the experiments. Figure 1a is a cactus plot. Each line represents a solver’s performance. Each point (x, y) on a solver’s line indicates that x instances were each solved in under y seconds by that solver. The closer the line is to the x-axis, the better the solver performed. Table 1 breaks down the results further by (sub-)suite of benchmarks. Each entry shows the number of instances from the given suite of benchmarks solved by the given solver. The “Any” column counts the number of instances solved by any of the solvers, while the “All” column counts the number of instances solved by every solver. The “total” column gives the total number of instances (solved and unsolved) within the suite of benchmarks. For each suite of benchmarks, bolded values highlight which solver(s) solved the most instances of that set of benchmarks.

Overall, all solvers performed well. In fact, Fig. 1a shows that all solvers solved the first 4200 instances in under a second. The figure zooms in on the x-axis after this point to highlight the differences between solvers. The experiments show that the SimSat variants all behaved similarly and outperformed CVC5, Z3, and YicesQS overall. Of the SimSat variants, the miniscoped variant performed best, followed by the normal fine-grained variant, then the fine-grained prenex variant, and lastly the coarse-grained variant.

Looking into each suite of benchmarks further, Table 1 shows that while YicesQS solved the fewest instances overall, it actually solved all LRA formulas. Similarly, while CVC5 performed the worst on LRA, it was the best performer on LIA instances, solving 23 more LIA instances than the miniscoped SimSat variant—the next best performer. In both scenarios, the miniscoped SimSat variant placed a close second. CVC5 performed well on the industrial benchmarks; however, it struggled with the randomly generated Mjollnir benchmarks, perhaps due to the bottom-up instantiation strategy of its implemented decision procedure [18]. YicesQS excelled at the LRA formulas but failed to solve many of the simulation and termination benchmarks. Overall, Z3 performed well but struggled more with the UltimateAutomizer and Termination benchmarks—benchmarks where conversion to prenex normal form increases quantifier alternations significantly.

Table 1. Number of instances solved per suite of benchmarks—UltimateAutomizer, psyco, tptp, Mjollnir, keymaera, and Scholl are sub-categories of SMT-LIB2.

Finally, Fig. 1b summarizes the cost of computing a winning fine-grained strategy after checking satisfiability of the given formula—i.e., how much time in seconds it takes to compute a winning strategy from a winning strategy skeleton. Figure 1b plots a point for each formula within the Simulation benchmark. A point has four associated values: (1) its x position represents how much time is required to prove the formula Sat or Unsat (e.g., time to run the “Fine” SimSat variant), (2) its y position represents the amount of time in seconds required to compute a winning strategy from a winning strategy skeleton, (3) its size visually quantifies the number of AST nodes within the produced winning strategy, and (4) a point is blue if the formula is won by SAT and red if it is won by UNSAT. The smallest computed strategy consisted of a single node (move), while the largest strategy consisted of 448 nodes. Across all instances, it took roughly 18.4% extra time to additionally compute a winning strategy over just determining satisfiability of a formula. The maximum time to compute a strategy was 1.4 s.

8 Discussion and Related Works

The closest techniques to Algorithm 1 are the QSMA algorithm of Bonacina et al. [5] and the coarse-grained strategy improvement algorithm of Farzan and Kincaid [8]. The fine-grained and coarse-grained strategy improvement algorithms are similar in that they both use model-based term selection to synthesize counter-strategies and thereby find better and better strategies for each player; however, they differ in a few key ways. Fine-grained strategy synthesis works for formulae that are not in prenex normal form. Additionally, while coarse-grained strategy improvement alternates between skeletons for the two players, computing a counter-strategy to the previous player’s most recent skeleton, the fine-grained strategy improvement algorithm chooses a sub-game to focus on and solve before returning to the current game. The coarse-grained algorithm iterates over “global” strategies, whereas the fine-grained algorithm builds up a strategy by recursively solving sub-games. While Algorithm 1 and QSMA share a similar high-level recursive structure and both use model-based techniques, their methods of solving sub-formulae differ. The QSMA algorithm uses over- and under-approximations to abstract quantified sub-formulae when determining satisfiability of the current formula, whereas Algorithm 1 uses winning strategies of sub-games and model-based term selection to synthesize counter-strategies and ultimately yield a winning strategy for the current formula.

Algorithm 1 also shares some similarities with QSAT, the quantified satisfiability algorithm of Bjørner and Janota [4], which is also based on the game semantics of FOL. For formulas in prenex normal form, QSAT and Algorithm 1 both fix a strategy for the first quantifier, then recursively compute a strategy for the remaining quantifiers, and backtrack if no winning strategy exists for the current player; however, the notion of strategy used differs. In QSAT, a strategy selects a subset of the literals in the formula—whose free variables belong to the prefix of quantifiers already explored—that constrains the possible strategies of the remaining quantifiers.

Finally, Algorithm 1 shares similarities with the counter-example instantiation method of Reynolds et al. [18]. Both methods work for formulas beyond prenex normal form and use model-based projection techniques to instantiate quantifiers; however, Algorithm 1 uses a top-down approach to synthesize winning strategies, while counter-example instantiation uses a bottom-up technique to instantiate and eliminate quantifiers one quantifier block at a time.

Other methods for LRA/LIA formulas include heuristic instantiation and quantifier elimination. Heuristic instantiation is sound but incomplete and was traditionally the method of choice for many SMT solvers (e.g., CVC4 [2]). Traditional quantifier elimination methods (e.g., Fourier-Motzkin elimination [14], the Ferrante-Rackoff [10] and Weispfenning [20] algorithms for LRA, and Cooper’s algorithm [6] and Pugh’s Omega test [17] for LIA) are sound and complete for LRA/LIA but are extremely costly. Monniaux developed a lazy quantifier elimination method for LRA based on polyhedral projection that performs better in practice [15]. However, Bjørner and Janota [4] show that their algorithm dominates the use of Monniaux’s method. Finally, Komuravelli et al. [13] introduced model-based projection, which under-approximates quantifier elimination for LA and is closely related to the model-based term selection function we use in Sect. 5.1.