1 Introduction

This paper introduces the first method to efficiently generate and check proofs of unsatisfiability for SAT Modulo Monotonic Theories (SMMT), an important fragment of general SMT. The motivation for this work rests on these premises:

  • Proofs of UNSAT are valuable, for propositional SAT as well as SMT. Obviously, an independently checkable proof increases trust, which is important because an incorrect UNSAT result can lead to certifying an incorrect system as correct. Additionally, proofs are useful for computing abstractions [17, 25, 30] via interpolation in many application domains including model checking [30] and software analysis [23, 29].

  • SMMT is a worthy fragment of SMT as a research target. SMMT [9] is a technique for efficiently supporting finite, monotonic theories in SMT solvers. E.g., reachability in a graph is monotonic in the sense that adding edges to the graph only increases reachability, and an example SMMT query would be whether there exists a configuration of edges such that node a can reach node b, but node c can’t reach node d. (More formal background on SMMT is in Sec. 2.2.) The most used SMMT theories are graph reachability and max-flow, along with bit-vector addition and comparison. Applications include circuit escape routing [11], CTL synthesis [28], virtual data center allocation [12], and cloud network security and debugging [2, 8], with the last two applications being deployed in production by Amazon Web Services (AWS). Indeed, our research was specifically driven by industrial demand.

  • DRAT is a desirable proof format. (Here, we include related formats like DRUP [27], GRIT [19], and LRAT [18]. DRAT is explained in Sec. 2.1.) For an independent assurance of correctness, the proof checker is the critical, trusted component, and hence must be as trustworthy as possible. For (propositional) SAT, the community has coalesced around the DRAT proof format [37], for which there exist independent, efficient proof checkers [37], mechanically verified proof checkers [38], and even combinations that are fast as well as mechanically proven [18]. The ability to emit DRAT proof certificates has been required for solvers in the annual SAT Competition since 2014.

    Unfortunately, DRAT is propositional, so general SMT solvers need additional mechanisms to handle theory reasoning [6]. For example, Z3 [32] outputs natural-deduction-style proofs [31], which can be reconstructed inside the interactive theorem prover Isabelle/HOL [14, 15]. Similarly, veriT [16] produces resolution proof traces with theory lemmas, and supports proof reconstruction in both Coq [1] and Isabelle [4, 5, 21]. As a more general approach, CVC4 [7] produces proofs in the LFSC format [36], which is a meta-logic that allows describing theory-specific proof rules for different SMT theories. Nevertheless, given the virtues of DRAT, SMT solvers have started to harness it for the propositional reasoning, e.g., CVC4 supports DRAT proofs for bit-blasting of the bit-vector theory, which are then translated into LFSC [34], and Otoni et al. [33] propose a DRAT-based proof certificate format for propositional reasoning that they extend with theory-specific certificates. However, in both cases, the final proof certificate is not purely DRAT, and any theory lemmas must be checked by theory-specific certificate checkers.

  • For typical finite-domain theories, defining theory predicates propositionally is relatively straightforward. The skills to design and implement theory-specific proof systems are specialized and not widely taught. In contrast, if we treat a theory predicate as simply a Boolean function, then anyone with basic digital design skills can build a circuit to compute the predicate (possibly using readily available commercial tools) and then apply the Tseitin transform to convert the circuit to CNF. (This is known as “bit-blasting”, but we will see later that conventional bit-blasting is too inefficient for SMMT.)
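As a small, self-contained illustration of this route (our own example, not taken from any particular tool), the Tseitin transform introduces one fresh variable per gate and a handful of clauses relating it to the gate's inputs; a bit-blaster simply applies such local encodings gate by gate:

```python
def tseitin_and(out, in1, in2):
    """Tseitin clauses for the constraint out <-> (in1 AND in2).
    Literals are nonzero integers (DIMACS-style); negation is arithmetic minus."""
    return [
        (-in1, -in2, out),   # in1 & in2 -> out
        (in1, -out),         # out -> in1
        (in2, -out),         # out -> in2
    ]

# Example: the gate y = AND(x1, x2), with x1=1, x2=2, y=3, becomes three clauses.
print(tseitin_and(3, 1, 2))  # [(-1, -2, 3), (1, -3), (2, -3)]
```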

From a practical, user-level perspective, the contribution of this paper is the first efficient proof-generating method for SMMT. Our method scales to industrial-size instances and generates pure DRAT proofs.

From a theoretical perspective, the following contributions underlie our method:

  • We introduce the notion of one-sided propositional definitions for refutation proofs. Having different definitions for a predicate vs. its complement allows for more compact and efficient constructions.

  • We show that SMMT theories expressed in Horn theory enable linear-time (in the size of the Horn definition) theory lemma checking via reverse unit propagation (RUP), and hence DRAT.

  • We propose an on-the-fly transformation that uses hints from the SMMT solver to over-approximate any CNF encoding of a monotonic theory predicate into a linear-size Horn upper-bound, and prove that the Horn upper-bound is sufficient for checking theory lemmas in any given proof via RUP.

  • We present efficient, practical propositional definitions for the main monotonic theories used in practice: bit-vector summation and comparison, and reachability and max-flow on symbolic graphs.

(As an additional minor contribution, we adapt the BackwardCheck procedure from DRAT-Trim [27] for use with SMT, and evaluate its effectiveness in our proof checker.)

We implemented our method in the MonoSAT SMMT solver [10]. For evaluation, we use two sets of benchmarks derived from practical, industrial problems: multilayer escape routing [11], and cloud network reachability [2]. Our results show minimal runtime overhead on the solver (geometric mean slowdown 7.4%, worst-case 28.8% in our experiments), and we generate and check proofs for many problem instances that are otherwise intractable.

2 Background

2.1 Propositional SAT and DRAT

We assume the reader is familiar with standard propositional satisfiability on CNF. Some notational conventions in our paper are: we use lowercase letters for literals and uppercase letters for clauses (or other sets of literals); for a literal x, we denote the variable of x by var(x); we will interchangeably treat an assignment either as a mapping of variables to truth values \(\top \) (true) or \(\bot \) (false), or as a set of non-conflicting (i.e., not containing both x and its complement \(\bar{x}\)) literals, with positive (negative) literals for variables assigned \(\top \) (\(\bot \)); assignments can be total (assigning truth values to every variable) or partial (leaving some variables unassigned); and given a formula F and an assignment M, we use the vertical bar \(F|_{M}\) to denote reducing the formula by the assignment, i.e., discarding falsified literals from clauses and satisfied clauses from the formula. (An empty clause denotes \(\bot \); an empty formula, \(\top \).) For example, if \(F = (x \vee y) \wedge (\bar{x} \vee z)\) and \(M = \{x\}\), then \(F|_{M} = (z)\).

This paper focuses on proofs of unsatisfiability. In proving a formula F UNSAT, a clause C is redundant if F and \(F\wedge C\) are equisatisfiable [26]. A proof of unsatisfiability is simply a sequence of redundant clauses culminating in \(\bot \), but where the redundancy of each clause can be easily checked. However, checking redundancy is coNP-hard. A clause that is implied by F, which we denote by \(F\models C\), is guaranteed redundant, and we can check implication by checking the unsatisfiability of \(F\wedge \overline{C}\), but this is still coNP-complete. Hence, proofs use restricted proof rules that guarantee redundancy. For example, the first automated proofs of UNSAT used resolution to generate implied clauses, until deriving \(\bot \) by resolving a literal l with its complement \(\bar{l}\) [20, 39]. In practice, however, resolution proofs grow too large on industrial-scale problems.

DRAT [37] is a much more compact and efficient system for proving unsatisfiability. It is based on reverse unit propagation (RUP), which we explain here. A unit clause is a clause containing one literal. If L is the set of literals appearing in the unit clauses of a formula F, the unit clause rule computes \(F|_{L}\), and the repeated application of the unit clause rule until a fixpoint is called unit propagation (aka Boolean constraint propagation). Given a clause C, its negation \(\overline{C}\) is a set of unit clauses, and we write \(F \vdash _{1}C\) if \(F \wedge \overline{C}\) derives a conflict through unit propagation. Notice that \(F\vdash _{1}C\) implies \(F\models C\), but, unlike general implication, \(\vdash _{1}\) is computationally easy to check. The key insight [24] behind RUP is that modern CDCL SAT solvers make progress by deriving learned clauses, whose redundancy is, by construction, checkable via unit propagation. Proof generation, therefore, is essentially just logging the sequence of learned clauses leading to \(\bot \), and proof checking is efficiently checking \(\vdash _{1}\) of the relevant learned clauses.
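To make the \(\vdash _{1}\) check concrete, here is a minimal, illustrative sketch of unit propagation and the RUP check (our own simplified code, not that of any actual DRAT checker; the frozenset-of-integers clause representation is an assumption for illustration only):

```python
def unit_propagate(clauses):
    """Apply the unit clause rule to a fixpoint.
    Returns (conflict, assignment), where assignment is a set of true literals."""
    assignment = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                     # clause already satisfied
            remaining = [lit for lit in clause if -lit not in assignment]
            if not remaining:                # all literals falsified: conflict
                return True, assignment
            if len(remaining) == 1:          # unit clause: assign its literal
                assignment.add(remaining[0])
                changed = True
    return False, assignment

def rup_check(clauses, lemma):
    """Check F |-1 C: add the negation of C as unit clauses and propagate."""
    negated_units = [frozenset([-lit]) for lit in lemma]
    conflict, _ = unit_propagate(list(clauses) + negated_units)
    return conflict

# (x) and (-x or y) imply (y), and RUP finds the conflict:
assert rup_check([frozenset({1}), frozenset({-1, 2})], [2])
```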

2.2 SAT Modulo Monotonic Theories (SMMT)

We define a Boolean positive monotonic predicate as follows:

Definition 1 (Positive Monotonic Predicate)

A predicate \(p :\{0, 1\}^{n} \rightarrow \{0, 1\}\) is positively monotonic with respect to the input \(a_i\) iff

$$\begin{aligned} p(a_1, \ldots , a_{i-1}, 0, a_{i+1}, \ldots ) \implies p(a_1, \ldots , a_{i-1}, 1, a_{i+1}, \ldots ) \end{aligned}$$

The predicate p is a positive monotonic predicate iff p is positively monotonic with respect to every input.

Negative monotonic predicates are defined analogously. If a predicate p is positively monotonic w.r.t. some inputs \(A^+\) and negatively monotonic w.r.t. the rest of inputs \(A^-\), it is always possible to rewrite the predicate as a positive monotonic predicate \(p'\) over input \(A^+\) and \(\{ \overline{a} \mid a \in A^-\}\). For ease of exposition, and without loss of generality, we will describe our theoretical results assuming positive monotonic predicates only (except where noted otherwise).

Given a monotonic predicate p over input A, we will use boldface \(\textbf{p}\) as the predicate atom for p, i.e., the predicate atom is a Boolean variable in the CNF encoding of the theory, indicating whether p(A) is true or not. The theory of \(\textbf{p}\) is the set of valid implications in the form of \(\mathcal {M}_A\Rightarrow \textbf{p}\) where \(\mathcal {M}_A\) is a partial assignment over A.

The following are the most used monotonic theories:

  • Graph Reachability: Given a graph \(G = (V, E)\), where V and E are sets of vertices and edges, the graph reachability theory contains the reachability predicates \(reach_{u}^{v}\) on the input variables \(e_1, e_2 \ldots e_m \in E\), where \(u,v\in V\). The predicate holds iff node u can reach v in the graph G by using only the subset of edges whose corresponding variable \(e_i\) is true. The predicate is positively monotonic because enabling more edges will not make reachable nodes unreachable, and disabling edges will not make unreachable nodes reachable.

  • Bit-Vector Summation and Comparison: Given two bit-vectors (BV) \(\vec {a}\) and \(\vec {b}\), the theory of BV comparison contains the predicate \(\vec {a} \ge \vec {b}\), whose inputs are the bits of \(\vec {a}\) and \(\vec {b}\). The predicate holds iff the value (interpreted as an integer) of \(\vec {a}\) is greater than or equal to the value of \(\vec {b}\). The predicate is positively monotonic for the variables of \(\vec {a}\) and negatively monotonic for the variables of \(\vec {b}\), because changing any 0 to a 1 in \(\vec {a}\) makes it bigger, and changing any 1 to 0 in \(\vec {b}\) makes it smaller. Similarly, given two sets of BVs \(\vec {A}\) and \(\vec {B}\), the theory of comparison between sums contains the predicate \(\sum {\vec {A}} \ge \sum {\vec {B}}\), whose inputs are the Boolean variables from all BVs in \(\vec {A}\) and \(\vec {B}\). The predicate holds iff the sum of the BVs in \(\vec {A}\) is greater than or equal to the sum of the BVs in \(\vec {B}\), and is positively monotonic in \(\vec {A}\) and negatively monotonic in \(\vec {B}\).

  • S-T Max-Flow: Given a graph \(G = (V, E)\), for every edge \(e\in E\), let its capacity be represented by the BV \(\vec {cap}_{e}\). For two vertices \(s, t\in V\), and a BV \(\vec {z}\), the max-flow theory contains the predicates \(MF_{s}^{t} \ge \vec {z}\) over the input variables \(e_1, e_2 \ldots e_n \in E\) and \(\vec {cap}_{e_1}, \vec {cap}_{e_2} \ldots \vec {cap}_{e_n}\). The predicate holds iff the maximum flow from the source s to the target t is greater than or equal to \(\vec {z}\), using only the enabled edges (as in the reachability theory) with their specified capacities.

The SMMT Framework [10] describes how to extend a SAT or SMT solver with Boolean monotonic theories. The framework has been implemented in the SMT solver MonoSAT, which has been deployed in production by Amazon Web Services to reason about a wide range of network properties [2, 8]. The framework performs theory propagation and clause learning for SMMT theories as follows: (In this description, we use P for the set of positive monotonic predicates, and S for the set of Boolean variables that are arguments to the predicates.)

  • Theory Propagation: Given a partial assignment M, let \(M_s\) be the restriction of M to the variables in S. The SMMT framework forms two complete assignments extending \(M_s\): one with all unassigned variables of S assigned to false (\(M_s^-\)), and one with all unassigned variables of S assigned to true (\(M_s^+\)). Since \(M_s^-\) and \(M_s^+\) are each complete assignments of S, they can be used to determine the value of P atoms. Since every \(\textbf{p} \in P\) is positively monotonic, (1) if \(M_s^- \Rightarrow \textbf{p}\), then \(M_s \Rightarrow \textbf{p}\), and (2) if \(M_s^+ \Rightarrow \lnot \textbf{p}\), then \(M_s \Rightarrow \lnot \textbf{p}\). The framework uses \(M_s^-\) and \(M_s^+\) as the under- and over-approximation for theory propagation over P atoms. Moreover, the framework attaches \(M_s\Rightarrow \textbf{p}\) or \(M_s \Rightarrow \lnot \textbf{p}\) as the reason clause for the theory propagation. (A code sketch of this scheme follows the list.)

  • Clause Learning: For some predicates, a witness can be efficiently generated during theory propagation, as a sufficient condition to imply the predicate p. For example, in graph reachability, suppose \(M_s^- \Rightarrow \mathbf {reach_{u,v,G}}\) for a given under-approximation \(M_s^-\). Standard reachability algorithms can efficiently find a set of edges \(M_s' \subseteq M_s\) that forms a path from u to v. When such a witness is available, instead of learning \(M_s \Rightarrow \textbf{p}\), the framework would use the path witness to learn the stronger clause \(M_s' \Rightarrow \textbf{p}\). Witness-based clause learning is theory specific (and implementation specific); if a witness is not available or cannot be efficiently generated in practice for a particular predicate, the framework will learn the weaker clause \(M_s \Rightarrow \textbf{p}\).
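To illustrate the propagation and clause-learning scheme above for a single reachability predicate, here is a minimal sketch (illustrative Python under our own data-structure assumptions, not MonoSAT's implementation): the under-approximation treats unassigned edges as false, the over-approximation treats them as true, and a path found under the under-approximation is returned as the witness for the learned clause.

```python
from collections import defaultdict, deque

def reachable(enabled_edges, src):
    """BFS over the enabled edges; returns a parent map over all reached nodes,
    so a concrete path can be reconstructed as a witness."""
    adj = defaultdict(list)
    for u, v in enabled_edges:
        adj[u].append(v)
    parent, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def propagate_reach(edge_assignment, all_edges, src, dst):
    """edge_assignment maps edge -> True / False / None (None = unassigned).
    Returns ('p', witness_path), ('not p', None), or (None, None)."""
    under = [e for e in all_edges if edge_assignment.get(e) is True]
    over  = [e for e in all_edges if edge_assignment.get(e) is not False]
    parent = reachable(under, src)
    if dst in parent:                     # M_s^- => p, hence M_s => p
        path, node = [], dst
        while parent[node] is not None:   # reconstruct a path as the witness
            path.append((parent[node], node))
            node = parent[node]
        return 'p', path                  # learn the stronger clause: path => p
    if dst not in reachable(over, src):   # M_s^+ => not p, hence M_s => not p
        return 'not p', None              # learn: M_s => not p
    return None, None                     # the predicate is still undetermined
```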

Fig. 1. Overview of Our Proof Generation and Checking Method. Inputs (the problem instance file and the propositional definitions of theory predicates) are colored blue; new and modified components are colored orange. Starting from the top-left is the SMMT problem instance, which is solved by MonoSAT. We extended MonoSAT to emit a DRAT-style proof certificate, consisting of learned (via propositional or theory reasoning) clauses, similar to what is proposed in [33]. The proof certificate is optionally pre-processed by drat-trim-theory, in which we modified the BackwardCheck procedure [27] to perform a backward traversal from the final \(\bot \), outputting a subset of lemmas sufficient (combined with the original clause database) to derive \(\bot \). This is extra work (since a full BackwardCheck is later performed by unmodified drat-trim for the final proof verification at the top-right of the figure), but allows us to avoid verifying some theory lemmas that are not relevant to the final proof. The resulting core lemmas are split between the propositional learned clauses, which go straight (right) to drat-trim, and the theory learned clauses, which are our proof obligations. The heart of our method is the instantiation-based Horn approximation (bottom-center, described in Sec. 4). In this step, we use the proof obligations as hints to transform the pre-defined, propositional theory definitions (bottom-left, examples in Sec. 5) into proof-specific Horn definitions. The resulting proof-specific definitions together with the CNF from the input instance can efficiently verify UNSAT using unmodified drat-trim [37].

3 Overview of Our Method

Most leading SMT solvers, including MonoSAT, use the DPLL(T) framework [22], in which a CDCL propositional SAT solver coordinates one or more theory-specific solvers. A DPLL(T) solver behaves similarly to a CDCL propositional SAT solver — making decisions, performing unit propagation, analyzing conflicts, learning conflict clauses — except that the theory solvers will also introduce new clauses (i.e., theory lemmas) into the clause database, which were derived via theory reasoning, and whose correctness relies on the semantics of the underlying SMT theory. These theory lemmas cannot (in general) be derived from the initial clause database, and so cannot be verified using DRAT. Therefore, the problem of producing a proof of UNSAT in SMT reduces to the problem of proving the theory lemmas.

A direct approach would be to have the SMT solver emit a partial DRAT proof certificate, in which each theory lemma is treated as an axiom. This partial proof is DRAT-checkable, but each theory lemma becomes a new proof obligation. The theory lemmas could subsequently be verified using external (non-DRAT), trusted, theory-specific proof-checking procedures. This is the approach recently proposed by Otoni et al. [33].

We take such an approach as a starting point, but instead of theory-specific proof procedures, we use propositional definitions of the theory semantics to add clauses sufficient to prove (by RUP) the theory lemmas. The resulting proof is purely DRAT, checkable via standard DRAT checkers, with no theory-specific proof rules. Fig. 1 explains our approach in more detail; Sec. 4 dives into how we derive the added clauses; and Sec. 5 gives sample propositional theory definitions.

4 Instantiation-Based Horn Approximation

This section describes how we derive a set of clauses sufficient to make theory lemmas DRAT-checkable. Section 4.1 introduces one-sided propositional definitions and motivates the goal of a compact, Horn-clause-based definition. Section 4.2 gives a translation from an arbitrary propositional definition of a monotonic predicate to a monotonic definition, as an intermediate step toward constructing the final proof-specific, Horn definition in Section 4.3.

Fig. 2. Directed Graph for Running Example in Sec. 4. In the symbolic graph (left), the reachability predicate \(reach_s^t\) is a function of the edge inputs \(a,\ldots ,h\).

4.1 One-Sided Propositional Definitions and Horn Clauses

Definition 2 (Propositional Definition)

Let \(\textbf{p}\) be the positive predicate atom of predicate p over Boolean arguments A. A propositional definition of \(\textbf{p}\), denoted as \(\varSigma _{\textbf{p}}\), is a CNF formula over variables \(V \supseteq (var(\textbf{p}) \cup A)\) such that for every truth assignment \(\mathcal {M}\) to the variables in A, (1) \(\varSigma _{\textbf{p}}|_{\mathcal {M}}\) is satisfiable and (2) \(\varSigma _{\textbf{p}} \models (\mathcal {M}\Rightarrow \textbf{p})\) if and only if \(p(\mathcal {M})\) is \(\top \). The propositional definition of \(\mathbf {\bar{p}}\) is defined analogously.

For example, the Tseitin-encoding of a logic circuit that computes \(p(\mathcal {M})\) satisfies this definition. However, note that a propositional definition for \(\textbf{p}\) can be one-sided: it is not required that \(\varSigma _{\textbf{p}} \models (\mathcal {M}\Rightarrow \mathbf {\bar{p}})\) when \(p(\mathcal {M})\) is \(\bot \). That case is handled by a separate propositional definition for \(\mathbf {\bar{p}}\). We will see that this one-sidedness gives some freedom to admit more compact definitions.

Given a propositional definition \(\varSigma _{\textbf{p}}\), any theory lemma \(\mathcal {M}_A\Rightarrow \textbf{p}\) is a logical consequence of \(\varSigma _{\textbf{p}}\), but this might not be RUP checkable. One could prove \(\varSigma _{\textbf{p}} \models (\mathcal {M}_A\Rightarrow \textbf{p})\) by calling a proof-generating SAT solver on \(\varSigma _{\textbf{p}}\wedge \overline{\mathcal {M}_A\Rightarrow \textbf{p}}\), i.e., bit-blasting the specific lemma, but we will see experimentally (in Sec. 6) that this works poorly. However, if the propositional definition is limited to Horn theory (i.e., each clause has at most one positive literal), then every SMMT theory lemma can be proven by unit propagation:

Theorem 1

Let p be a positive monotonic predicate over input A, and let \(\varSigma _{\textbf{p}}^{h}\) be a propositional definition for the positive atom \(\textbf{p}\). If \(\varSigma _{\textbf{p}}^{h}\) is a set of Horn clauses, then for any theory lemma \(\mathcal {M}_A\Rightarrow \textbf{p}\) where \(\mathcal {M}_A\) is a set of positive atoms from A, \(\varSigma _{\textbf{p}}^{h} \models (\mathcal {M}_A\Rightarrow \textbf{p})\) if and only if \(\varSigma _{\textbf{p}}^{h} \vdash _{1}(\mathcal {M}_A\Rightarrow \textbf{p})\).

Proof

Suppose \(\varSigma _{\textbf{p}}^{h} \models (\mathcal {M}_A\Rightarrow \textbf{p})\); then \(\varSigma _{\textbf{p}}^{h} \wedge (\mathcal {M}_A\wedge \mathbf {\bar{p}})\) is unsatisfiable. Since \(\mathcal {M}_A\wedge \mathbf {\bar{p}}\) is equivalent to a set of unit clauses, \(\varSigma _{\textbf{p}}^{h} \wedge (\mathcal {M}_A\wedge \mathbf {\bar{p}})\) still contains only Horn clauses, and unit propagation is a complete decision procedure for Horn satisfiability, so unit propagation on this formula must derive a conflict, i.e., \(\varSigma _{\textbf{p}}^{h} \vdash _{1}(\mathcal {M}_A\Rightarrow \textbf{p})\). The converse direction holds because \(\vdash _{1}\) implies \(\models \).    \(\square \)

Example 1

Let \(reach_{s}^{t}\) be the reachability predicate for the directed graph shown in Fig. 2 (left). The definition schema for graph reachability in Sec. 5 yields the following set of Horn clauses: \(\varSigma _{\mathbf {reach_{s}^{t}}}^{h}:=\) (1) \(\overline{\textbf{s}}\vee \overline{\textbf{a}}\vee \textbf{v1} \), (2) \(\overline{\textbf{v1}} \vee \overline{\textbf{c}} \vee \textbf{v3}\), (3) \(\overline{\textbf{v3}} \vee \overline{\textbf{h}} \vee \textbf{t}\), (4) \(\overline{\textbf{s}} \vee \overline{\textbf{b}} \vee \textbf{v2} \), (5) \(\overline{\textbf{v3}} \vee \overline{\textbf{e}} \vee \textbf{v2} \), (6) \(\overline{\textbf{v2}} \vee \overline{\textbf{d}} \vee \textbf{v4} \), (7) \(\overline{\textbf{v4}} \vee \overline{\textbf{f}} \vee \textbf{v3} \), (8) \(\overline{\textbf{v4}} \vee \overline{\textbf{g}} \vee \textbf{t} \), (9) \(\overline{\textbf{t}} \vee \mathbf {reach_{s}^{t}} \), (10) \(\textbf{s}\), where \(v1,\ldots ,v4\), s, and t are auxiliary variables. Any theory lemma of the form \(\mathcal {M}_A\Rightarrow \textbf{p}\), e.g., \(\overline{\textbf{a}} \vee \overline{\textbf{c}} \vee \overline{\textbf{h}} \vee \mathbf {reach_{s}^{t}}\), can be proven from \(\varSigma _{\mathbf {reach_{s}^{t}}}^{h}\) via unit propagation: asserting the negation \(\{\textbf{a}, \textbf{c}, \textbf{h}, \overline{\mathbf {reach_{s}^{t}}}\}\) and propagating clauses (10), (1), (2), (3) derives \(\textbf{t}\), and then clause (9) derives \(\mathbf {reach_{s}^{t}}\), a conflict. Also, note that one-sidedness allows a simpler definition, despite the cycle in the graph, e.g., consider assignment \(\mathcal {M}= \{\overline{\textbf{a}},\overline{\textbf{b}},\overline{\textbf{c}},\textbf{d},\textbf{e},\textbf{f},\textbf{g},\textbf{h}\}\). Then, \(reach_s^t =\bot \), but \(\varSigma _{\mathbf {reach_{s}^{t}}}^{h} \not \models (\mathcal {M}\Rightarrow \mathbf {\overline{reach_s^t}})\).

Horn theory has limited expressiveness, but it is always sufficient to encode a propositional definition for any SMMT theory: Given a monotonic predicate atom \(\textbf{p}\), we can always encode a Horn propositional definition \(\varSigma _{\textbf{p}}^{h}\) as the conjunction of all valid theory lemmas from the theory of \(\textbf{p}\). This is because every theory lemma is restricted to the form \((\mathcal {M}_A\Rightarrow \textbf{p})\), where \(\mathcal {M}_A\) is a set of positive atoms (due to monotonicity). Hence, \(\varSigma _{\textbf{p}}^{h}\) is a set of Horn clauses. However, such a naïve encoding blows up exponentially. Instead, we will seek a compact Horn definition \(\varSigma _{\textbf{p}}^{h}\) that approximates a non-Horn propositional definition \(\varSigma _{\textbf{p}}\):

Definition 3 (Horn Upper-Bound)

Let \(\varSigma _{\textbf{p}}\) be a propositional definition of \(\textbf{p}\). A set of Horn clauses \(\varSigma _{\textbf{p}}^{h\uparrow }\) is a Horn upper-bound if \(\varSigma _{\textbf{p}} \models \varSigma _{\textbf{p}}^{h\uparrow }\).

For the strongest proving power, we want the tightest Horn upper-bound possible. Unfortunately, the least Horn upper-bound of a non-Horn theory can still contain exponentially many Horn clauses [35]. Fortunately, we do not actually need a Horn upper-bound of the exact theory definition, but only one that captures enough of the definition to prove the fixed set of theory lemmas that constitute the proof obligations. This motivates the next definition.

Definition 4 (Proof-Specific Horn Definition)

Given an exact definition \(\varSigma _{\textbf{p}}\) and a set of theory lemmas \(\mathbb {O}:= \{C_1, \ldots C_n \}\) from the theory of \(\textbf{p}\), a proof-specific Horn definition of \(\textbf{p}\) is a Horn upper-bound \(\varSigma _{\textbf{p}}^{h\uparrow }\) of \(\varSigma _{\textbf{p}}\) such that \(\varSigma _{\textbf{p}}^{h\uparrow } \vdash _{1}C\) for every \(C \in \mathbb {O}\).

The next two subsections show how to derive such compact, proof-specific Horn definitions.

Example 2

Continuing Ex. 1, given a proof obligation \(\mathbb {O} \) with two theory lemmas: \(\{ \overline{\textbf{a}} \vee \overline{\textbf{c}} \vee \overline{\textbf{h}} \vee \mathbf {reach_{s}^{t}}, \; \overline{\textbf{b}} \vee \overline{\textbf{d}} \vee \overline{\textbf{g}} \vee \mathbf {reach_{s}^{t}} \}\), the subset of Horn clauses with IDs (1), (2), (3), (4), (6), (8), (9) and (10) is a proof-specific Horn definition for \(\mathbf {reach_{s}^{t}}\), which can be visualized in Fig. 2 (middle).

Given a proof obligation \(\mathbb {O}\), we can make all theory lemmas in \(\mathbb {O}\) DRAT checkable if we have exact propositional definitions for the theories and if we can dynamically transform them into compact, proof-specific Horn definitions at the time of proof checking. We simply add these additional clauses to the input of the DRAT-proof-checker.

4.2 Monotonic Definitions

The derivation of compact, proof-specific Horn definitions from arbitrary propositional definitions is a two-step process: we first show that every propositional definition for a monotonic predicate atom can be converted into a monotonic definition of linear size (this section), and then use theory lemmas in the proof obligations to create the Horn approximation of the definition (Sec. 4.3).

Definition 5 (Monotonic Definition)

Let a monotonic predicate p over input A be given. A CNF formula \(\varSigma _{\textbf{p}}^{+}\) is a monotonic definition of the positive predicate atom \(\textbf{p}\) if \(\varSigma _{\textbf{p}}^{+}\) is a propositional definition of \(\textbf{p}\), and it satisfies the following syntax restrictions: (1) \(\varSigma _{\textbf{p}}^{+}\) does not contain positive atoms from A, (2) \(\varSigma _{\textbf{p}}^{+}\) does not contain \(\mathbf {\bar{p}}\), and (3) \(\textbf{p}\) appears only in Horn clauses. The monotonic definition for \(\mathbf {\bar{p}}\) is defined analogously.

We now define the procedure, MonoT, for transforming a propositional definition into a linear-size monotonic definition:

Definition 6 (Monotonic Transformation)

Let a monotonic predicate p over input A and a propositional definition \(\varSigma _{\textbf{p}}\) for the positive predicate atom \(\textbf{p}\) be given. \(\textsc {MonoT}(\textbf{p}, \varSigma _{\textbf{p}})\) is the result of the following transformations on \(\varSigma _{\textbf{p}}\): (1) replace every occurrence of an input atom (\(\textbf{a}\) for \(a \in A\)) in \(\varSigma _{\textbf{p}}\) with a new atom \(\mathbf {a'}\) (\(\overline{\textbf{a}}\) is replaced with \(\overline{\mathbf {a'}}\)), (2) replace every occurrence of \(\textbf{p}\) and \(\mathbf {\overline{p}}\) with \(\mathbf {p'}\) and \(\mathbf {\overline{p'}}\) respectively, and (3) add the following Horn clauses: \(\textbf{a} \Rightarrow {\mathbf {a'}}\) for every \(a \in A\), and \(\mathbf {p'} \Rightarrow \textbf{p}\).
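The transformation is purely syntactic, as the following sketch shows (an illustrative rendering of Def. 6 under an integer-literal clause representation of our own choosing, not MonoProof's code):

```python
def mono_transform(p_var, input_vars, clauses, fresh_start):
    """Sketch of MonoT (Def. 6). Variables are positive ints, literals signed
    ints, clauses tuples of literals; fresh_start is the first unused variable
    id, used to allocate the primed copies a' and p'."""
    prime, next_var = {}, fresh_start
    for var in list(input_vars) + [p_var]:
        prime[var] = next_var
        next_var += 1

    def rename(lit):
        var = abs(lit)
        if var in prime:                 # steps (1) and (2): a -> a', p -> p'
            return prime[var] if lit > 0 else -prime[var]
        return lit                       # auxiliary variables are left untouched

    renamed = [tuple(rename(lit) for lit in clause) for clause in clauses]
    # step (3): the linking Horn clauses  a => a'  for every input a, and  p' => p
    linking = [(-a, prime[a]) for a in input_vars] + [(-prime[p_var], p_var)]
    return renamed + linking
```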

Theorem 2 (Correctness of Monotonic Transformation)

Given a monotonic predicate p over input A and the monotonic predicate atom \(\textbf{p}\), if we have any propositional definition \(\varSigma _{\textbf{p}}\) with n clauses, then \(\textsc {MonoT}(\textbf{p}, \varSigma _{\textbf{p}})\) results in a monotonic definition \(\varSigma _{\textbf{p}}^{+}\) with at most \(n + |A| + 1\) clauses.

The proof of Theorem 2 is in the extended version of this paper. The correctness relies on the fact that the predicate p is indeed monotonic, and that our propositional definitions need only be one-sided. If the monotonic definition is already in Horn theory, it can be used directly to verify theory lemmas via RUP; otherwise, we proceed to Horn approximation, described next.

4.3 Instantiation-Based, Proof-Specific Horn Definition

We present the transformation from monotonic definitions into proof-specific Horn definitions. The transformation exploits the duality between predicates’ positive and negative definitions.

Lemma 1 (Duality)

Let p be a monotonic predicate over Boolean arguments A. Suppose \(\varSigma _{\textbf{p}}\) and \(\varSigma _{\bar{\textbf{p}}}\) are positive and negative propositional definitions, respectively. For every assignment \(\mathcal {M}\) to the variables in A:

  1.

    \(\varSigma _{\textbf{p}} \models (\mathcal {M}\Rightarrow \textbf{p})\) if and only if \(\varSigma _{\bar{\textbf{p}}} \wedge \mathcal {M}\wedge \textbf{p}\) is satisfiable.

  2.

    \(\varSigma _{\bar{\textbf{p}}} \models (\mathcal {M}\Rightarrow \bar{\textbf{p}})\) if and only if \(\varSigma _{\textbf{p}} \wedge \mathcal {M}\wedge \bar{\textbf{p}}\) is satisfiable.

The proof of Lemma 1 is in the extended version of this paper. The duality of the positive (\(\varSigma _{\textbf{p}}\)) and negative (\(\varSigma _{\bar{\textbf{p}}}\)) definitions allows us to over-approximate positive (negative) definitions by instantiating the negative (positive) definitions.

Example 3

Returning to Ex. 1 and Fig. 2, consider the assignment \(M = \{\textbf{a}, \textbf{b}, \overline{\textbf{c}}, \overline{\textbf{d}}, \textbf{e}, \textbf{f}, \textbf{g}, \textbf{h}\}\). Since s cannot reach t under this assignment, any propositional definition \(\varSigma _{\overline{\mathbf {reach_s^t}}}\) must imply \(M \Rightarrow \overline{\mathbf {reach_{s}^{t}}}\). Dually, \(\varSigma _{\mathbf {reach_{s}^{t}}}^{h} \wedge M \wedge \overline{\mathbf {reach_{s}^{t}}}\) is satisfiable, e.g., \(\{ \textbf{s}, \textbf{v1}, \textbf{v2}, \overline{\textbf{v3}}, \overline{\textbf{v4}}, \overline{\textbf{t}}\}\).

Lemma 2 (Instantiation-Based Upper-Bound)

Let a predicate p over input A and a positive definition \(\varSigma _{\textbf{p}}\) be given. For any partial assignment \(M'\) over \(var(\varSigma _{\textbf{p}}) \setminus (var(p) \cup A)\), \(\varSigma _{\textbf{p}}|_{M' \cup \mathbf {\bar{p}}} \Rightarrow \overline{\textbf{p}}\) is an over-approximation of \(\varSigma _{\overline{\textbf{p}}}\).

The proof of Lemma 2 (in the extended paper) relies on the duality in Lemma 1. Lemma 2 enables upper-bound construction and paves the way for constructing an instantiation-based Horn upper-bound of a monotonic definition.

Lemma 3 (Instantiation-Based Horn Upper-Bound)

Given a monotonic predicate p over input A and a positive monotonic definition \(\varSigma _{\textbf{p}}^{+}\), let X represent the set of auxiliary variables: \(var(\varSigma _{\textbf{p}}^{+}) \setminus (A \cup var(p))\). For any complete satisfying assignment \(M_{X\cup A}\) to \(\varSigma _{\textbf{p}}^{+}|_{\bar{\textbf{p}}}\), the formula \((\varSigma _{\textbf{p}}^{+}|_{\bar{\textbf{p}} \cup \mathcal {M}_X}) \Rightarrow \bar{\textbf{p}}\) serves as a Horn upper-bound for any propositional definition of \(\bar{\textbf{p}}\), where \(\mathcal {M}_X\) is the restriction of \(M_{X\cup A}\) to the auxiliary variables X.

(Proof in the extended paper.) Note that the instantiation-based Horn upper-bound of a negative predicate atom \(\bar{\textbf{p}}\) is constructed from a monotonic definition of the positive predicate atom \(\varSigma _{\textbf{p}}^{+}\), and vice-versa.

For a given theory lemma, the instantiation-based Horn upper-bound construction (Lemma 3) enables the verification of the theory lemma if we can find a sufficient “witness” \(\mathcal {M}_X\) for the instantiation. We now prove that a witness always exists for every valid theory lemma and does not exist otherwise.

Theorem 3 (Lemma-Specific Horn Upper-Bound)

Let a monotonic predicate p over input A, a monotonic definition \(\varSigma _{\textbf{p}}^{+}\) and a lemma in the form \(\mathcal {M}_A\Rightarrow \overline{\textbf{p}}\) be given. We denote X as the set of auxiliary variables: \(var(\varSigma _{\textbf{p}}^{+}) \setminus (A \cup var(p))\). The lemma \(\mathcal {M}_A\Rightarrow \overline{\textbf{p}}\) is in the theory of \(\overline{\textbf{p}}\) if and only if there exists an assignment \(\mathcal {M}_X\) on X such that: (1) \(\varSigma _{\textbf{p}}^{+}|_{\bar{\textbf{p}} \cup \mathcal {M}_X\cup \mathcal {M}_A}\) is satisfiable and (2) \((\varSigma _{\textbf{p}}^{+}|_{\bar{\textbf{p}} \cup \mathcal {M}_X} \Rightarrow \bar{\textbf{p}}) \vdash _{1}(\mathcal {M}_A\Rightarrow \bar{\textbf{p}})\).

(Proof in the extended paper.) Theorem 3 states that a lemma-specific Horn upper-bound for a theory lemma \(\mathcal {M}_A\Rightarrow \overline{\textbf{p}}\) can be constructed by instantiating the monotonic definition using a “witness” assignment \(\mathcal {M}_X\). The witness could be obtained by performing SAT solving on the formula \(\varSigma _{\textbf{p}}^{+}|_{\mathcal {M}_A^+\cup \overline{\textbf{p}}}\) (where \(\mathcal {M}_A^+\) is the extension of \(\mathcal {M}_A\) that assigns the unassigned input variables in A to \(\top \) (Sec. 2.2)). However, in practice, a better approach is to modify the SMMT solver to produce the witness during the derivation of theory lemmas. In Section 5, we provide examples of witnesses for commonly used monotonic predicates.

Note that the witness is not part of the trusted foundation for the proof. An incorrect witness might not support verification of a theory lemma, but if a theory lemma is verified using a specific witness \(\mathcal {M}_X\), Theorem 3 guarantees that the lemma is valid.

Example 4

Continuing the example, let a theory lemma \(L := \textbf{c} \vee \textbf{d} \vee \overline{\mathbf {reach_s^{t}}}\) be given. To derive a lemma-specific Horn upper-bound for \(\varSigma _{\overline{\mathbf {reach_s^{t}}}}\), we first obtain a witness \(\mathcal {M}_X\) by finding a satisfying assignment to the formula \(\varSigma _{\mathbf {reach_{s}^{t}}}^{h} \wedge M \wedge \overline{\mathbf {reach_{s}^{t}}}\), where \(M := \{\textbf{a}, \textbf{b}, \overline{\textbf{c}}, \overline{\textbf{d}}, \textbf{e}, \textbf{f}, \textbf{g}, \textbf{h}\}\) (by assigning the unassigned input variables in L to \(\top \)). Since M is a complete assignment to the edge variables, the graph is fully specified, and a suitable witness \(\mathcal {M}_X\) can be efficiently computed using a standard graph-reachability algorithm, to compute the reachability status of each vertex. The witness \(\mathcal {M}_X\) is \(\{ \textbf{s}, \textbf{v1}, \textbf{v2}, \overline{\textbf{v3}}, \overline{\textbf{v4}}, \overline{\textbf{t}}\}\). Following the construction in Theorem 3, the formula \(\varSigma _{\mathbf {reach_s^t}}^{h}|_{\,\overline{\mathbf {reach_s^t}} \cup \mathcal {M}_X}\) simplifies to two (unit) clauses: \(\overline{\textbf{c}}\) and \(\overline{\textbf{d}}\) (from clauses (2) and (6) in Ex. 1), which can be visualized as the cut in Fig. 2 (right). The lemma-specific Horn upper bound \(\varSigma _{\mathbf {reach_s^t}}^{h}|_{\,\overline{\mathbf {reach_s^t}} \cup \mathcal {M}_X} \Rightarrow \overline{\mathbf {reach_s^t}}\) is, therefore, \(\overline{\textbf{c}}\wedge \overline{\textbf{d}}\Rightarrow \overline{\mathbf {reach_s^t}}\), which in this example is already CNF, but more generally, we would introduce two literals to encode the implication: \(\{ \textbf{c} \vee \overline{\mathbf {l_1}}, \; \textbf{d} \vee \overline{\mathbf {l_2}}, \; \mathbf {l_1}\vee \mathbf {l_2} \vee \overline{\mathbf {reach_s^t}} \}\). The lemma-specific Horn upper-bound is dual-Horn and implies the theory lemma L by unit propagation.
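The construction behind this example can be sketched generically as follows (illustrative Python, not MonoProof's code; the clause representation, the selector-variable encoding of the implication, and the function names are our own assumptions). It restricts the monotonic definition by the witness and the negated predicate atom, then encodes the implication from the residual clauses to \(\overline{\textbf{p}}\):

```python
def reduce_cnf(clauses, assignment):
    """Compute F|_M: drop satisfied clauses and delete falsified literals."""
    reduced = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                                 # clause satisfied by M
        reduced.append([lit for lit in clause if -lit not in assignment])
    return reduced

def lemma_specific_bound(mono_def, pred_var, witness, fresh_vars):
    """mono_def: clauses of a monotonic definition of the positive atom p;
    pred_var: the variable of p; witness: the literals of M_X over the
    auxiliary variables; fresh_vars: an iterator of unused variable ids
    (e.g., itertools.count(n)) used for the selector variables.
    Returns clauses encoding (mono_def |_{p-bar, M_X}) => p-bar."""
    instantiation = set(witness) | {-pred_var}       # the assignment M_X u {p-bar}
    residual = reduce_cnf(mono_def, instantiation)   # clauses left over the inputs A
    selectors, encoding = [], []
    for clause in residual:
        s = next(fresh_vars)                         # selector s_i with s_i => not C_i
        selectors.append(s)
        encoding.extend([-s, -lit] for lit in clause)
    encoding.append(selectors + [-pred_var])         # p => some residual clause is falsified
    return encoding
```

In Ex. 4, the residual clauses are the unit clauses \(\overline{\textbf{c}}\) and \(\overline{\textbf{d}}\), and this construction yields exactly the three clauses shown above.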

Finally, we construct the proof-specific Horn definition by combining the lemma-specific Horn upper-bounds of all lemmas in the proof obligations.

In summary, to efficiently verify SMMT theory lemmas, we propose the following approach: (1) define the propositional definitions (in CNF) for the atoms of theory predicates; (2) transform the definitions into monotonic definitions offline; (3) during proof checking, approximate a proof-specific Horn definition (if not already Horn) from the constructed monotonic definition using the theory lemmas in the proof; (4) combine the proof-specific definitions with the CNF of the input instance and verify the proof via RUP. The only theory-specific, trusted foundation for the proof is the definition of the theory atoms. (The extended version of this paper contains a figure to help visualize this workflow.)

Example 5

Summarizing, the positive propositional definition \(\varSigma _{\mathbf {reach_s^t}}\) in Ex. 1 is already Horn, so is sufficient for verifying via DRAT any SMMT lemmas that imply \(\mathbf {reach_s^t}\). To verify lemmas that imply \(\overline{\mathbf {reach_s^t}}\), we can compute a proof-specific definition of \(\overline{\mathbf {reach_s^t}}\) from \(\varSigma _{\mathbf {reach_s^t}}\) using Theorem 3.

Remark 1

The only trusted basis of our approach is the set of propositional definitions of theory atoms. For the monotonic theories in Sec. 5, we consider the definitions intuitively understandable, and therefore sufficiently trustworthy. But to further increase confidence, propositional definitions can be validated using techniques from hardware validation/verification, e.g., simulation to sanity-check general behavior, equivalence checking against known-good circuits, etc.

5 Example Propositional Definitions

In this section, we illustrate the monotonic definitions for the most commonly used monotonic predicates. Due to space constraints, we present only graph reachability here in detail, and only sketch bit-vector comparison and summation, and max-flow. Full definitions for those theories are in the extended version of this paper.

Graph Reachability: Given a graph \(G = (V, E)\) where V and E are sets of vertices and edges, respectively, as discussed in Sect. 2, the graph reachability theory contains the reachability predicate \(reach_{u}^{v}\) for \(u, v \in V\) over input \(e_1, e_2 \ldots e_n \in E\). For convenience, we refer to the positive edge atom for the edge from vertex i to vertex j as \(\textbf{e}_{i \rightarrow j}\). The predicate is positively monotonic for E, and the monotonic definition for the positive predicate atom \(\mathbf {reach^v_u}\) contains the clauses:

  1.

    \(\overline{reach^{i}} \vee \overline{\textbf{e}_{i \rightarrow j}} \vee reach^{j}\) for every edge \(e_{i \rightarrow j} \in E\), and the unit clause \(reach^{u}\)

  2.

    \(\overline{reach^{v}} \vee \mathbf {reach^v_u}\)

The monotonic definition introduces a reachability atom \(reach^{i}\) for every \(i \in V\) and asserts that u is reachable from itself. For every edge (i, j), if the edge is enabled (\(\textbf{e}_{i \rightarrow j}\)) and i is reachable (\(reach^{i}\)), then j must also be reachable (\(reach^{j}\)). The predicate atom \(\mathbf {reach^v_u}\) is implied by the reachability of v (\(reach^v\)). The definition is monotonic since it only contains negative edge atoms. Moreover, the definition is already a Horn definition and can be used directly for proving theory lemmas in the theory of \(\mathbf {reach^v_u}\) without the need for transformation into a proof-specific Horn definition. The size of the definition is O(|E|).

Instead of defining the monotonic definition for the negative predicate atom \(\overline{\mathbf {reach^v_u}}\), we construct its proof-specific definition from the monotonic definition of the positive predicate atom \(\mathbf {reach^v_u}\). For each theory lemma in the proof, the witness for constructing the lemma-specific Horn upper-bound is the reachability status (\(reach^{i}\)) of every vertex \(i \in V\), which is efficiently computed in the SMMT solver using standard graph-reachability algorithms.
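For concreteness, here is a small sketch (illustrative only; the variable maps and function names are our own assumptions, not MonoProof's API) that generates this Horn definition from a symbolic graph and computes the per-vertex reachability witness used to instantiate the negative atom's proof-specific definition:

```python
from collections import defaultdict, deque

def reach_horn_definition(edges, u, v, edge_var, reach_var, pred_var):
    """Clauses of the (already Horn) monotonic definition of reach_u^v.
    edge_var[(i, j)] and reach_var[i] map edges/vertices to variable ids;
    pred_var is the variable of the predicate atom; literals are signed ints."""
    clauses = [[reach_var[u]]]                       # u reaches itself
    for (i, j) in edges:                             # reach_i & e_(i->j) -> reach_j
        clauses.append([-reach_var[i], -edge_var[(i, j)], reach_var[j]])
    clauses.append([-reach_var[v], pred_var])        # reach_v -> predicate atom
    return clauses

def reachability_witness(vertices, enabled_edges, u):
    """Reachability status of every vertex under a concrete edge assignment;
    this is the witness M_X used to instantiate the definition for the
    negative predicate atom."""
    adj = defaultdict(list)
    for (i, j) in enabled_edges:
        adj[i].append(j)
    reached, queue = {u}, deque([u])
    while queue:
        i = queue.popleft()
        for j in adj[i]:
            if j not in reached:
                reached.add(j)
                queue.append(j)
    return {i: (i in reached) for i in vertices}
```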

Bit-Vector Comparison (sketch): The positive definition is just the Tseitin encoding of a typical bit-vector comparison circuit, with some simplification due to being one-sided: For each bit position i, we introduce auxiliary variables \(ge_i\) and \(gt_i\), which indicate that the more-significant bits from this position have already determined vector \(\vec {a}\) to be \(\ge \) or > \(\vec {b}\), respectively. Simple clauses compute \(ge_{i-1}\) and \(gt_{i-1}\) from \(ge_i\) and \(gt_i\) and the bits at position \(i-1\) of \(\vec {a}\) and \(\vec {b}\). The negative definition is similar. These are both Horn, so can be used without further transformation into proof-specific Horn definitions.

Bit-Vector Summation and Comparison (sketch): These are basically Tseitin encodings of ripple-carry adders, combined with the comparison theory above — using Def. 6 to handle the fact that the Tseitin encodings of the XOR gates in the adders are non-monotonic with respect to the input bit-vectors. The resulting propositional definitions are not Horn, so we use witnesses to construct lemma-specific Horn definitions. The witnesses come from the SMMT solver maintaining lower and upper bounds on the possible values of the bit-vectors, e.g., a witness for \(\sum {\vec {A}}\ge \sum {\vec {B}}\) consists of lower bounds for the vectors in \(\vec {A}\) and upper bounds for the vectors in \(\vec {B}\) such that their sums make the inequality true. (Mutatis mutandis for the negative witness.)

Max-Flow (sketch): For the positive definition (that the max-flow is at least some target value), we introduce auxiliary bit-vectors to capture the flow assigned to each edge. We use the bit-vector theories to ensure that the flows do not exceed the edge capacities, that each node’s (except the source) outgoing flows do not exceed the incoming flows (equality is unnecessary due to the one-sidedness), and that the flow to the sink meets or exceeds the target value. For the negative definition, we exploit the famous max-flow/min-cut duality. We introduce an auxiliary variable \(incut_e\) for each edge. We use the graph reachability theory to ensure that the edges in the cut separate the source from the sink, and the bit-vector summation theory to ensure that the capacity of the cut does not exceed the target max-flow value. Neither the positive nor the negative definition is Horn, so both require instantiation-based upper-bounds. The witnesses are the flow values or the cuts, and are easily computed by the SMMT solver.
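As an illustration of how such a cut witness can be extracted, here is a self-contained sketch (illustrative Python under our own representation assumptions, not MonoSAT's implementation) that computes a minimum s-t cut with the Edmonds-Karp algorithm and returns the edges crossing the cut:

```python
from collections import defaultdict, deque

def min_cut_witness(capacity, source, sink):
    """capacity: dict {(u, v): int} over the enabled edges with their capacities.
    Returns the edges of a minimum s-t cut (the 'incut' witness): run
    Edmonds-Karp to a maximum flow, then take the original edges leaving the
    set of nodes still reachable from the source in the residual graph."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), cap in capacity.items():
        residual[(u, v)] += cap
        adj[u].add(v)
        adj[v].add(u)                        # reverse edges for the residual graph

    def bfs_parents():
        parent, queue = {source: None}, deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    if v == sink:
                        return parent
                    queue.append(v)
        return parent

    while True:
        parent = bfs_parents()
        if sink not in parent:               # no augmenting path: flow is maximum
            reachable = set(parent)          # source side of the minimum cut
            return [(u, v) for (u, v) in capacity
                    if u in reachable and v not in reachable]
        bottleneck, v = float('inf'), sink   # otherwise augment along the path
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[(parent[v], v)])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            residual[(parent[v], v)] -= bottleneck
            residual[(v, parent[v])] += bottleneck
            v = parent[v]
```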

6 Experimental Evaluation

To evaluate our proposed method, we implemented it as shown earlier in Fig. 1 (Sec. 3). We call our implementation MonoProof (available at https://github.com/NickF0211/MonoProof).

The two basic questions of any proof-generating SAT/SMT solver are: (1) how much overhead does the support for proofs add to the solving time, and (2) how efficiently can a proof be prepared from the proof log, and verified? For the first question, we compare the runtime of unmodified MonoSAT versus the MonoSAT that we have extended to produce proof certificates. For the second question, we need a baseline of comparison. MonoProof is the first proof-generating SMMT solver, so there is no obvious comparison. However, since SMMT theories are finite-domain, and bit-blasting (i.e., adding clauses that encode the theory predicates to the problem instance and solving via a propositional SAT solver) is a standard technique for finite-domain theories, we compare against bit-blasting. Arguably, this comparison is unfair, since MonoSAT outperforms bit-blasting when solving SMMT theories [9]. Thus, as an additional baseline, we propose an obvious hybrid of SMMT and bit-blasting, which we dub Lemma-Specific Bit-Blasting (LSBB): we run MonoProof until the core theory lemmas have been extracted, benefiting from MonoSAT’s fast solving time, but then instead of using our techniques from Sec. 4, we bit-blast only the core theory lemmas.

We ran experiments on 3 GHz AMD Epyc 7302 CPUs with 512GB of DDR4 RAM, with a timeout of 1 hour and memory limit of 64GB. For the bit-blasting SAT solver, we use the state-of-the-art SAT solver Kissat [13]. In all cases, the proof is verified with standard drat-trim [37].

6.1 Benchmarks

We wish to evaluate scalability on real, industrial problems arising in practice. MonoProof has successfully generated and verified industrial UNSAT proofs for a set of hard, unsatisfiable Tiros [2, 8] queries collected in production use at AWS over a multi-week period. However, these instances are proprietary and cannot be published, making them irreproducible by others. Instead, we evaluate on two sets of benchmarks that we can publicly release (also at https://github.com/NickF0211/MonoProof):

Network Reachability Benchmarks. These are synthetic benchmarks that mimic the real-world problems solved by Tiros, without disclosing any proprietary information. Network reachability is the problem of determining whether a given pair of network resources (source and destination) can communicate. The problem is challenging because network components can intercept, transform, and optionally re-transmit packets traveling through the network (e.g., a firewall or a NAT gateway). Network components come in various types, each with their own complex behaviors and user-configurable network controls. In these benchmarks, we abstract to two types of intermediate components: simple and transforming. Simple components relay an incoming packet as long as its destination address belongs to a certain domain, expressed in terms of a network CIDR (Classless Interdomain Routing), e.g., 10.0.0.0/24. Transforming network components intercept an incoming packet and rewrite the source address and ports to match their own before re-transmitting it. The simple network components are akin to subnets, VPCs, and peering connections; transforming network components are a highly abstracted version of load balancers, NAT gateways, firewalls, etc. The SMT encoding uses the theories of bit vectors and of graph reachability. The network packets are symbolically represented using bit vectors, and the network is modeled as a symbolic graph. Network behavior is modeled as logical relations between packets and elements in the network graph. Unsatisfiability of a query corresponds to unreachability in the network: for all possible packet headers that the source could generate, and for all possible paths connecting the source to the destination, the combined effect of packet transformations and network controls placed along the path cause the packet to be dropped from the network before it reaches its destination.

We generated 24 instances in total, varying the size and structure of the randomly generated network. Graph sizes ranged from 1513 to 15524 (average 5485) symbolic edges.

Escape Routing Benchmarks. Escape routing is the problem of routing all the signals from a component with extremely densely packed I/O connections (e.g., the solder bumps on a Ball-Grid Array (BGA)) to the periphery of the component, where other routing techniques can be used. For a single-layer printed circuit board (PCB), escape routing is optimally solvable via max-flow, but real chips typically require multiple layers. The multi-layer problem is difficult because the vias (connections between layers) are wider than the wires on a layer, disrupting what routes are possible on that layer. Bayless et al. [11] proposed a state-of-the-art solution using SMMT: max-flow predicates determine routability for each layer on symbolic graphs, whose edges are enabled/disabled by logical constraints capturing the design rules for vias.

In [11], 24 commercial BGAs were analyzed under two different via technologies and different numbers of layers. For our benchmark set, we select all configurations where the provable minimum number of layers was reported. This results in 24 unsatisfiable SMMT problem instances (routing with one fewer layer than the minimum), which exercise the bit-vector and max-flow theories. Graph sizes ranged from 193994 to 3084986 (average 717705) symbolic edges.

6.2 Results

Returning to the two questions for our evaluation:

Fig. 3. Cactus Plots for Solving (left) and Proof Preparation & Checking (right). Each point is the runtime for one instance, so the plot shows the number of instances (x-axis) that ran in less than any time bound (y-axis). BB denotes standard bit-blasting; LSBB, lemma-specific bit-blasting; and MonoProof is our new method. The left graph shows that MonoProof (and LSBB, which uses MonoProof’s solver) is vastly faster than bit-blasting for solving the instances. The right graph shows that MonoProof is also vastly faster than bit-blasting for proving the result; LSBB timed out on all proofs.

  1.

    The solver overhead of our proof certificate generation is minimal. On the network reachability benchmarks, the geometric mean (GM) runtime overhead was 14.10% (worst case 28.8%). On the escape routing benchmarks, the GM runtime overhead was only 1.11% (the worst case 5.71%). (The lower overhead is because MonoSAT spent more time learning theory lemmas vs. recording them in the proof.) The overall GM runtime overhead across all benchmarks was 7.41%. These overhead figures are comparable to state-of-the-art, proof-generating SAT solvers, which is not surprising, since our proof certificates are essentially the same as a DRAT proof certificate in SAT. This compares favorably with the solver overhead of heavier-weight, richer, and more expressive SMT proof certificates like LFSC [34].

  2.

    MonoProof’s time to prepare and check a proof of unsatisfiability is markedly faster than standard bit-blasting or lemma-specific bit-blasting. Fig. 3 summarizes our results. (A full table is in the extended version of this paper.) The left graph shows solving times (with proof logging). Since the proof-logging overhead is so low for both bit-blasting (Kissat generating DRAT) and MonoProof, these results are consistent with prior work showing the superiority of the SMMT approach for solving [9]. Note that bit-blasting (BB) solved all 24 network reachability instances, but failed to solve any of the 24 escape routing instances in the 1hr timeout. Lemma-specific bit-blasting (LSBB) and MonoProof share the same solving and proof-logging steps. The right graph shows proof-checking times (including BackwardCheck and proof-specific Horn upper-bound construction for MonoProof). Here, BB could proof-check only 11/24 reachability instances that it had solved. Restricting to only the 11 instances that BB proof-checked, MonoProof was at least \(3.7\times \) and geometric mean (GM) \(10.2\times \) faster. LSBB timed out on all 48 instances. Summarizing, MonoProof solved and proved all 48 instances, whereas BB managed only 11 instances, and LSBB failed to prove any.

The above results were with our modified BackwardCheck enabled (drat-trim-theory in Fig. 1). Interestingly, with BackwardCheck disabled, MonoProof ran even faster on 37/48 benchmarks (min speedup \(1.03\times \), max \(6.6\times \), GM \(1.7\times \)). However, with BackwardCheck enabled, MonoProof ran faster on 10/48 cases (min speedup \(1.02\times \), max \(7.9\times \), GM \(1.6\times \)), and proof-checked one additional instance (69 sec. vs. 1hr timeout). The modified BackwardCheck is a useful option to have available.

7 Conclusion

We have introduced the first efficient proof-generating method for SMMT. Our approach uses propositional definitions of the theory semantics and derives compact, proof-specific Horn-approximations sufficient to verify the theory lemmas via RUP. The resulting pure DRAT proofs are checkable via well-established (and even machine verified) tools. We give definitions for the most common SMMT theories, and experimental results on industrial-scale problems demonstrate that the solving overhead is minimal, and the proof preparation and checking times are vastly faster than the alternative of bit-blasting.

The immediate line of future work is to support additional finite-domain monotonic theories, such as richer properties for pseudo-Boolean reasoning. We also aim to apply our approach to support monotonic theories beyond finite domains. In addition, we plan to extend our proof support to emerging proof formats such as LRAT [18] and FRAT [3] that enable faster proof checking.