Abstract
We present an algorithm and tool to convert derivations from the powerful recently proposed PR proof system into the widely used DRAT proof system. The PR proof system allows short proofs without new variables for some hard problems, while the DRAT proof system is supported by top-tier SAT solvers. Moreover, there exist efficient, formally verified checkers of DRAT proofs. Thus our tool can be used to validate PR proofs using these verified checkers. Our simulation algorithm uses only one new Boolean variable and the size increase is at most quadratic in the size of the propositional formula and the PR proof. The approach is evaluated on short PR proofs of hard problems, including the well-known pigeonhole and Tseitin formulas. Applying our tool to PR proofs of pigeonhole formulas results in short DRAT proofs, linear in size with respect to the size of the input formula, which have been certified by a formally verified proof checker.
Supported by the National Science Foundation (NSF) under grant CCF-1526760 and by the Austrian Science Fund (FWF) under project S11409-N23 (RiSE).
1 Introduction
Satisfiability (SAT) solvers are powerful tools for many applications in formal methods and artificial intelligence [3, 9]. Arguably the most effective new techniques in recent years are based on in-processing [21, 25]: interleaving preprocessing techniques and conflict-driven clause learning (CDCL) [26]. Several powerful in-processing techniques, such as symmetry breaking [1, 6] and blocked clause addition [23], do not preserve logical equivalence and cannot be expressed compactly using classical resolution proofs [30]. The \(\mathsf {RAT}\) proof system [14] was designed to express such techniques succinctly and facilitate efficient proof validation. All top-tier SAT solvers support proof logging in the \(\mathsf {DRAT}\) proof system [12], which extends the \(\mathsf {RAT}\) proof system with clause deletion.
More recently, a groundbreaking paper [8] presented at TACAS’17 showed how to efficiently certify huge propositional proofs of unsatisfiability using proof checkers that are formally verified by theorem provers, such as ACL2 [7], Coq [7, 8], and Isabelle/HOL [24]. These developments are clearly a breakthrough in SAT solving. They allow us to place the same trust in the correctness of the results produced by a highly tuned state-of-the-art SAT solver as in claims deduced with proof-producing theorem provers. We can now use SAT solvers as part of such fully trusted proof-generating systems.
On the other hand, with even more powerful proof systems we can produce even smaller proofs. The goal in increasing the power of proof systems is to compactly cover additional existing reasoning techniques that are not yet covered, e.g., algebraic reasoning, but also to provide a framework for investigating new in-processing techniques. If proofs are required, such increased power is a necessary condition for solving certain formulas faster; however, it makes proof checking more challenging. The recently proposed \(\mathsf {PR}\) proof system [17] (best paper at CADE’17) is such a generalization of the \(\mathsf {RAT}\) proof system, and actually an instance of the most general way of defining a clausal proof system based on clause redundancy.
There are short \(\mathsf {PR}\) proofs without new variables for some hard formulas [17]. Some of them can be found automatically [18]. The \(\mathsf {PR}\) proof system can therefore reveal new powerful in-processing techniques. Short proofs for hard formulas in the \(\mathsf {RAT}\) proof system likely require many new variables, making it difficult to find them automatically. The question of whether \(\mathsf {PR}\) proofs can be efficiently converted into proofs in the \(\mathsf {RAT}\) and \(\mathsf {DRAT}\) proof systems has been open. In this paper, we give a positive answer and present a conversion algorithm that in the worst case results in a quadratic blowup in size. Surprisingly, only a single new Boolean variable is required to convert \(\mathsf {PR}\) proofs into \(\mathsf {DRAT}\) proofs.
At this point there exists only an unverified checker, written in C, to validate \(\mathsf {PR}\) proofs. In order to increase trust in the correctness of \(\mathsf {PR}\) proofs, we implemented a tool, called PR2DRAT, to convert \(\mathsf {PR}\) proofs into \(\mathsf {DRAT}\) proofs, which in turn can be validated using verified proof checkers. Thanks to various optimizations, the size increase during conversion is rather modest on available \(\mathsf {PR}\) proofs, making this a useful certification approach in practice.
Contributions

We show that the \(\mathsf {RAT}\) and \(\mathsf {DRAT}\) proof systems are as strong as the recently introduced \(\mathsf {PR}\) proof system by giving an efficient algorithm that simulates \(\mathsf {PR}\) proofs with \(\mathsf {RAT}\) and \(\mathsf {DRAT}\) proofs.

We implemented a proof conversion tool including various optimizations, which allow us to obtain linear-size \(\mathsf {DRAT}\) proofs from \(\mathsf {PR}\) proofs for the well-known pigeonhole formulas. These new \(\mathsf {DRAT}\) proofs are significantly smaller than the most compact previously known \(\mathsf {DRAT}\) proofs for these formulas.

We validated short \(\mathsf {PR}\) proofs of hard formulas by converting them into \(\mathsf {DRAT}\) proofs and certified these using a formally verified proof checker.
Structure
After preliminaries in Sect. 2 we elaborate on clausal proof systems in Sect. 3, also taking the idea of deletion steps into account. Then Sect. 4 describes and analyzes our simulation algorithm. In Sect. 5 we show how to optimize our new algorithm for special cases, followed by alternative simulation algorithms in Sect. 6. Experiments are presented in Sect. 7 before we conclude in Sect. 8.
2 Preliminaries
Below we present the most important background concepts related to this paper.
Propositional Logic. Propositional formulas in conjunctive normal form (CNF) are the focus of this paper. A literal is either a variable x (a positive literal) or the negation \(\overline{x}\) of a variable x (a negative literal). The complementary literal \(\bar{l}\) of a literal l is defined as \(\bar{l} = \overline{x}\) if \(l = x\) and \(\bar{l} = x\) if \(l = \overline{x}\). A clause C is a disjunction of literals. A formula F is a conjunction of clauses. For a literal, clause, or formula \(\phi \), \( var (\phi )\) denotes the variables in \(\phi \). We treat \( var (\phi )\) as a variable if \(\phi \) is a literal, and as a set of variables otherwise.
Satisfiability. An assignment is a (partial) function from a set of variables to the truth values 1 (true) and 0 (false). An assignment is total w.r.t. a formula if it assigns a truth value to all variables occurring in the formula. We extend a given assignment \(\alpha \) to literals, clauses, and formulas in the natural way. Let \(\phi \) be a literal, clause, or formula. Then \(\phi \) is satisfied if \(\alpha (\phi ) = 1\) and falsified if \(\alpha (\phi ) = 0\); otherwise \(\phi \) is unassigned. In particular, \(\overline{x}\) is satisfied if x is falsified by \(\alpha \), and vice versa. A clause is satisfied by \(\alpha \) if it contains a literal that is satisfied by \(\alpha \), and falsified if all its literals are falsified by \(\alpha \). Finally, a formula is satisfied by \(\alpha \) if all its clauses are satisfied by \(\alpha \). We often denote assignments by the sequences of literals they satisfy. For instance, \(x\overline{y}\) denotes the assignment that assigns \(1\) to x and \(0\) to y. For an assignment \(\alpha \), \( var (\alpha )\) denotes the variables assigned by \(\alpha \). Further, \(\alpha _l\) denotes the assignment obtained from \(\alpha \) by flipping the truth value of literal l, assuming it is assigned. A formula is satisfiable if there exists an assignment that satisfies it, and unsatisfiable otherwise.
Formula Simplification. We denote the empty clause by \(\bot \) and the valid and always satisfied clause by \(\top \). A clause is a tautology if it contains a literal l and its negation \(\bar{l}\). Given an assignment \(\alpha \) and a clause C, we define \(C|_{\alpha } = \top \) if \(\alpha \) satisfies C; otherwise, \(C|_{\alpha }\) denotes the result of removing from C all the literals falsified by \(\alpha \). For a formula F, we define \(F|_{\alpha } = \{C|_{\alpha } \mid C \in F \text { and } C|_{\alpha } \ne \top \}\). We say that an assignment \(\alpha \) touches a clause C if \( var (\alpha ) \cap var (C) \ne \emptyset \). A unit clause is a clause with only one literal. The result of applying the unit clause rule to a formula F is the formula \(F|_{l}\), where (l) is a unit clause in F. The iterated application of the unit clause rule to a formula, until no unit clauses are left, is called unit propagation. If unit propagation yields the empty clause \(\bot \), we say that it derived a conflict. Given two clauses \((l \vee C)\) and \((\bar{l} \vee D)\), their resolvent is \(C \vee D\). If further \(D \subseteq C\), self-subsuming literal elimination (SSLE) allows removing l from \((l \vee C)\). Notice that C is the resolvent of \((l \vee C)\) and \((\bar{l} \vee D)\). So an SSLE step can be seen as two operations: learning the resolvent C, followed by the removal of \((l \vee C)\), which is subsumed by C. The reverse of SSLE is self-subsuming literal addition (SSLA), which can add a literal l to a clause C in the presence of a clause \((\bar{l} \vee D)\) with \(D \subseteq C\). The notion of SSLE first appeared in [10] and is a special case of asymmetric literal elimination (ALE), which in turn is the inverse of asymmetric literal addition (ALA) [16].
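To make the restriction and unit propagation operations concrete, here is a minimal Python sketch (our illustration, not part of the paper's tooling); literals are encoded DIMACS-style as nonzero integers, with -v denoting the negation of variable v:

```python
def restrict(formula, alpha):
    """Compute the restriction of a CNF formula under assignment alpha:
    satisfied clauses are dropped, falsified literals are removed."""
    result = []
    for clause in formula:
        if any(lit in alpha for lit in clause):
            continue  # clause satisfied by alpha
        result.append([lit for lit in clause if -lit not in alpha])
    return result


def unit_propagate(formula):
    """Iterate the unit clause rule until fixpoint; return the set of
    propagated literals, or None if the empty clause (a conflict) is derived."""
    assignment = set()
    changed = True
    while changed:
        changed = False
        for clause in formula:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            remaining = [lit for lit in clause if -lit not in assignment]
            if not remaining:
                return None  # all literals falsified: conflict derived
            if len(remaining) == 1:
                assignment.add(remaining[0])  # unit clause rule
                changed = True
    return assignment
```

For instance, restrict([[1, 2], [-1, 2]], {-1}) yields [[2]], and unit_propagate([[1], [-1, 2], [-2]]) derives a conflict.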
Clause C is blocked on literal \(l \in C\) w.r.t. a formula F, if all resolvents of C and \(D \in F\) with \(\bar{l} \in D\) are tautologies. If a clause \(C \in F\) is blocked w.r.t. F, C can be removed from F while preserving satisfiability. If a clause \(C \notin F\) is blocked w.r.t. F, then C can be added to F while preserving satisfiability.
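This definition translates directly into code. The following sketch (illustrative only, DIMACS-style integer literals) checks blockedness by inspecting every resolvent on l:

```python
def is_blocked(c, l, formula):
    """Check whether clause C is blocked on literal l (with l in C) w.r.t. F:
    every resolvent of C with a clause containing -l must be a tautology."""
    assert l in c
    for d in formula:
        if -l not in d:
            continue  # no resolution on l possible with this clause
        resolvent = [m for m in c if m != l] + [m for m in d if m != -l]
        if not any(-m in resolvent for m in resolvent):
            return False  # non-tautological resolvent found
    return True
```

For example, (x ∨ y) is blocked on x w.r.t. the formula (x̄ ∨ ȳ), since the only resolvent, (y ∨ ȳ), is a tautology; it is not blocked w.r.t. (x̄ ∨ y).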
Formula Relations. Two formulas are logically equivalent if they are satisfied by the same assignments. Two formulas are satisfiability equivalent if they are either both satisfiable or both unsatisfiable. Given two formulas F and \(F'\), we denote by \(F \vDash F'\) that F implies \(F'\), i.e., all assignments satisfying F also satisfy \(F'\). Furthermore, by \(F \vdash _1 F'\) we denote that for every clause \((l_1 \vee \dots \vee l_n) \in F'\), unit propagation on \(F \wedge (\bar{l}_1) \wedge \dots \wedge (\bar{l}_n)\) derives a conflict. If \(F \vdash _1 F'\), we say that F implies \(F'\) through unit propagation. For example, \((x) \wedge (y) \vdash _1 (x \vee z) \wedge (y)\), since unit propagation of the unit clauses \((\overline{x})\) and \((\overline{z})\) derives a conflict with (x), and unit propagation of \((\overline{y})\) derives a conflict with (y).
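The relation \(F \vdash _1 F'\) is mechanically checkable. The following self-contained sketch (again an illustration, with DIMACS-style integer literals) adds the negated literals of each clause of F′ as units and propagates:

```python
def unit_propagate(formula):
    """Iterate the unit clause rule; return the propagated literals,
    or None if a conflict (the empty clause) is derived."""
    assignment = set()
    changed = True
    while changed:
        changed = False
        for clause in formula:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            remaining = [lit for lit in clause if -lit not in assignment]
            if not remaining:
                return None  # conflict derived
            if len(remaining) == 1:
                assignment.add(remaining[0])
                changed = True
    return assignment


def implies_by_up(f, f_prime):
    """Check F |-1 F': for each clause of F', propagating its negated
    literals together with F must derive a conflict."""
    for clause in f_prime:
        if unit_propagate(f + [[-lit] for lit in clause]) is not None:
            return False
    return True
```

On the example above, with x, y, z encoded as 1, 2, 3, implies_by_up([[1], [2]], [[1, 3], [2]]) returns True.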
3 Clausal Proof Systems
In this section, we introduce a formal notion of clause redundancy and demonstrate how it provides the basis for clausal proof systems. We start by introducing clause redundancy [22]:
Definition 1
A clause C is redundant w.r.t. a formula F if F and \(F \cup \{C\}\) are satisfiability equivalent.
For instance, the clause \(C = (x \vee y)\) is redundant w.r.t. \(F = (\overline{x} \vee \overline{y})\) since F and \(F \cup \{C\}\) are satisfiability equivalent (although they are not logically equivalent). Since this notion of redundancy allows us to add redundant clauses to a formula without affecting its satisfiability, it gives rise to clausal proof systems.
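For formulas this small, satisfiability equivalence can be confirmed by brute force. A hedged sketch (illustration only; variables x and y encoded as 1 and 2):

```python
from itertools import product


def satisfiable(formula, num_vars):
    """Brute-force SAT check over all total assignments to variables 1..num_vars
    (DIMACS-style integer literals)."""
    for values in product([False, True], repeat=num_vars):
        def holds(lit):
            # literal is satisfied iff its variable's value matches its sign
            return values[abs(lit) - 1] == (lit > 0)
        if all(any(holds(lit) for lit in clause) for clause in formula):
            return True
    return False


f = [[-1, -2]]  # F = (¬x ∨ ¬y)
c = [1, 2]      # C = (x ∨ y)
```

Both satisfiable(f, 2) and satisfiable(f + [c], 2) hold, so F and F ∪ {C} are satisfiability equivalent; the assignment falsifying both x and y satisfies F but not F ∪ {C}, witnessing that they are not logically equivalent.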
Definition 2
For \(n \in \mathbb {N}\), a derivation of a formula \(F_n\) from a formula \(F_0\) is a sequence of n triples \((d_1, C_1,\omega _1),\dots , (d_n, C_n,\omega _n)\), where each clause \(C_i\) for \(1 \le i \le n\) is redundant w.r.t. \(F_{i-1} \setminus \{C_i\}\), with \(F_{i} = F_{i-1} \cup \{C_i\}\) if \(d_i = 0\) and \(F_{i} = F_{i-1} \setminus \{C_i\}\) if \(d_i = 1\). The assignment \(\omega _i\) acts as an (arbitrary) witness of the redundancy of \(C_i\) w.r.t. \(F_{i-1}\), and we call the number n of steps the length of the derivation. A derivation is a refutation of \(F_0\) if \(d_n = 0\) and \(C_n = \bot \). A derivation is a proof of satisfaction of \(F_0\) if \(F_n\) equals the empty formula.
If there exists such a derivation of a formula \(F'\) from a formula F, then F and \(F'\) are satisfiability equivalent. Further, a refutation of a formula F, as defined above, obviously certifies the unsatisfiability of F since any \(F'\) containing the empty clause is unsatisfiable. Note that at this point these \(\omega _i\) are still placeholders used in refinements, i.e., in the \(\mathsf {RAT}\) and \(\mathsf {PR}\) proof systems defined below, where these \(\omega _i\) are witnesses for the redundancy of \(C_i\) w.r.t. \(F_{i-1}\). In these specialized proof systems this redundancy can be checked efficiently, i.e., in polynomial time w.r.t. the size of \(C_i\), \(F_{i-1}\), and \(\omega _i\).
3.1 The \(\mathsf {RAT}\) Proof System
The \(\mathsf {RAT}\) proof system allows the addition of a redundant clause which is a so-called resolution asymmetric tautology [21] (\(\mathsf {RAT}\), defined below). Whether a clause is a \(\mathsf {RAT}\) can be checked efficiently. The following definition of \(\mathsf {RAT}\) is equivalent to the original one in [21], which is based on resolvents, using results from [17].
Definition 3
Let F be a formula, C a clause, and \(\alpha \) the smallest assignment that falsifies C. Then, C is a resolution asymmetric tautology (\(\mathsf {RAT}\)) with respect to F if there exists a literal \(l \in C\) such that \(F|_{\alpha } \vdash _1 F|_{\alpha _l}\). We say that C is a \(\mathsf {RAT}\) on l w.r.t. F. The empty clause \(\bot \) is a \(\mathsf {RAT}\) w.r.t. F iff \(F \vdash _1 \bot \).
Informally, \(F|_{\alpha } \vdash _1 F|_{\alpha _l}\) means that \(F|_{\alpha _l}\) is at least as satisfiable compared to \(F|_{\alpha }\). We know that \(\alpha _l\) satisfies C as \(l \in C\), thus \(F|_{\alpha _l} = (F \wedge C)|_{\alpha _l}\). Hence, if F has a satisfying assignment \(\beta \) that falsifies C, which necessarily is an extension of \(\alpha \), then it also satisfies \(F|_{\alpha }\), and thus there exists a satisfying assignment of F that satisfies C, obtained from \(\beta \) by flipping the assigned value of l.
Example 1
Let \(F = (x \vee y) \wedge (\overline{x} \vee y) \wedge (\overline{x} \vee z)\) and \(C = (x \vee \overline{z})\). Then, \(\alpha = \overline{x}z\) is the smallest assignment that falsifies C. Observe that C is a \(\mathsf {RAT}\) clause on literal x w.r.t. F. First, \(\alpha _x = xz\). Now, consider \(F|_{\alpha } = (y)\) and \(F|_{\alpha _x} = (y)\). Clearly, unit propagation on \(F|_{\alpha } \wedge (\overline{y})\) derives a conflict, thus \(F|_{\alpha } \vdash _1 F|_{\alpha _x}\). \(\square \)
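Example 1 can be replayed mechanically. The self-contained sketch below (our illustration; x, y, z encoded DIMACS-style as 1, 2, 3) checks Definition 3 by computing the two restrictions and testing \(\vdash _1\):

```python
def restrict(formula, alpha):
    """Drop clauses satisfied by alpha, remove literals falsified by alpha."""
    result = []
    for clause in formula:
        if any(lit in alpha for lit in clause):
            continue
        result.append([lit for lit in clause if -lit not in alpha])
    return result


def unit_propagate(formula):
    """Return the propagated literals, or None once a conflict is derived."""
    assignment, changed = set(), True
    while changed:
        changed = False
        for clause in formula:
            if any(lit in assignment for lit in clause):
                continue
            remaining = [lit for lit in clause if -lit not in assignment]
            if not remaining:
                return None
            if len(remaining) == 1:
                assignment.add(remaining[0])
                changed = True
    return assignment


def implies_by_up(f, f_prime):
    """Check F |-1 F'."""
    return all(unit_propagate(f + [[-lit] for lit in clause]) is None
               for clause in f_prime)


def is_rat_on(c, l, formula):
    """Definition 3: F restricted to alpha |-1 F restricted to alpha_l."""
    alpha = {-lit for lit in c}      # smallest assignment falsifying C
    alpha_l = (alpha - {-l}) | {l}   # flip the truth value of l
    return implies_by_up(restrict(formula, alpha), restrict(formula, alpha_l))
```

Here is_rat_on([1, -3], 1, [[1, 2], [-1, 2], [-1, 3]]) returns True, matching the example.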
In a \(\mathsf {RAT}\) derivation \((d_1, C_1,\omega _1),\dots , (d_n, C_n,\omega _n)\) all \(d_i\)’s are zero (additions). Let \(\alpha _i\) denote the smallest assignment that falsifies \(C_i\) and let \(l_i \in C_i\) be a literal such that \(C_i\) is a \(\mathsf {RAT}\) on \(l_i\) w.r.t. \(F_{i-1}\). Each witness \(\omega _i\) in a \(\mathsf {RAT}\) derivation equals \((\alpha _i)_{l_i}\), which is obtained from \(\alpha _i\) by flipping the value of \(l_i\).
3.2 The \(\mathsf {PR}\) Proof System
As discussed, the addition of \(\mathsf {PR}\) clauses (short for propagation-redundant clauses) to a formula can lead to short proofs for hard formulas without the introduction of new variables. Although \(\mathsf {PR}\) clauses, like \(\mathsf {RAT}\) clauses, are not necessarily implied by the formula, their addition preserves satisfiability [17]. The intuitive reason for this is that the addition of a \(\mathsf {PR}\) clause prunes the search space of possible assignments in such a way that there still remain assignments under which the formula is as satisfiable as under the pruned assignments.
Definition 4
Let F be a formula, C a nonempty clause, and \(\alpha \) the smallest assignment that falsifies C. Then, C is propagation redundant (\(\mathsf {PR}\)) with respect to F if there exists an assignment \(\omega \) which satisfies C, such that \(F|_{\alpha } \vdash _1 F|_{\omega }\).
The clause C can be seen as a constraint that “prunes” from the search space all assignments that extend \(\alpha \). Note again that in our setting assignments are in general partial functions. Since \(F|_{\alpha } \vdash _1 F|_{\omega }\) implies \(F|_{\alpha } \vDash F|_{\omega }\), every assignment that satisfies \(F|_{\alpha }\) also satisfies \(F|_{\omega }\), meaning that F is at least as satisfiable under \(\omega \) as it is under \(\alpha \). Moreover, since \(\omega \) satisfies C, it must disagree with \(\alpha \) on at least one variable. We refer to \(\omega \) as the witness, since it witnesses the propagation redundancy of the clause. Consider the following example from [17].
Example 2
Let \(F = (x \vee y) \wedge (\overline{x} \vee y) \wedge (\overline{x} \vee z)\), \(C = (x)\), and let \(\omega = xz\) be an assignment. Then, \(\alpha = \overline{x}\) is the smallest assignment that falsifies C. Now, consider \(F|_{\alpha } = (y)\) and \(F|_{\omega } = (y)\). Clearly, unit propagation on \(F|_{\alpha } \wedge (\overline{y})\) derives a conflict. Thus, \(F|_{\alpha } \vdash _1 F|_{\omega }\) and C is propagation redundant w.r.t. F. Notice that C is not a \(\mathsf {RAT}\) w.r.t. F as \(F|_{\alpha } \not \vdash _1 F|_{\alpha _x}\). \(\square \)
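Example 2 can likewise be checked programmatically. The following self-contained sketch (illustration only; x, y, z as 1, 2, 3) implements Definition 4:

```python
def restrict(formula, alpha):
    """Drop clauses satisfied by alpha, remove literals falsified by alpha."""
    result = []
    for clause in formula:
        if any(lit in alpha for lit in clause):
            continue
        result.append([lit for lit in clause if -lit not in alpha])
    return result


def unit_propagate(formula):
    """Return the propagated literals, or None once a conflict is derived."""
    assignment, changed = set(), True
    while changed:
        changed = False
        for clause in formula:
            if any(lit in assignment for lit in clause):
                continue
            remaining = [lit for lit in clause if -lit not in assignment]
            if not remaining:
                return None
            if len(remaining) == 1:
                assignment.add(remaining[0])
                changed = True
    return assignment


def implies_by_up(f, f_prime):
    """Check F |-1 F'."""
    return all(unit_propagate(f + [[-lit] for lit in clause]) is None
               for clause in f_prime)


def is_pr(c, omega, formula):
    """Definition 4: omega satisfies C and F|alpha |-1 F|omega."""
    alpha = {-lit for lit in c}  # smallest assignment falsifying C
    return (any(lit in omega for lit in c)
            and implies_by_up(restrict(formula, alpha), restrict(formula, omega)))
```

Here is_pr([1], {1, 3}, [[1, 2], [-1, 2], [-1, 3]]) returns True with witness ω = x z, matching the example.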
Most known types of redundant clauses are \(\mathsf {PR}\) clauses [17], including blocked clauses [23], setblocked clauses [22], resolution asymmetric tautologies, etc.
3.3 The Power of Deletion
The clausal proof system \(\mathsf {DRAT}\) [29] is the de facto standard for proofs of unsatisfiability (refutations) in practice. It extends \(\mathsf {RAT}\) by allowing the deletion of clauses. The main purpose of clause deletion is to reduce the computation cost of validating proofs of unsatisfiability. Note that SAT solvers not only learn clauses, but also aggressively delete clauses to speed up reasoning. Integrating deletion information in proofs is crucial to speed up proof checking.
In principle, while deleted clause information has to be taken into account to update the formula after a deletion step, one does not need to check the validity of clause deletion steps in order to refute a propositional formula. Simply removing deleted clauses during proof checking trivially preserves unsatisfiability.
Proofs of satisfiability only exist in proof systems that allow and enforce valid deletion steps, because they are required to reduce a formula to the empty formula. In case of propositional formulas, the notion of proofs of satisfiability is probably not useful as a satisfying assignment can be used to certify satisfiability. However, for richer logics, such as quantified Boolean formulas, the proof of satisfiability can be exponentially smaller compared to alternatives [19, 20].
4 Conversion Algorithm
This section presents our main algorithm, which describes how to convert a \(\mathsf {PR}\) derivation \((0,C_1,\omega _1), \dots , (0,C_n, \omega _n)\) of a formula \(F_n\) from a formula \(F_0\) into a \(\mathsf {DRAT}\) derivation \((d_1,D_1,\omega '_1)\), \(\dots \), \((d_m,D_m,\omega '_m)\) of \(G_m = F_n\) from \(G_0 = F_0\). Each \(\mathsf {PR}\) proof step adds a clause to the formula. Let \(G_0\) be a copy of \(F_0\) and \(F_{i} := F_{i-1} \wedge C_i\) for \(1 \le i \le n\). Each proof step in a \(\mathsf {DRAT}\) proof either deletes or adds a clause depending on whether \(d_j\) is 1 or 0 (respectively). For \(1 \le j \le m\) we either have \(G_{j} := G_{j-1} \setminus \{D_j\}\) if \(d_j\) is 1 or \(G_{j} := G_{j-1} \wedge D_j\) if \(d_j\) is 0.
Each single \(\mathsf {PR}\) derivation step \((0,C_i,\omega _i)\) is itself a \(\mathsf {PR}\) derivation of \(F_i\) from \(F_{i-1}\), and our conversion algorithm simply translates each such step separately into a \(\mathsf {DRAT}\) derivation of \(F_i\) from \(F_{i-1}\). The conversion of the whole \(\mathsf {PR}\) derivation is then obtained as the concatenation of these individual \(\mathsf {DRAT}\) derivations, which gives a \(\mathsf {DRAT}\) derivation of \(F_n\) from \(F_0\). We first offer an informal top-down description of converting a single \(\mathsf {PR}\) derivation step into a sequence of \(\mathsf {DRAT}\) steps.
4.1 Top-Down
Consider a formula F and a clause C which has \(\mathsf {PR}\) w.r.t. F with witness \(\omega \), i.e., a single \(\mathsf {PR}\) derivation step. The central question addressed in this paper is how to construct a \(\mathsf {DRAT}\) derivation of \(F \wedge C\) from F. The constructed \(\mathsf {DRAT}\) derivation \( (d_1, C_1, \omega _1) , \dots , (d_q, C_q, \omega _q) , (d_{q+1}, C_{q+1}, \omega _{q+1}) , \dots ,(d_p, C_p, \omega _p) \) of \(F \wedge C\) from F consists of three parts. It also requires introducing a (new) Boolean variable x that does not occur in F.

1.
Construct a \(\mathsf {DRAT}\) derivation \((d_1, C_1, \omega _1), \dots , (d_q, C_q, \omega _q)\) of \(F'\) from F where

a.
the clause \((x \vee C)\) is a \(\mathsf {RAT}\) on x w.r.t. \(F'\) and

b.
there exists a \(\mathsf {DRAT}\) derivation from \(F' \wedge (x \vee C)\) to \(F \wedge C\).

2.
In step \(q+1\), clause \(C_{q+1} = (x \vee C)\) is added to \(F'\).

3.
The steps after step \(q+1\) transform \(F' \wedge (x \vee C)\) into \(F \wedge C\).
Notice that \((x \vee C)\) is blocked w.r.t. F and could therefore be added to F as a first step. However, it is very hard to eliminate the literal x from \(F \wedge (x \vee C)\) afterwards. Instead, we transform F into \(F'\) before the addition and reverse the transformation afterwards. Below we describe the details of our simulation algorithm in five phases, of which phases (I) and (II) correspond to the transformation (part 1) and phases (IV) and (V) correspond to the reverse transformation (part 3).
4.2 Five Phases
We now show how to convert the derivation of \(F_{i+1}\) from \(F_i\) using the \(\mathsf {PR}\) step \((0,C_{i+1},\omega _{i+1})\) into a sequence of p \(\mathsf {DRAT}\) proof steps from \(G_j\) to \(G_{j+p}\) such that \(G_j = F_i\) and \(G_{j+p} = F_{i+1}\). In the description below, F refers to \(F_i\), C refers to \(C_{i+1}\), and \(\omega \) refers to \(\omega _{i+1}\). Further, let x be a new Boolean variable, i.e., x does not occur in F. We can assume that \( var (C) \subseteq var (F)\): otherwise there exists a literal \(l \in C\) with \( var (l) \notin var (F)\), so C is blocked on l w.r.t. F and can be added to F using a single \(\mathsf {RAT}\) step.

(I)
Add shortened copies of clauses that are reduced, but not satisfied by \(\omega \).
The first phase of the conversion algorithm extends F by adding the clause \((\overline{x} \vee E|_{\omega })\) for every clause \(E \in F\) that is reduced, but not satisfied, by \(\omega \). The literal x does not occur in F. All these clauses \((\overline{x} \vee D)\) with \(D = E|_{\omega }\) are blocked on \(\overline{x}\) w.r.t. F as no resolution on x is possible. We denote by \({G^{\tiny \,(\text {I})}}\) the formula F after these clause additions.

(II)
Weaken the clauses that are reduced and satisfied by \(\omega \).
A clause \(E \in F\) is called involved if it is both reduced by \(\omega \) and satisfied by \(\omega \). The second phase weakens all involved clauses by replacing E with \((x \vee E)\) as follows. First, we add the implication \(x \Rightarrow \omega \), i.e., the clauses \((\overline{x} \vee l)\) with \(l \in \omega \). These clauses are blocked because \({G^{\tiny \,(\text {I})}}\) does not contain clauses with literal x. Second, we weaken the involved clauses using self-subsuming literal addition (SSLA), since they all contain at least one \(l\in \omega \). Third, we remove the implication \(x \Rightarrow \omega \). When this implication was added, the clauses \((\overline{x} \vee l)\) with \(l \in \omega \) were blocked on \(\overline{x}\). Now we can remove them, because they have \(\mathsf {RAT}\) on l, which can be seen as follows. Consider a clause containing \(\bar{l}\). If it is a weakened clause \((x \vee E)\) of a clause \(E \in F\) satisfied by \(\omega \), then x occurs in the opposite phase and the resolvent is a tautology (same condition as for blocked clauses). Otherwise the resolvent on l of \((\overline{x} \vee l)\) with the clause containing \(\bar{l}\) is subsumed by its shortened copy \((\overline{x} \vee D)\) added in the first phase. The resulting formula, where all involved clauses in \({G^{\tiny \,(\text {I})}}\) are weakened, is denoted by \({G^{\tiny \,(\text {II})}}\).

(III)
Add the weakened \(\mathsf {PR}\) clause.
Add the clause \((x \vee C)\) to \({G^{\tiny \,(\text {II})}}\), resulting in \({G^{\tiny \,(\text {III})}}\). The key observation related to this phase is that \((x \vee C)\) has \(\mathsf {RAT}\) on x w.r.t. \({G^{\tiny \,(\text {II})}}\): The only clauses in \({G^{\tiny \,(\text {II})}}\) that contain literal \(\overline{x}\) are the ones that were added in the first phase. We need to show that \({G^{\tiny \,(\text {II})}}\) implies every clause \((x \vee C \vee D)\) with \(D \in F|_{\omega }\) by unit propagation. Let \(\alpha \) be the smallest assignment that falsifies C. Since C has \(\mathsf {PR}\) w.r.t. F using witness \(\omega \), we know that \(F|_{\alpha } \vdash _1 F|_{\omega }\) with \(D \in F|_{\omega }\). This is equivalent to \(F \vdash _1 (C \vee D)\) with \(D \in F|_{\omega }\). Furthermore, \({G^{\tiny \,(\text {II})}}|_{\overline{x}} = F\). Hence, \(F \vdash _1 (C \vee D)\), or equivalently, \({G^{\tiny \,(\text {II})}}\vdash _1 (x \vee C \vee D)\).

(IV)
Strengthen all weakened clauses.
The fourth phase removes all occurrences of the literal x from clauses in \({G^{\tiny \,(\text {III})}}\), thereby reversing the second phase and strengthening \((x \vee C)\) to C. This phase consists of three parts. First, we reintroduce the implication \(x \Rightarrow \omega \), i.e., the clauses \((\overline{x} \vee l)\) with \(l \in \omega \). These clauses have \(\mathsf {RAT}\) on l w.r.t. \({G^{\tiny \,(\text {III})}}\) by the same reasoning used to remove them in the second phase; in case \((\overline{x} \vee l)\) can be resolved on l with the clause \((x \vee C)\) added in the third phase, i.e., \(\bar{l} \in C\), the resolvent is a tautology (it contains x and \(\overline{x}\)). Second, we strengthen all clauses \((x \vee E) \in {G^{\tiny \,(\text {III})}}\) to E; note that this also strengthens the clause \((x \vee C)\) to C. Observe that all clauses \((x \vee E) \in {G^{\tiny \,(\text {III})}}\), including \((x \vee C)\), are satisfied by \(\omega \) and therefore there exists a clause \((\overline{x} \vee l)\) with \(l \in E\). Self-subsuming literal elimination (SSLE) can now eliminate all literals x. Third, the implication \(x \Rightarrow \omega \) is no longer required. The clauses \((\overline{x} \vee l)\) with \(l \in \omega \), now present for the second time, can be removed again since literal \(\overline{x}\) has become pure due to the strengthening of all clauses containing literal x. The resulting formula obtained from \({G^{\tiny \,(\text {III})}}\) by removing all occurrences of literal x is denoted by \({G^{\tiny \,(\text {IV})}}\).

(V)
Remove the shortened copies.
The fifth phase reverses the first phase, and actually uses the same argument as the fourth phase. All clauses in \({G^{\tiny \,(\text {III})}}\) that contained a literal x were strengthened by removing these literals in phase four. As a consequence, the literal \(\overline{x}\) is (still) pure in \({G^{\tiny \,(\text {IV})}}\). The only clauses that still contain literal \(\overline{x}\) are exactly the clauses that have been added in the first phase. Since they are all blocked on \(\overline{x}\) w.r.t. \({G^{\tiny \,(\text {IV})}}\), they can be eliminated, while preserving satisfiability. After removing these clauses we obtain \({G^{\tiny \,(\text {V})}}\) which equals \(F \wedge C\).
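Abstracting from the redundancy arguments, the five phases amount to a fixed bookkeeping pattern of clause additions and deletions. The Python sketch below (our illustration, not the PR2DRAT implementation; DIMACS-style integer literals, witness ω given as a set of literals, x a fresh variable) emits that pattern for a single PR step; it does not re-check the RAT/SSLA/SSLE justifications argued above:

```python
def satisfied_by(clause, omega):
    """True if omega satisfies some literal of the clause."""
    return any(lit in omega for lit in clause)


def reduced_by(clause, omega):
    """True if omega falsifies some literal of the clause."""
    return any(-lit in omega for lit in clause)


def pr_step_to_drat(f, c, omega, x):
    """Emit the ('a'/'d', clause) DRAT steps of phases (I)-(V) for one
    PR step (F, C, omega), using the fresh variable x."""
    steps = []
    # (I) add shortened copies of clauses reduced but not satisfied by omega
    copies = [[-x] + [lit for lit in e if -lit not in omega]
              for e in f if reduced_by(e, omega) and not satisfied_by(e, omega)]
    steps += [('a', d) for d in copies]
    # (II) weaken involved clauses E to (x v E) via the implication x => omega
    implication = [[-x, lit] for lit in sorted(omega)]
    involved = [e for e in f if reduced_by(e, omega) and satisfied_by(e, omega)]
    steps += [('a', d) for d in implication]
    for e in involved:
        steps += [('a', [x] + e), ('d', e)]
    steps += [('d', d) for d in implication]
    # (III) add the weakened PR clause (x v C)
    steps += [('a', [x] + c)]
    # (IV) strengthen all weakened clauses, including (x v C) to C
    steps += [('a', d) for d in implication]
    for e in involved + [c]:
        steps += [('a', e), ('d', [x] + e)]
    steps += [('d', d) for d in implication]
    # (V) delete the shortened copies
    steps += [('d', d) for d in copies]
    return steps
```

Applied to Example 2 (with fresh variable 4), replaying the emitted steps on F yields exactly F ∧ C, reflecting that the net effect of the five phases is the addition of the PR clause.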
4.3 Complexity
In this section we analyze the worst-case complexity of converting a \(\mathsf {PR}\) derivation \((0,C_1,\omega _1), \dots , (0,C_n, \omega _n)\) of a formula \(F_n\) from a formula \(F_0\) into a \(\mathsf {DRAT}\) derivation \((d_1,D_1,\omega '_1), \dots , (d_m,D_m, \omega '_m)\) of \(G_m = F_n\) from \(G_0 = F_0\) using the presented simulation algorithm. The number of \(\mathsf {DRAT}\) steps that are required to simulate a single \(\mathsf {PR}\) addition step depends on the size of the formula. Let \(N = |F_n|\) be the number of clauses in the last formula \(F_n\) and \(V= | var (F_n)|\) the number of its variables. Since a \(\mathsf {PR}\) derivation does not remove clauses, we have \(|F_i| = |F_{i-1}| + 1\) and \(| var (F_i)| \ge | var (F_{i-1})|\). Therefore for \(i \in \{1..n\}\), \(|F_i| \le N\) and \(| var (F_i)| \le V\). In the analysis we ignore clause deletion, since the number of clause deletions is bounded by the number of added clauses.
In phase (I) of the conversion algorithm, copies of clauses that are reduced but not satisfied by \(\omega _i\) are added, while in phase (II) the clauses are weakened which are reduced and satisfied by \(\omega _i\). Since a clause is either satisfied, not satisfied, or untouched by \(\omega _i\), the sum of the number of copies and weakened clauses is at most \(|F_i| \le N\). Also the implication \(x \Rightarrow \omega _i\) is added in phase (II), meaning at most \(| var (\omega _i)| \le | var (F_i)| \le V\) clause addition steps. Phase (III) adds a single clause. Phase (IV) adds again the implication \(x \Rightarrow \omega _i\) (at most V steps) and strengthens all weakened clauses (at most N steps). Phase (V) only deletes clauses. Thus the total number of clause additions for all phases in the conversion of a single \(\mathsf {PR}\) step is bounded by \(2V + 2N + 1\).
There are \(n \le N\) additions in the \(\mathsf {PR}\) proof and for each addition we apply the conversion algorithm. Hence the total number of clause addition steps in the \(\mathsf {DRAT}\) derivation is at most \(2NV + 2N^2 + N\). Since \(V \le N\) for any interesting \(\mathsf {PR}\) derivation, the number of steps in the resulting \(\mathsf {DRAT}\) derivation is in \(\mathcal {O}(N^2)\).
5 Optimizations
The simulation algorithm described in the prior section was designed to produce compact \(\mathsf {DRAT}\) derivations using a single new variable, while focusing on converting any \(\mathsf {PR}\) derivation into a \(\mathsf {DRAT}\) derivation. The algorithm can be further optimized to reduce the size of the resulting \(\mathsf {DRAT}\) derivations.
5.1 Refutations
In practice, most \(\mathsf {PR}\) derivations are refutations, i.e., they include adding the empty clause. When converting \(\mathsf {PR}\) refutations, one can ignore the justification of any weakening steps as such steps trivially preserve unsatisfiability. The only weakening steps in the simulation algorithm are performed in phase (II). The purpose of the addition of the implication \(x \Rightarrow \omega \) in phase (II) is to allow the weakening via selfsubsuming literal addition (SSLA). This justification is no longer required for \(\mathsf {PR}\) refutations. Without the addition of \(x \Rightarrow \omega \), one can also discard its removal. So both the first and third part of phase (II) can be omitted.
5.2 Witness Minimization
In some situations, only a subset of the involved clauses needs to be weakened (phase (II)) and later strengthened (phase (IV)). Weakening of involved clauses is required to make sure that the clauses \((\overline{x} \vee l)\) with \(l \in \omega \) are \(\mathsf {RAT}\) on l w.r.t. \({G^{\tiny \,(\text {III})}}\) in phase (IV) of the simulation algorithm. However, some of the clauses \((\overline{x} \vee l)\) may be unit implied by the others (and are then not required to be \(\mathsf {RAT}\) on l). This situation occurs when a subset of the witness implies the full witness via unit propagation. We minimize the witness by searching for the smallest witness \(\omega ' \subseteq \omega \) such that \(\omega '\) implies \(\omega \) via unit propagation. Only clauses reduced by \(\omega '\) and satisfied by \(\omega \) need to be weakened in phase (II) and strengthened in phase (IV).
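A greedy version of this minimization can be sketched as follows (illustration only, with DIMACS-style integer literals; the paper searches for the smallest ω′, whereas greedy removal only guarantees a minimal one):

```python
def unit_propagate(formula):
    """Return the propagated literals, or None once a conflict is derived."""
    assignment, changed = set(), True
    while changed:
        changed = False
        for clause in formula:
            if any(lit in assignment for lit in clause):
                continue
            remaining = [lit for lit in clause if -lit not in assignment]
            if not remaining:
                return None
            if len(remaining) == 1:
                assignment.add(remaining[0])
                changed = True
    return assignment


def minimize_witness(f, omega):
    """Greedily shrink omega to a subset omega' such that unit propagation
    on F together with the units of omega' re-derives all of omega."""
    omega_p = sorted(omega)
    for lit in sorted(omega):
        candidate = [m for m in omega_p if m != lit]
        derived = unit_propagate(f + [[m] for m in candidate])
        if derived is not None and all(m in derived for m in omega):
            omega_p = candidate  # lit is re-derived, so it can be dropped
    return omega_p
```

For example, with F = (x̄ ∨ y) (encoded as [[-1, 2]]) and ω = x y, the literal y is re-derived from x by unit propagation, so the witness shrinks to x alone.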
5.3 Avoiding Copying
In one quite specific case, one can avoid copying the clauses that are reduced, but not satisfied, by the witness altogether, in other words, skip phases (I) and (V) of the simulation algorithm. This case, however, occurred frequently in our \(\mathsf {PR}\) proofs. Let \(\alpha \) denote the smallest assignment that falsifies the \(\mathsf {PR}\) clause C to be added. Furthermore, let \(\omega \) be the witness and \(\omega '\) the minimized witness as discussed above. The condition for avoiding clause copying consists of two parts. First, there is no literal \(l \in \alpha \) such that \(\bar{l} \in \omega '\). Recall that there always exists a literal \(l \in \alpha \) such that \(\bar{l} \in \omega \), so witness minimization is necessary. Second, for each literal \(l \in \omega '\), the unit clause (l) should be a \(\mathsf {RAT}\) on l w.r.t. the current formula without the involved clauses under \(\alpha \). Although both conditions are very restrictive, they apply often in the \(\mathsf {PR}\) proofs used in the evaluation.
Basically, this optimization removes phases (I) and (V) and modifies phases (II), (III), and (IV). The modified phases are named (i), (ii), and (iii), respectively.

(i)
Weaken the clauses that are reduced by \(\omega '\) and satisfied by \(\omega \).
Clause \(E \in F\) is called involved if it is reduced by the minimized witness \(\omega '\) and satisfied by the original witness \(\omega \). The first phase weakens all involved clauses E to \((x \vee E)\) as follows. First, we add the implication \(x \Rightarrow \omega ' \cup \alpha \), i.e., the clauses \((\overline{x} \vee l)\) with \(l \in \omega ' \cup \alpha \). These clauses are blocked because G does not contain clauses with the literal x. Now we can weaken the involved clauses using SSLA. Then we remove the implication part \(x \Rightarrow \omega '\), but keep \(x \Rightarrow \alpha \). When this implication was added, the clauses \((\overline{x} \vee l)\) with \(l \in \omega '\) were blocked on x. Now we can remove them because they are \(\mathsf {RAT}\) on l: all clauses containing \(\bar{l}\) have either been weakened (if they were satisfied by \(\omega \)) or are implied by \(\alpha \) due to the second condition. The resulting formula, i.e., G with all involved clauses weakened and the implication \(x \Rightarrow \alpha \) added, is denoted by \({G^{\tiny \,(\text {i})}}\).
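As a concrete illustration, the sequence of clause additions and deletions in phase (i) can be emitted as below. This is a hedged sketch, not the tool's code: the function name and the `('add'/'del', clause)` step representation are our own, literals are DIMACS integers, and `x` is the fresh variable.

```python
def phase_i_steps(x, omega_min, alpha, involved):
    """Emit the DRAT steps of phase (i): add x => (omega' ∪ alpha) as
    binary clauses, weaken each involved clause E to (x ∨ E) via SSLA,
    and finally drop the x => omega' part again."""
    steps = []
    for l in list(omega_min) + list(alpha):
        steps.append(('add', [-x, l]))   # blocked: no clause contains x yet
    for e in involved:
        steps.append(('add', [x] + e))   # weakened copy, justified by SSLA
        steps.append(('del', e))         # drop the original clause
    for l in omega_min:
        steps.append(('del', [-x, l]))   # RAT on l after the weakening
    return steps
```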

(ii)
Add the weakened \(\mathsf {PR}\) clause.
Add the clause \((x \vee C)\), which is equivalent to the implication \(x \Leftarrow \alpha \), to \({G^{\tiny \,(\text {i})}}\), resulting in \({G^{\tiny \,(\text {ii})}}\). The only clauses containing literal \(\overline{x}\) are the ones that originate from \(x \Rightarrow \alpha \). As a consequence, \((x \vee C)\) is blocked on x w.r.t. \({G^{\tiny \,(\text {i})}}\).
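The blockedness claim in phase (ii) can be verified with a generic blocked-clause test: a clause is blocked on a literal if every resolvent on that literal is a tautology. A small sketch (the helper is our own, using integer literals):

```python
def is_blocked(clause, lit, clauses):
    """Is `clause` blocked on `lit` w.r.t. `clauses`? True iff every
    resolvent with a clause containing -lit is a tautology."""
    for d in clauses:
        if -lit not in d:
            continue
        resolvent = [a for a in clause if a != lit] + \
                    [b for b in d if b != -lit]
        if not any(-a in resolvent for a in resolvent):
            return False
    return True
```

For instance, with x encoded as 5 and \(\alpha = \{x_1, x_2\}\), the clause \((x \vee C) = \) `[5, -1, -2]` is blocked on 5 w.r.t. the implication clauses `[-5, 1]` and `[-5, 2]`: every resolvent contains a complementary pair, exactly as argued above.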

(iii)
Strengthen all weakened clauses.
The third phase removes all occurrences of the literal x from clauses in \({G^{\tiny \,(\text {ii})}}\), thereby reversing the second phase and strengthening \((x \vee C)\) to C. This phase consists of four parts. First, we reintroduce the implication part \(x \Rightarrow \omega '\), i.e., the clauses \((\overline{x} \vee l)\) with \(l \in \omega '\). Again, these clauses are \(\mathsf {RAT}\) on l w.r.t. \({G^{\tiny \,(\text {ii})}}\). Second, we remove the implication part \(x \Rightarrow \alpha \), i.e., the clauses \((\overline{x} \vee l)\) with \(l \in \alpha \). Afterwards, we strengthen \((x \vee C)\) to C and all clauses \((x \vee E) \in {G^{\tiny \,(\text {ii})}}\) to E. Observe that all clauses \((x \vee E) \in {G^{\tiny \,(\text {ii})}}\), including \((x \vee C)\), are satisfied by \(\omega \), and therefore there exists a clause \((\overline{x} \vee l)\) with \(l \in E\). SSLE can therefore remove all literals x. Finally, the implication \(x \Rightarrow \omega '\) is no longer required. The clauses \((\overline{x} \vee l)\) with \(l \in \omega '\) can be eliminated because the literal \(\overline{x}\) has become pure due to the strengthening of all clauses containing the literal x. The resulting formula, i.e., \({G^{\tiny \,(\text {ii})}}\) after removing all occurrences of the literal x, is denoted by \({G^{\tiny \,(\text {iii})}}\) and equals \(G \wedge C\).
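The SSLE steps used above follow a simple pattern: the literal x can be dropped from \((x \vee E)\) whenever some clause \((\overline{x} \vee D')\) with \(D' \subseteq E\) exists, because the resolvent on x then subsumes the original clause. A hedged sketch of this check (the helper name is ours):

```python
def ssle_strengthen(clause, lit, clauses):
    """Self-subsuming literal elimination: return `clause` without `lit`
    if some clause (-lit ∨ D') with D' a subset of clause \\ {lit}
    exists; otherwise return the clause unchanged."""
    rest = [a for a in clause if a != lit]
    for d in clauses:
        # resolvent of `clause` and `d` on `lit` subsumes `clause`?
        if -lit in d and all(b in rest for b in d if b != -lit):
            return rest
    return clause
```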
In case the \(\mathsf {PR}\) derivation is a refutation, we can optimize this case further by changing phase (i) as follows: instead of adding the implication \(x \Rightarrow \omega ' \cup \alpha \), only the implication \(x \Rightarrow \alpha \) is added. Without the addition of the implication part \(x \Rightarrow \omega '\), we can also discard removing that part at the end of phase (i).
6 Alternative Simulation Algorithms
Even though the conversion from \(\mathsf {PR}\) derivations to \(\mathsf {DRAT}\) derivations is arguably the most useful one in practice, one can also consider the following alternatives.
6.1 Limiting the Number of \(\mathsf {RAT}\) Steps
Most steps in the simulation algorithm are “basic” steps, i.e., self-subsuming literal addition or elimination and blocked clause addition or elimination. There are only a few “full” \(\mathsf {RAT}\) addition steps: the removal of the implication in phase (II), the addition of the weakened \(\mathsf {PR}\) clause in phase (III), and the addition of the implication in phase (IV). It is interesting to explore reducing the number of these “full” \(\mathsf {RAT}\) addition steps, as eliminating them brings us close to a simulation algorithm with only basic steps.
It is easy to eliminate all but one “full” \(\mathsf {RAT}\) addition step. In order to eliminate the \(\mathsf {RAT}\) steps in phase (II), one can use the shortened copies of the clauses that are reduced but not satisfied by \(\omega \) to weaken those clauses as well (i.e., add the literal x using SSLA). After the weakening, we can remove the implication \(x \Rightarrow \omega \) using blocked clause elimination (instead of \(\mathsf {RAT}\)), because now all clauses that are touched by \(\omega \) contain the literal x, and therefore all clauses \((\overline{x} \vee l)\) with \(l \in \omega \) are blocked on l. The weakening also allows adding the implication \(x \Rightarrow \omega \) in phase (IV) using blocked clause addition steps (instead of \(\mathsf {RAT}\)). The strengthening of the newly weakened clauses can be performed in phase (IV) using SSLE (after adding the implication). It is not obvious how to replace the only remaining \(\mathsf {RAT}\) addition, in phase (III), by basic steps.
6.2 Converting \(\mathsf {DPR}\) Proofs into \(\mathsf {DRAT}\) Proofs
So far we only considered converting \(\mathsf {PR}\) clause additions into sequences of \(\mathsf {DRAT}\) steps and ignored the deletion of \(\mathsf {PR}\) clauses from a formula. In most cases, clause deletion steps in a proof merely facilitate more efficient checking of a proof of unsatisfiability and can therefore be applied without any checking. However, there are situations in which one wants to check the validity of clause deletion steps, in particular for proofs of satisfiability, i.e., sequences of proof steps that show that a given formula is equivalent to the empty formula and thus satisfiable.
The \(\mathsf {DPR}\) proof system is a clausal proof system that allows the addition and deletion of \(\mathsf {PR}\) clauses. Converting a \(\mathsf {PR}\) clause addition step into \(\mathsf {DRAT}\) proof steps works exactly as in the \(\mathsf {PR}\) proof system. The conversion of a \(\mathsf {PR}\) clause deletion step is slightly different. Let F be a formula and \(C \in F\) a \(\mathsf {PR}\) clause w.r.t. F with witness \(\omega \). The first phase of the conversion is exactly the same as phase (I) of the \(\mathsf {PR}\) clause addition conversion. The second phase differs slightly from phase (II) of the \(\mathsf {PR}\) clause addition conversion: instead of weakening all clauses reduced and satisfied by \(\omega \), we weaken all clauses satisfied by \(\omega \). Notice that this includes weakening C to \((x \vee C)\). The third phase consists of deleting \((x \vee C)\) from the current formula. Recall that phase (III) of the \(\mathsf {PR}\) clause addition conversion added \((x \vee C)\). The final phase corresponds to phases (IV) and (V).
6.3 Converting \(\mathsf {PR}\) Refutations into \(\mathsf {RAT}\) Refutations
The presented simulation algorithm converts \(\mathsf {PR}\) derivations into \(\mathsf {DRAT}\) derivations. We selected the \(\mathsf {DRAT}\) proof system as target because it is the proof system most widely supported by top-tier SAT solvers and it allows stepwise simulation using deletion steps. The question arises whether deletion steps are required when converting a \(\mathsf {PR}\) refutation. In short, the answer is no if one allows the introduction of arbitrarily many new Boolean variables. Converting a deletion step can be realized as follows. Let C be the clause that is deleted from a formula F. For each \(x \in var (C)\), add to F the equivalence \(x' \Leftrightarrow x\) with \(x'\) being a new variable. Afterwards, copy all clauses in F (apart from C) that contain at least one literal l with \( var (l) \in var (C)\), using the new \(x'\) variables instead of the old x variables. Finally, replace all occurrences of the old literals x and \(\overline{x}\) in the remaining proof by the literals \(x'\) and \(\overline{x}'\), respectively.
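The renaming construction can be sketched as follows; the function name, the variable numbering scheme, and the returned renaming map are our own illustrative choices. Given the formula, the clause to “delete”, and the first unused variable index, the sketch returns the clauses to add and the renaming to apply to the rest of the proof.

```python
def convert_deletion(formula, clause, next_var):
    """Simulate deleting `clause` without deletion steps: introduce a
    primed copy x' for every variable of `clause`, add x' <-> x, and
    copy every other clause mentioning such a variable onto the primes."""
    cvars = sorted({abs(l) for l in clause})
    rename = {v: next_var + i for i, v in enumerate(cvars)}

    def prime(lit):
        v = abs(lit)
        if v not in rename:
            return lit
        return rename[v] if lit > 0 else -rename[v]

    additions = []
    for v in cvars:                       # equivalences x' <-> x
        additions += [[-rename[v], v], [rename[v], -v]]
    for d in formula:                     # primed copies, except `clause`
        if d != clause and any(abs(l) in rename for l in d):
            additions.append([prime(l) for l in d])
    return additions, rename, next_var + len(cvars)
```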
In order to limit the number of copy operations, one can group (consecutive) deletion steps and use the same variables \(x'\) for the whole group. The simulation algorithm can be partitioned into two groups of consecutive clause addition steps, each followed by a group of consecutive clause deletion steps. The first group of addition steps consists of phase (I) and the first half of phase (II), i.e., adding the implication \(x \Rightarrow \omega \) and the weakened involved clauses. The first group of deletion steps consists of the remaining part of phase (II), i.e., the deletion of the involved clauses and of the implication \(x \Rightarrow \omega \). The second group of addition steps consists of phase (III) and the first half of phase (IV), i.e., adding the implication \(x \Rightarrow \omega \) and adding back the involved clauses. The second group of deletion steps consists of the remaining part of phase (IV), i.e., the removal of the weakened involved clauses and of the implication \(x \Rightarrow \omega \), and phase (V). By grouping the deletion steps, one can convert \(\mathsf {PR}\) refutations into \(\mathsf {RAT}\) refutations with an at most quadratic blowup, i.e., the same worst-case complexity as converting \(\mathsf {PR}\) derivations into \(\mathsf {DRAT}\) derivations.
7 Evaluation
We implemented a tool, called PR2DRAT, to convert \(\mathsf {PR}\) proofs into \(\mathsf {DRAT}\) proofs (see Footnote 1) and evaluated it on short \(\mathsf {PR}\) proofs for hard formulas from three families: (1) pigeonhole, (2) two-pigeons-per-hole [2], and (3) Tseitin formulas [4, 27].
Every resolution proof of a formula in these families is exponential in the size of the formula [11, 28]. As a consequence, any CDCL solver without dedicated reasoning techniques, such as cardinality or XOR reasoning, is unable to solve these benchmarks in reasonable time. In contrast, our \(\mathsf {PR}\) proofs are smaller than the formulas, i.e., linear in size. The \(\mathsf {PR}\) proofs of the pigeonhole and two-pigeons-per-hole formulas have been constructed manually in earlier work [17]. The proofs of the Tseitin formulas have been constructed manually by expressing Gaussian elimination in the \(\mathsf {PR}\) system. Applying Gaussian elimination (after syntactically extracting XOR constraints from the CNF formulas) is enough to solve these formulas. We first evaluate the size of the converted proofs. Afterwards we certify the short \(\mathsf {PR}\) proofs for the first time by converting them into \(\mathsf {DRAT}\) proofs, which are then checked by a formally verified checker.
7.1 Proof Simulation and Optimization
We compare three kinds of \(\mathsf {DRAT}\) proofs for the benchmarks used in the experiments: the most compact existing ones [14, 15], the proofs obtained using our plain conversion algorithm, and the proofs obtained using our optimized algorithm. The most compact existing proofs originate from expressing symmetry breaking as \(\mathsf {DRAT}\) proof steps. Table 1 shows the comparison. All proofs have been trimmed once using the DRAT-trim tool [12]. Applying DRAT-trim for multiple rounds (using the output proof as input proof for the next round) allows further reduction of the proof size, but typically these extra reductions are small.
For pigeonhole formulas over n pigeons, the most compact existing proofs have \(\mathcal {O}(n^4)\) proof steps. This is also the case for the \(\mathsf {DRAT}\) proofs obtained through our basic conversion algorithm, as well as for the extended resolution proofs by Cook [5]. However, \(\mathsf {DRAT}\) proofs obtained with our optimized algorithm have only \(\mathcal {O}(n^3)\) proof steps. Notice that the sizes of the pigeonhole formulas and of the \(\mathsf {PR}\) proofs are both in \(\mathcal {O}(n^3)\). In other words, our optimized conversion algorithm can not only produce \(\mathsf {DRAT}\) proofs, but for pigeonhole formulas it generates the first \(\mathsf {DRAT}\) proofs of linear size.
The results for the two-pigeons-per-hole formulas are similar, but more pronounced: there exist \(\mathsf {DRAT}\) proofs only of the formulas up to 12 holes and 25 pigeons (tph12) [15]. Our plain simulation algorithm can produce \(\mathsf {DRAT}\) proofs of the formulas up to 20 holes and 41 pigeons (tph20). Moreover, our optimized simulation algorithm is able to produce proofs that are linear in the size of the formulas, although not linear in the size of the \(\mathsf {PR}\) proofs.
We are unaware of any \(\mathsf {DRAT}\) proofs of hard Tseitin formulas, e.g., from the Urquhart-s5-b* family [4], or of any tool able to produce such \(\mathsf {DRAT}\) proofs. However, we succeeded in manually producing short \(\mathsf {PR}\) proofs without new variables for these formulas and in converting them into \(\mathsf {DRAT}\) proofs. The resulting \(\mathsf {DRAT}\) proofs, with and without optimizations, are relatively large compared to the \(\mathsf {PR}\) proofs; the blowup is close to the quadratic worst case. We observed that DRAT-trim was able to remove many (around \(70\%\)) of the clause additions, which suggests that there could be an optimization to generate shorter \(\mathsf {DRAT}\) proofs.
7.2 Verified \(\mathsf {PR}\) Proof Checking
Our proof simulation approach can be used to validate \(\mathsf {PR}\) proofs with formally verified tools, thereby increasing the confidence in their correctness. The tool chain works as follows: given a formula F and an alleged \(\mathsf {PR}\) proof \(P_\mathsf {PR}\) of F, our tool PR2DRAT converts \(P_\mathsf {PR}\) into a \(\mathsf {DRAT}\) proof \(P_\mathsf {DRAT}\). Afterwards, we use the DRAT-trim tool to convert \(P_\mathsf {DRAT}\) into a \(\mathsf {CLRAT}\) (compressed linear \(\mathsf {RAT}\)) proof \(P_\mathsf {CLRAT}\). \(\mathsf {CLRAT}\) proofs can be checked efficiently using formally verified checkers [7]. We used the formally verified checker based on ACL2 [13] to certify that \(P_\mathsf {CLRAT}\) is a valid proof of unsatisfiability of F. Notice that the tools PR2DRAT and DRAT-trim are unverified and thus may turn an invalid proof into a valid proof or vice versa.
Figure 1 shows the results of applying this tool chain to the benchmark suite. The PR2DRAT tool was able to convert each \(\mathsf {PR}\) proof into a \(\mathsf {DRAT}\) proof in less than a minute, and half of the proofs in less than a second. The runtimes of DRAT-trim and the verified checker are one to two orders of magnitude higher than those of PR2DRAT, so our tool adds little overhead to the tool chain. The sizes of the \(\mathsf {DRAT}\) and \(\mathsf {CLRAT}\) proofs are comparable. However, these proofs are different: DRAT-trim (A) removes redundant clause additions, (B) includes hints to speed up verified checking, and (C) compresses proofs. The effect of (A) depends on the proof quality; (B) increases the size of proofs of small hard problems by roughly a factor of four; (C) reduces the size to \(30\%\) of the uncompressed proofs. The difference between the \(\mathsf {DRAT}\) and \(\mathsf {CLRAT}\) proofs therefore indicates how much redundancy was removed: for pigeonhole proofs hardly anything, for two-pigeons-per-hole proofs a modest amount, and for Tseitin proofs a lot. Notice that the runtimes of the ACL2-based verified checker are comparable to those of the C-based checker DRAT-trim.
8 Conclusions and Future Work
We showed how to convert \(\mathsf {PR}\) proofs into \(\mathsf {DRAT}\) proofs using only a single new variable and with an at most quadratic blowup in proof size. This result suggests that it might also be possible to construct \(\mathsf {DRAT}\) proofs without new variables, using one variable elimination step and reusing the eliminated variable. The optimizations implemented in our conversion tool PR2DRAT made it possible to produce \(\mathsf {DRAT}\) proofs for hard problems that are significantly smaller than existing \(\mathsf {DRAT}\) proofs of those problems. The main open question is whether \(\mathsf {PR}\) proofs can be converted into \(\mathsf {RAT}\) proofs (i.e., without deletion steps) using a small number of new variables. Without deletion steps, it seems that copying the formula using new variables is required.
Our new tool chain for certifying SAT solving results using \(\mathsf {PR}\) proofs consists of four steps: proof production (solving), conversion from \(\mathsf {PR}\) to \(\mathsf {DRAT}\), conversion from \(\mathsf {DRAT}\) to \(\mathsf {CLRAT}\), and validation of the \(\mathsf {CLRAT}\) proof using a formally verified checker. To speed up adoption of the approach, we are exploring elimination of the second step by integrating the conversion algorithm into either SAT solvers or \(\mathsf {DRAT}\) proof checkers.
Notes
1. The tool, checkers, formulas, and proofs discussed in this section are available at http://www.cs.utexas.edu/~marijn/pr2drat/.
References
Aloul, F.A., Ramani, A., Markov, I.L., Sakallah, K.A.: Solving difficult SAT instances in the presence of symmetry. In: Proceedings of the 39th Design Automation Conference (DAC 2002), pp. 731–736 (2002)
Biere, A.: Two pigeons per hole problem. In: Proceedings of SAT Competition 2013: Solver and Benchmark Descriptions, p. 103 (2013)
Biere, A., Cimatti, A., Clarke, E.M., Fujita, M., Zhu, Y.: Symbolic model checking using SAT procedures instead of BDDs. In: DAC, pp. 317–320 (1999)
Chatalic, P., Simon, L.: Multi-resolution on compressed sets of clauses. In: 12th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2000), 13–15 November 2000, Vancouver, BC, Canada, pp. 2–10 (2000)
Cook, S.A.: A short proof of the pigeon hole principle using extended resolution. SIGACT News 8(4), 28–32 (1976)
Crawford, J.M., Ginsberg, M.L., Luks, E.M., Roy, A.: Symmetry-breaking predicates for search problems. In: Knowledge Representation and Reasoning – KR 1996, pp. 148–159. Morgan Kaufmann (1996)
Cruz-Filipe, L., Heule, M.J.H., Hunt Jr., W.A., Kaufmann, M., Schneider-Kamp, P.: Efficient certified RAT verification. In: de Moura, L. (ed.) CADE 2017. LNCS (LNAI), vol. 10395, pp. 220–236. Springer, Cham (2017)
Cruz-Filipe, L., Marques-Silva, J., Schneider-Kamp, P.: Efficient certified resolution proof checking. In: Legay, A., Margaria, T. (eds.) TACAS 2017. LNCS, vol. 10205, pp. 118–135. Springer, Heidelberg (2017)
D’Silva, V., Kroening, D., Weissenbacher, G.: A survey of automated techniques for formal software verification. IEEE Trans. CAD Integr. Circuits Syst. 27(7), 1165–1178 (2008)
Eén, N., Biere, A.: Effective preprocessing in SAT through variable and clause elimination. In: Bacchus, F., Walsh, T. (eds.) SAT 2005. LNCS, vol. 3569, pp. 61–75. Springer, Heidelberg (2005)
Haken, A.: The intractability of resolution. Theor. Comput. Sci. 39, 297–308 (1985)
Heule, M.J.H.: The DRAT format and DRAT-trim checker. CoRR, abs/1610.06229 (2016)
Heule, M.J.H., Hunt Jr., W.A., Kaufmann, M., Wetzler, N.D.: Efficient, verified checking of propositional proofs. In: Ayala-Rincón, M., Muñoz, C.A. (eds.) ITP 2017. LNCS, vol. 10499, pp. 269–284. Springer, Cham (2017)
Heule, M.J.H., Hunt Jr., W.A., Wetzler, N.D.: Verifying refutations with extended resolution. In: Bonacina, M.P. (ed.) CADE 2013. LNCS (LNAI), vol. 7898, pp. 345–359. Springer, Heidelberg (2013)
Heule, M.J.H., Hunt Jr., W.A., Wetzler, N.D.: Expressing symmetry breaking in DRAT proofs. In: Felty, A.P., Middeldorp, A. (eds.) CADE 2015. LNCS (LNAI), vol. 9195, pp. 591–606. Springer, Cham (2015)
Heule, M.J.H., Järvisalo, M., Lonsing, F., Seidl, M., Biere, A.: Clause elimination for SAT and QSAT. J. Artif. Intell. Res. 53, 127–168 (2015)
Heule, M.J.H., Kiesl, B., Biere, A.: Short proofs without new variables. In: de Moura, L. (ed.) CADE 2017. LNCS (LNAI), vol. 10395, pp. 130–147. Springer, Cham (2017)
Heule, M.J.H., Kiesl, B., Seidl, M., Biere, A.: PRuning through satisfaction. In: Strichman, O., Tzoref-Brill, R. (eds.) HVC 2017. LNCS, vol. 10629, pp. 179–194. Springer, Cham (2017)
Heule, M.J.H., Seidl, M., Biere, A.: A unified proof system for QBF preprocessing. In: Demri, S., Kapur, D., Weidenbach, C. (eds.) IJCAR 2014. LNCS (LNAI), vol. 8562, pp. 91–106. Springer, Cham (2014)
Heule, M.J.H., Seidl, M., Biere, A.: Solution validation and extraction for QBF preprocessing. J. Autom. Reason. 58(1), 97–125 (2017)
Järvisalo, M., Heule, M.J.H., Biere, A.: Inprocessing rules. In: Gramlich, B., Miller, D., Sattler, U. (eds.) IJCAR 2012. LNCS (LNAI), vol. 7364, pp. 355–370. Springer, Heidelberg (2012)
Kiesl, B., Seidl, M., Tompits, H., Biere, A.: Superblocked clauses. In: Olivetti, N., Tiwari, A. (eds.) IJCAR 2016. LNCS (LNAI), vol. 9706, pp. 45–61. Springer, Cham (2016)
Kullmann, O.: On a generalization of extended resolution. Discrete Appl. Math. 96–97, 149–176 (1999)
Lammich, P.: Efficient verified (UN)SAT certificate checking. In: de Moura, L. (ed.) CADE 2017. LNCS (LNAI), vol. 10395, pp. 237–254. Springer, Cham (2017)
Luo, M., Li, C., Xiao, F., Manyà, F., Lü, Z.: An effective learnt clause minimization approach for CDCL SAT solvers. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, pp. 703–711 (2017)
Marques-Silva, J.P., Lynce, I., Malik, S.: Conflict-driven clause learning SAT solvers. In: FAIA, vol. 185, Chap. 4, pp. 131–153. IOS Press, February 2009
Tseitin, G.S.: On the complexity of derivation in propositional calculus. In: Siekmann, J.H., Wrightson, G. (eds.) Automation of Reasoning 2, pp. 466–483. Springer, Heidelberg (1983)
Urquhart, A.: Hard examples for resolution. J. ACM 34(1), 209–219 (1987)
Wetzler, N.D., Heule, M.J.H., Hunt Jr., W.A.: DRAT-trim: efficient checking and trimming using expressive clausal proofs. In: Sinz, C., Egly, U. (eds.) SAT 2014. LNCS, vol. 8561, pp. 422–429. Springer, Cham (2014)
Zhang, L., Malik, S.: Validating SAT solvers using an independent resolution-based checker: practical implementations and other applications. In: DATE, pp. 10880–10885 (2003)
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2018 The Author(s)
Heule, M.J.H., Biere, A. (2018). What a Difference a Variable Makes. In: Beyer, D., Huisman, M. (eds.) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2018. Lecture Notes in Computer Science, vol. 10806. Springer, Cham. https://doi.org/10.1007/978-3-319-89963-3_5
Print ISBN: 978-3-319-89962-6
Online ISBN: 978-3-319-89963-3