# Encoding Redundancy for Satisfaction-Driven Clause Learning


## Abstract

Satisfaction-Driven Clause Learning (SDCL) is a recent SAT solving paradigm that aggressively trims the search space of possible truth assignments. To determine if the SAT solver is currently exploring a dispensable part of the search space, SDCL uses the so-called positive reduct of a formula: The positive reduct is an easily solvable propositional formula that is satisfiable if the current assignment of the solver can be safely pruned from the search space. In this paper, we present two novel variants of the positive reduct that allow for even more aggressive pruning. Using one of these variants allows SDCL to solve harder problems, in particular the well-known Tseitin formulas and mutilated chessboard problems. For the first time, we are able to generate and automatically check clausal proofs for large instances of these problems.

## 1 Introduction

*Conflict-driven clause learning* (CDCL) [26, 28] is the most successful paradigm for solving satisfiability (SAT) problems and therefore CDCL solvers are pervasively used as reasoning engines to construct and verify systems. However, CDCL solvers still struggle to handle some important applications within reasonable time. These applications include the verification of arithmetic circuits, challenges from cryptanalysis, and hard combinatorial problems. There appears to be a theoretical barrier to dealing with some of these applications efficiently.

At its core, CDCL is based on the resolution proof system, which means that the same limitations that apply to resolution also apply to CDCL. Most importantly, there exist only exponentially large resolution proofs for several seemingly easy problems [15, 33], implying that CDCL solvers require exponential time to solve them. A recent approach to breaking this exponential barrier is the *satisfaction-driven clause learning* (SDCL) paradigm [20], which can automatically find short proofs of pigeon-hole formulas in the \(\mathsf {PR}\) proof system [19].

SDCL extends CDCL by pruning the search space of truth assignments more aggressively. While a pure CDCL solver learns only clauses that can be efficiently derived via resolution, an SDCL solver also learns stronger clauses. The initial approach to learning these clauses is based on the so-called *positive reduct*: Given a formula and a partial truth assignment, the positive reduct is a simple propositional formula encoding the question of whether the assignment can be pruned safely from the search space. In cases where the positive reduct is satisfiable, the solver performs the pruning by learning a clause that blocks the assignment.

Although the original SDCL paradigm can solve the hard pigeon-hole formulas, we observe that it is not sophisticated enough to deal with other hard formulas that require exponential-size resolution proofs, such as Tseitin formulas over expander graphs [32, 33] or mutilated chessboard problems [1, 13, 27]. In this paper, we deal with this issue and present techniques that improve the SDCL paradigm. In particular, we introduce new variants of the above-mentioned positive reduct that allow SDCL to prune the search space even more aggressively.

In a first step, we explicitly formalize the notion of a *pruning predicate*: For a formula *F* and a (partial) assignment \(\alpha \), a pruning predicate is a propositional formula that is satisfiable if \(\alpha \) can be pruned in a satisfiability-preserving way. Ideally, a pruning predicate is easily solvable while still pruning the search space as much as possible. We then present two novel pruning predicates of which one, the *filtered positive reduct*, is easier to solve and arguably more useful in practice while the other, the \(\mathsf {PR}\) *reduct*, allows for stronger pruning.

In many applications, it is not enough that a solver just provides a simple yes/no answer. Especially when dealing with mathematical problems or safety-critical systems, solvers are required to provide automatically checkable proofs that certify the correctness of their answers. The current state of the art in proof generation and proof checking is to focus on *clausal proofs*, which are specific sequences of clause additions and clause removals. Besides the requirement that SAT solvers in the main track of the SAT competition must produce such clausal proofs, there also exist corresponding proof checkers whose correctness has been verified by theorem provers, as first proposed in a seminal TACAS’17 paper [12].

We implemented a new SDCL solver, called SaDiCaL, that can solve the pigeon-hole formulas, the Tseitin formulas, and the mutilated chessboard problems due to using the filtered positive reduct. Our solver also produces \(\mathsf {PR}\) proofs [19]. We certify their correctness by translating them via \(\mathsf {DRAT}\) proofs [17] to \(\mathsf {LRAT}\) proofs, which are then validated by a formally verified proof checker [18].

Existing approaches to solving the Tseitin formulas are based on symmetry breaking [14] or algebraic reasoning, in particular Gaussian elimination [3, 9, 21, 31]. However, the respective tools do not output machine-checkable proofs. Moreover, approaches based on symmetry breaking and Gaussian elimination depend strongly on the syntactic structure of formulas to identify symmetries and cardinality constraints, respectively. They are therefore vulnerable to syntactic changes that do not affect the semantics of a formula. In contrast, SDCL reasons on the semantic level, making it less prone to syntactic changes.

The main contributions of this paper are as follows: (1) We explicitly formulate the notion of a pruning predicate, which was used only implicitly in the original formulation of SDCL. (2) We present two novel pruning predicates that generalize the positive reduct. (3) We implemented a new SDCL solver, called SaDiCaL, that uses one of our new pruning predicates. (4) We show by an experimental evaluation that this new pruning predicate enables SaDiCaL to produce short proofs (without new variables) of Tseitin formulas and of mutilated chessboard problems.

## 2 Preliminaries

*Propositional logic.* We consider propositional formulas in *conjunctive normal form* (CNF), which are defined as follows. A *literal* is defined to be either a variable *x* (a *positive literal*) or the negation \(\overline{x}\) of a variable *x* (a *negative literal*). The *complement* \(\overline{l}\) of a literal *l* is defined as \(\overline{l} = \overline{x}\) if \(l = x\) and \(\overline{l} = x\) if \(l = \overline{x}\). Accordingly, for a set *L* of literals, we define \(\overline{L} = \{\overline{l} \mid l \in L\}\). A *clause* is a disjunction of literals. A *formula* is a conjunction of clauses. We view clauses as sets of literals and formulas as sets of clauses. For a set *L* of literals and a formula *F*, we define \(F_L = \{ C \in F \mid C \cap L \ne \emptyset \}\). By \( var (F)\) we denote the variables of a literal, clause, or formula *F*. For convenience, we treat \( var (F)\) as a variable if *F* is a literal, and as a set of variables otherwise.

*Satisfiability.* An *assignment* is a function from a set of variables to the truth values 1 (*true*) and 0 (*false*). An assignment is *total* w.r.t. a formula *F* if it assigns a truth value to all variables \( var (F)\) occurring in *F*; otherwise it is \(partial \). A literal *l* is *satisfied* by an assignment \(\alpha \) if *l* is positive and \(\alpha ( var (l)) = 1\) or if it is negative and \(\alpha ( var (l)) = 0\). A literal is *falsified* by an assignment \(\alpha \) if its complement is satisfied by \(\alpha \). A clause is satisfied by an assignment \(\alpha \) if it contains a literal that is satisfied by \(\alpha \). Finally, a formula is satisfied by an assignment \(\alpha \) if all its clauses are satisfied by \(\alpha \). A formula is *satisfiable* if there exists an assignment that satisfies it. We often denote assignments by sequences of the literals they satisfy. For instance, \(x\overline{y}\) denotes the assignment that assigns \(1\) to *x* and \(0\) to *y*. For an assignment \(\alpha \), \( var (\alpha )\) denotes the variables assigned by \(\alpha \). For a set *L* of non-contradictory literals, we denote by \(\alpha _L\) the assignment obtained from \(\alpha \) by making all literals in *L* true and assigning the same value as \(\alpha \) to other variables not in \( var (L)\).

*Formula simplification.* We refer to the empty clause by \(\bot \). Given an assignment \(\alpha \) and a clause *C*, we define \(C|_{\alpha } = \top \) if \(\alpha \) satisfies *C*; otherwise, \(C|_{\alpha }\) denotes the result of removing from *C* all the literals falsified by \(\alpha \). For a formula *F*, we define \(F|_{\alpha } = \{C|_{\alpha } \mid C \in F \text { and } C|_{\alpha } \ne \top \}\). We say that an assignment \(\alpha \) *touches* a clause *C* if \( var (\alpha ) \cap var (C) \ne \emptyset \). A *unit clause* is a clause with only one literal. The result of applying the *unit clause rule* to a formula *F* is the formula \(F|_{l}\) where (*l*) is a unit clause in *F*. The iterated application of the unit clause rule to a formula *F*, until no unit clauses are left, is called *unit propagation*. If unit propagation yields the empty clause \(\bot \), we say that unit propagation applied to *F* derived a *conflict*.

*Formula relations.* Two formulas are *logically equivalent* if they are satisfied by the same total assignments. Two formulas are *equisatisfiable* if they are either both satisfiable or both unsatisfiable. Furthermore, by \(F \mathrel {\vdash _{^{_{1}}}}G\) we denote that for every clause \((l_1 \vee \dots \vee l_n) \in G\), unit propagation applied to \(F \wedge (\overline{l}_1) \wedge \dots \wedge (\overline{l}_n)\) derives a conflict. If \(F \mathrel {\vdash _{^{_{1}}}}G\), we say that *F* implies *G* via unit propagation. For example, \((\overline{a} \vee c) \wedge (\overline{b} \vee \overline{c})\) implies \((\overline{a} \vee \overline{b})\) via unit propagation since unit propagation derives a conflict on \((\overline{a} \vee c) \wedge (\overline{b} \vee \overline{c}) \wedge (a) \wedge (b)\).
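These notions are easy to prototype. The following Python sketch is an illustration only (not the solver implementation discussed later); it assumes DIMACS-style conventions: literals are nonzero integers, a clause is a frozenset of literals, a formula is a list of clauses, and an assignment is the set of literals it satisfies.

```python
def reduce_clause(clause, alpha):
    """C restricted to alpha: None if alpha satisfies C, else C minus its falsified literals."""
    if any(l in alpha for l in clause):
        return None
    return frozenset(l for l in clause if -l not in alpha)

def restrict(formula, alpha):
    """F restricted to alpha: reduce every clause, dropping the satisfied ones."""
    return [r for r in (reduce_clause(c, alpha) for c in formula) if r is not None]

def unit_propagate(formula):
    """Iterate the unit clause rule; return (conflict_derived, derived_literals)."""
    assign, changed = set(), True
    while changed:
        changed = False
        for c in formula:
            r = reduce_clause(c, assign)
            if r is None:
                continue
            if not r:               # empty clause: conflict derived
                return True, assign
            if len(r) == 1:         # unit clause: make its literal true
                assign |= r
                changed = True
    return False, assign

def implies_up(f, g):
    """F |-1 G: for every clause of G, propagation on F plus its negated literals conflicts."""
    return all(unit_propagate(list(f) + [frozenset({-l}) for l in c])[0] for c in g)

# The example from the text (a=1, b=2, c=3):
# (a' v c) & (b' v c') implies (a' v b') via unit propagation.
F = [frozenset({-1, 3}), frozenset({-2, -3})]
assert implies_up(F, [frozenset({-1, -2})])
```

The same helpers are reused (repeated verbatim, so each snippet stays self-contained) in the later sketches.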

*Conflict-Driven Clause Learning* (*CDCL*). To determine whether a formula is satisfiable, a CDCL solver iteratively performs the following operations (obtained from the pseudo code in Fig. 1 by removing the SDCL-specific lines): First, the solver performs unit propagation until either it derives a conflict or the formula contains no more unit clauses. If it derives a conflict, it analyzes the conflict to learn a clause that prevents it from repeating similar (bad) decisions in the future (“clause learning”). If this learned clause is the (unsatisfiable) empty clause \(\bot \), the solver can conclude that the formula is unsatisfiable. In case it is not the empty clause, the solver revokes some of its variable assignments (“backjumping”) and then repeats the procedure by performing unit propagation. If, however, the solver does not derive a conflict, there are two options: Either all variables are assigned, in which case the solver can conclude that the formula is satisfiable, or there are still unassigned variables, in which case the solver first assigns a truth value to an unassigned variable (the actual variable and the truth value are chosen based on a so-called *decision heuristic*) and then continues by again performing unit propagation. For more details, see the chapter on CDCL [25] in the Handbook of Satisfiability [7].
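The propagate–decide–learn cycle just described can be condensed into a toy Python solver. This is a bare sketch under strong simplifying assumptions (it learns only the clause blocking the current decisions, backtracks chronologically, and decides the first unassigned variable positively); real CDCL solvers instead use watched literals, first-UIP conflict analysis, and non-chronological backjumping.

```python
def reduce_clause(clause, alpha):
    if any(l in alpha for l in clause):
        return None
    return frozenset(l for l in clause if -l not in alpha)

def restrict(formula, alpha):
    return [r for r in (reduce_clause(c, alpha) for c in formula) if r is not None]

def unit_propagate(formula):
    assign, changed = set(), True
    while changed:
        changed = False
        for c in formula:
            r = reduce_clause(c, assign)
            if r is None:
                continue
            if not r:
                return True, assign
            if len(r) == 1:
                assign |= r
                changed = True
    return False, assign

def solve(formula):
    """Toy CDCL loop: propagate; on conflict, learn the clause that blocks the
    current decisions and revoke the last one; otherwise decide a new variable.
    Returns a satisfying assignment, or None once the empty clause is learned."""
    variables = sorted({abs(l) for c in formula for l in c})
    learned = list(formula)
    decisions = []
    while True:
        conflict, units = unit_propagate(restrict(learned, set(decisions)))
        if conflict:
            if not decisions:
                return None                                    # empty clause learned: unsatisfiable
            learned.append(frozenset(-d for d in decisions))   # "clause learning"
            decisions.pop()                                    # (chronological) "backjumping"
        else:
            alpha = set(decisions) | units
            free = [v for v in variables if v not in {abs(l) for l in alpha}]
            if not free:
                return alpha                                   # all variables assigned: satisfiable
            decisions.append(free[0])                          # naive decision heuristic
```

For instance, `solve([frozenset({1, 2}), frozenset({1, -2}), frozenset({-1, 2}), frozenset({-1, -2})])` returns `None`, since the four clauses over two variables are unsatisfiable.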

*Satisfaction-Driven Clause Learning* (*SDCL*). The SDCL algorithm [20], shown in Fig. 1, is a generalization of CDCL that is obtained by adding the SDCL-specific lines to the CDCL algorithm. In CDCL, if unit propagation does not derive a conflict, the solver picks a variable and assigns a truth value to it. In contrast, an SDCL solver does not necessarily assign a new variable in this situation. Instead, it first checks if the current assignment can be pruned from the search space without affecting satisfiability. If so, the solver prunes the assignment by learning a new clause (Fig. 2 illustrates how clause learning can prune the search space). This clause is returned by the AnalyzeWitness() function and usually consists of the decision literals of the current assignment (although other ways of computing the clause are possible, cf. [20]). If the assignment cannot be pruned, the solver proceeds by assigning a new variable—just as in CDCL. To check if the current assignment can be pruned, the solver produces a propositional formula that should be easier to solve than the original formula and that can only be satisfiable if the assignment can be pruned. Thus, an SDCL solver solves several easier formulas in order to solve a hard formula. In this paper, we call these easier formulas *pruning predicates*. We first formalize the pruning predicate used in the original SDCL paper before we introduce more powerful pruning predicates.

## 3 Pruning Predicates and Redundant Clauses

As already explained informally, a pruning predicate is a propositional formula whose satisfiability guarantees that an assignment can be pruned from the search space. The actual pruning is then performed by adding the clause that *blocks* the assignment (or a subclause of this clause, as explained in detail later):

### Definition 1

Given an assignment \(\alpha = a_1 \dots a_k\), the clause \((\overline{a}_1 \vee \dots \vee \overline{a}_k)\) is the clause that *blocks* \(\alpha \).

The clause that blocks \(\alpha \) is thus the unique maximal clause falsified by \(\alpha \). Based on this notion, we define pruning predicates as follows:
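With the set-of-satisfied-literals representation of assignments used in our earlier sketches (DIMACS-style integer literals, an illustration only), the blocking clause is a one-liner:

```python
def blocking_clause(alpha):
    """The clause that blocks alpha: the disjunction of the complements of its literals."""
    return frozenset(-l for l in alpha)

# alpha = x y' z (x=1, y=2, z=3) is blocked by the clause (x' v y v z').
assert blocking_clause({1, -2, 3}) == frozenset({-1, 2, -3})
```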

### Definition 2

Let *F* be a formula and *C* the clause that blocks a given (partial) assignment \(\alpha \). A *pruning predicate* for *F* and \(\alpha \) is a formula \(P_{\alpha }(F)\) such that the following holds: if \(P_{\alpha }(F)\) is satisfiable, then *F* and \(F \wedge C\) are equisatisfiable.

Thus, if a pruning predicate for a formula *F* and an assignment \(\alpha \) is satisfiable, we can add the clause that blocks \(\alpha \) to *F* without affecting satisfiability. We thus say that this clause is *redundant* with respect to *F*. In the paper that introduces SDCL [20], the so-called *positive reduct* (see Definition 3 below) is used as a pruning predicate. The positive reduct is obtained from satisfied clauses of the original formula by removing unassigned literals.

Given a clause *C* and an assignment \(\alpha \), we write \(\mathsf {touched}_{\alpha }(C)\) to denote the subclause of *C* that contains exactly the literals assigned by \(\alpha \). Analogously, we denote by \(\mathsf {untouched}_{\alpha }(C)\) the subclause of *C* that contains the literals *not* assigned by \(\alpha \) [20].

### Definition 3

Let *F* be a formula and \(\alpha \) an assignment. Then, the *positive reduct* \(\mathsf {p}_{\alpha }(F)\) of *F* and \(\alpha \) is the formula \(G \wedge C\) where *C* is the clause that blocks \(\alpha \) and \(G = \{\mathsf {touched}_{\alpha }(D) \mid D \in F \text { and } \alpha \text { satisfies } D\}\).

### Example 1

Let \(F = (x \vee \overline{y} \vee z) \wedge (w \vee \overline{y}) \wedge (\overline{w} \vee \overline{z})\) and \(\alpha = x\,y\,\overline{z}\). Then, the positive reduct \(\mathsf {p}_{\alpha }(F)\) of *F* w.r.t. \(\alpha \) is the formula \((x \vee \overline{y} \vee z) \wedge (\overline{z}) \wedge (\overline{x} \vee \overline{y} \vee z)\).
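Definition 3 can be computed directly with the representation from our earlier sketches (an illustration, assuming DIMACS-style integer literals); the snippet below reproduces Example 1:

```python
def touched(clause, alpha):
    """Subclause of the literals whose variable is assigned by alpha."""
    avars = {abs(l) for l in alpha}
    return frozenset(l for l in clause if abs(l) in avars)

def blocking_clause(alpha):
    return frozenset(-l for l in alpha)

def positive_reduct(formula, alpha):
    """p_alpha(F): touched parts of the clauses satisfied by alpha, plus the blocking clause."""
    g = [touched(c, alpha) for c in formula if any(l in alpha for l in c)]
    return g + [blocking_clause(alpha)]

# Example 1: F = (x v y' v z) & (w v y') & (w' v z'), alpha = x y z'
# (x=1, y=2, z=3, w=4).
F = [frozenset({1, -2, 3}), frozenset({4, -2}), frozenset({-4, -3})]
assert set(positive_reduct(F, {1, 2, -3})) == {
    frozenset({1, -2, 3}),   # touched part of the first clause
    frozenset({-3}),         # touched part of the third clause
    frozenset({-1, -2, 3}),  # the clause that blocks alpha
}
```

The second clause \((w \vee \overline{y})\) contributes nothing because \(\alpha \) does not satisfy it.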

The positive reduct is satisfiable if and only if the clause that blocks \(\alpha \) is a *set-blocked clause* [23] (\(\mathsf {SET}\) clause for short) with respect to *F*. Since the addition of set-blocked clauses to a formula preserves satisfiability, it follows that the positive reduct is a pruning predicate. Moreover, since the problem of deciding whether a given clause is a set-blocked clause is \(\mathsf {NP}\)-complete, it is natural to use a SAT solver for finding set-blocked clauses.

Although set-blocked clauses can be found efficiently with the positive reduct, there are more general kinds of clauses whose addition can prune the search space more aggressively, namely *propagation-redundant clauses* (\(\mathsf {PR}\) clauses) and their subclass of *set-propagation-redundant clauses* (\(\mathsf {SPR}\) clauses) [19].

In the following, we thus introduce two different kinds of pruning predicates. Given a formula *F* and an assignment \(\alpha \), the first pruning predicate, called the *filtered positive reduct*, is satisfiable if and only if the clause that blocks \(\alpha \) is an \(\mathsf {SPR}\) clause in *F*. The second pruning predicate, called the \(\mathsf {PR}\) *reduct*, is satisfiable if and only if the clause that blocks \(\alpha \) is a \(\mathsf {PR}\) clause; it allows us to prune more assignments than the filtered positive reduct but it is also harder to solve. The relationship between the redundant clause types and pruning predicates is shown in Fig. 3. According to [19], the definition of \(\mathsf {PR}\) clauses is as follows:

### Definition 4

Let *F* be a formula, *C* a clause, and \(\alpha \) the assignment blocked by *C*. Then, *C* is *propagation redundant* (\(\mathsf {PR}\)) with respect to *F* if there exists an assignment \(\omega \) such that \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\omega }\) and \(\omega \) satisfies *C*.

The clause *C* can be seen as a constraint that prunes all assignments that extend \(\alpha \) from the search space. Since \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\omega }\) implies that \(F|_{\alpha }\) logically entails \(F|_{\omega }\), every assignment that satisfies \(F|_{\alpha }\) also satisfies \(F|_{\omega }\), and so we say that *F* is *at least as satisfiable* under \(\omega \) as it is under \(\alpha \). Moreover, since \(\omega \) satisfies *C*, it must disagree with \(\alpha \). Consider the following example [19]:

### Example 2

Let \(F = (x \vee y) \wedge (\overline{x} \vee y) \wedge (\overline{x} \vee z)\) be a formula, \(C = (x)\) a clause, and \(\omega = xz\) an assignment. Then, \(\alpha = \overline{x}\) is the assignment blocked by *C*. Now, consider \(F|_{\alpha } = (y)\) and \(F|_{\omega } = (y)\). Since unit propagation clearly derives a conflict on \(F|_{\alpha } \wedge (\overline{y})\), we have \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\omega }\) and thus *C* is propagation redundant with respect to *F*.
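Definition 4 can be checked mechanically for a candidate witness \(\omega \). The sketch below (an illustration with DIMACS-style integer literals, repeating the propagation helpers so it is self-contained) verifies Example 2:

```python
def reduce_clause(clause, alpha):
    if any(l in alpha for l in clause):
        return None
    return frozenset(l for l in clause if -l not in alpha)

def restrict(formula, alpha):
    return [r for r in (reduce_clause(c, alpha) for c in formula) if r is not None]

def unit_propagate(formula):
    assign, changed = set(), True
    while changed:
        changed = False
        for c in formula:
            r = reduce_clause(c, assign)
            if r is None:
                continue
            if not r:
                return True, assign
            if len(r) == 1:
                assign |= r
                changed = True
    return False, assign

def implies_up(f, g):
    return all(unit_propagate(list(f) + [frozenset({-l}) for l in c])[0] for c in g)

def is_pr_witness(formula, clause, omega):
    """Definition 4: omega satisfies the clause and F|alpha |-1 F|omega,
    where alpha is the assignment blocked by the clause."""
    alpha = {-l for l in clause}
    return (any(l in omega for l in clause)
            and implies_up(restrict(formula, alpha), restrict(formula, omega)))

# Example 2: F = (x v y) & (x' v y) & (x' v z), C = (x), omega = x z  (x=1, y=2, z=3).
F = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({-1, 3})]
assert is_pr_witness(F, frozenset({1}), {1, 3})
```

Note that \(\omega = x\) alone is not a witness here: \(F|_{\omega }\) then contains \((z)\), which unit propagation cannot derive from \(F|_{\alpha } = (y)\).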

The key property of propagation-redundant clauses is that their addition to a formula preserves satisfiability [19]. A strict subclass of propagation-redundant clauses are set-propagation-redundant clauses, which have the additional requirement that \(\omega \) must assign the same variables as \(\alpha \). For the following definition, recall (from the preliminaries) that \(\alpha _L\) denotes the assignment obtained from \(\alpha \) by assigning \(1\) to the literals in *L* [19]:

### Definition 5

Let *F* be a formula, *C* a clause, and \(\alpha \) the assignment blocked by *C*. Then, *C* is *set-propagation redundant* (\(\mathsf {SPR}\)) with respect to *F* if it contains a non-empty set *L* of literals such that \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\alpha _L}\).

If \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\alpha _L}\), we say *C* is \(\mathsf {SPR}\) *by* *L* with respect to *F*.

### Example 3

Let \(F = (x \vee y) \wedge (x \vee \overline{y} \vee z) \wedge (\overline{x} \vee z) \wedge (\overline{x} \vee u) \wedge (\overline{u} \vee x)\), \(C = x \vee u\), and \(L = \{x, u\}\). Then, \(\alpha = \overline{x}\overline{u}\) is the assignment blocked by *C*, and \(\alpha _L = xu\). Now, consider \(F|_{\alpha } = (y) \wedge (\overline{y} \vee z)\) and \(F|_{\alpha _L} = (z)\). Clearly, \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\alpha _L}\) and so *C* is set-propagation redundant by *L* with respect to *F*.
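The \(\mathsf {SPR}\) condition differs from the \(\mathsf {PR}\) check only in that the candidate witness is \(\alpha _L\) itself. A small sketch (illustrative only, DIMACS-style integer literals, helpers repeated for self-containment) confirms Example 3:

```python
def reduce_clause(clause, alpha):
    if any(l in alpha for l in clause):
        return None
    return frozenset(l for l in clause if -l not in alpha)

def restrict(formula, alpha):
    return [r for r in (reduce_clause(c, alpha) for c in formula) if r is not None]

def unit_propagate(formula):
    assign, changed = set(), True
    while changed:
        changed = False
        for c in formula:
            r = reduce_clause(c, assign)
            if r is None:
                continue
            if not r:
                return True, assign
            if len(r) == 1:
                assign |= r
                changed = True
    return False, assign

def implies_up(f, g):
    return all(unit_propagate(list(f) + [frozenset({-l}) for l in c])[0] for c in g)

def flip(alpha, L):
    """alpha_L: make the literals in L true, keep alpha on the other variables."""
    return {l for l in alpha if -l not in L} | set(L)

def is_spr_by(formula, clause, L):
    """Definition 5: F|alpha |-1 F|alpha_L, where alpha is blocked by the clause."""
    alpha = {-l for l in clause}
    return implies_up(restrict(formula, alpha), restrict(formula, flip(alpha, L)))

# Example 3: F as above with x=1, y=2, z=3, u=4; C = (x v u), L = {x, u}.
F = [frozenset({1, 2}), frozenset({1, -2, 3}), frozenset({-1, 3}),
     frozenset({-1, 4}), frozenset({-4, 1})]
assert is_spr_by(F, frozenset({1, 4}), {1, 4})
```

Neither singleton subset of *L* works here: flipping only *x* or only *u* falsifies one of the last two clauses of *F* without \(F|_{\alpha }\) deriving a conflict.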

Most known types of redundant clauses are \(\mathsf {SPR}\) clauses [19]. This includes *blocked clauses* [24], *set-blocked clauses* [23], *resolution asymmetric tautologies* (\(\mathsf {RAT}\)s) [22], and many more. By introducing pruning predicates that allow us to add \(\mathsf {SPR}\) clauses and even \(\mathsf {PR}\) clauses to a formula, we thus allow for more effective pruning than with the positive reduct originally used in SDCL. We start by presenting our new *filtered* positive reduct.

## 4 The Filtered Positive Reduct

The original positive reduct of a formula *F* and an assignment \(\alpha \) is obtained by first taking all clauses of *F* that are satisfied by \(\alpha \) and then removing from these clauses the literals that are not touched (assigned) by \(\alpha \). The resulting clauses are then conjoined with the clause *C* that blocks \(\alpha \). We obtain the *filtered* positive reduct by not taking *all* satisfied clauses of *F* but only those whose untouched part is not implied by \(F|_{\alpha }\) via unit propagation:

### Definition 6

Let *F* be a formula and \(\alpha \) an assignment. Then, the *filtered positive reduct* \(\mathsf {f}_{\alpha }(F)\) of *F* and \(\alpha \) is the formula \(G \wedge C\) where *C* is the clause that blocks \(\alpha \) and \(G = \{\mathsf {touched}_{\alpha }(D) \mid D \in F \text { and } F|_{\alpha } \mathrel {\nvdash _{^{_{1}}}} \mathsf {untouched}_{\alpha }(D)\}\).

Clearly, the filtered positive reduct is a subset of the positive reduct because \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} \mathsf {untouched}_{\alpha }(D)\) holds for every clause \(D \in F\) that is not satisfied by \(\alpha \). To see this, suppose \(\alpha \) does not satisfy *D*. Then, \(\mathsf {untouched}_{\alpha }(D) = D|_{\alpha }\) is contained in \(F|_{\alpha }\) and since \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} D|_{\alpha }\), it follows that \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} \mathsf {untouched}_{\alpha }(D)\). Therefore, the filtered positive reduct is obtained from the positive reduct by removing (“filtering”) every clause \(D' = \mathsf {touched}_{\alpha }(D)\) such that \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} \mathsf {untouched}_{\alpha }(D)\).

The following example illustrates how the filtered positive reduct allows us to prune assignments that cannot be pruned when using only the positive reduct:

### Example 4

Let \(F = (x \vee y) \wedge (\overline{x} \vee y)\) and consider the assignment \(\alpha = x\). The positive reduct \(\mathsf {p}_{\alpha }(F) = (x) \wedge (\overline{x})\) is unsatisfiable and so it does not allow us to prune \(\alpha \). In contrast, the filtered positive reduct \(\mathsf {f}_{\alpha }(F) = (\overline{x})\), obtained by filtering out the clause (*x*), is satisfied by the assignment \(\overline{x}\). The clause (*x*) is not contained in the filtered reduct because \(\mathsf {untouched}_{\alpha }(x \vee y) = (y)\) and \(F|_{\alpha } = (y)\), which implies \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} (y)\). Note that the clause \((\overline{x})\) is contained both in the positive reduct and in the filtered positive reduct since it blocks \(\alpha \).
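Both reducts of Example 4 can be computed with the sketch below (an illustration with DIMACS-style integer literals; the shared helpers are repeated so the snippet is self-contained):

```python
def reduce_clause(clause, alpha):
    if any(l in alpha for l in clause):
        return None
    return frozenset(l for l in clause if -l not in alpha)

def restrict(formula, alpha):
    return [r for r in (reduce_clause(c, alpha) for c in formula) if r is not None]

def unit_propagate(formula):
    assign, changed = set(), True
    while changed:
        changed = False
        for c in formula:
            r = reduce_clause(c, assign)
            if r is None:
                continue
            if not r:
                return True, assign
            if len(r) == 1:
                assign |= r
                changed = True
    return False, assign

def implies_up(f, g):
    return all(unit_propagate(list(f) + [frozenset({-l}) for l in c])[0] for c in g)

def touched(clause, alpha):
    avars = {abs(l) for l in alpha}
    return frozenset(l for l in clause if abs(l) in avars)

def untouched(clause, alpha):
    avars = {abs(l) for l in alpha}
    return frozenset(l for l in clause if abs(l) not in avars)

def blocking_clause(alpha):
    return frozenset(-l for l in alpha)

def positive_reduct(formula, alpha):
    g = [touched(c, alpha) for c in formula if any(l in alpha for l in c)]
    return g + [blocking_clause(alpha)]

def filtered_positive_reduct(formula, alpha):
    """f_alpha(F): keep touched(D) only if F|alpha does not derive untouched(D) by UP."""
    f_alpha = restrict(formula, alpha)
    g = [touched(c, alpha) for c in formula
         if not implies_up(f_alpha, [untouched(c, alpha)])]
    return g + [blocking_clause(alpha)]

# Example 4: F = (x v y) & (x' v y), alpha = x  (x=1, y=2).
F = [frozenset({1, 2}), frozenset({-1, 2})]
assert set(positive_reduct(F, {1})) == {frozenset({1}), frozenset({-1})}  # unsatisfiable
assert set(filtered_positive_reduct(F, {1})) == {frozenset({-1})}         # satisfiable
```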

The filtered positive reduct has a useful property: If a non-empty assignment \(\alpha \) falsifies a formula *F*, then the filtered positive reduct \(\mathsf {f}_{\alpha }(F)\) is satisfiable. To see this, observe that \(\bot \in F|_{\alpha }\) and so \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} \mathsf {untouched}_{\alpha }(D)\) for every clause \(D \in F\) because unit propagation derives a conflict on \(F|_{\alpha }\) alone (note that this also holds if \(\mathsf {untouched}_{\alpha }(D)\) is the empty clause \(\bot \)). Therefore, \(\mathsf {f}_{\alpha }(F)\) contains only the clause that blocks \(\alpha \), which is clearly satisfiable. The ordinary positive reduct does not have this property.

Note that the filtered positive reduct contains only variables of \( var (\alpha )\). Since it also contains the clause that blocks \(\alpha \), any satisfying assignment of the filtered positive reduct must disagree with \(\alpha \) on at least one literal. Hence, every satisfying assignment of the filtered positive reduct is of the form \(\alpha _L\) where *L* is a set of literals that are contained in the clause that blocks \(\alpha \). With the filtered positive reduct, we can identify exactly the clauses that are set-propagation redundant with respect to a formula:

### Theorem 1

Let *F* be a formula, \(\alpha \) an assignment, and *C* the clause that blocks \(\alpha \). Then, *C* is \(\mathsf {SPR}\) by an \(L \subseteq C\) with respect to *F* if and only if the assignment \(\alpha _L\) satisfies the filtered positive reduct \(\mathsf {f}_{\alpha }(F)\).

### Proof

For the “only if” direction, suppose *C* is \(\mathsf {SPR}\) by an \(L \subseteq C\) in *F*, meaning that \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\alpha _L}\). We show that \(\alpha _L\) satisfies all clauses of \(\mathsf {f}_{\alpha }(F)\). Let therefore \(D' \in \mathsf {f}_{\alpha }(F)\). By definition, \(D'\) is either the clause that blocks \(\alpha \) or it is of the form \(\mathsf {touched}_{\alpha }(D)\) for some clause \(D \in F\) such that \(F|_{\alpha } \mathrel {\nvdash _{^{_{1}}}} \mathsf {untouched}_{\alpha }(D)\). In the former case, \(D'\) is clearly satisfied by \(\alpha _L\) since \(\alpha _L\) must disagree with \(\alpha \). In the latter case, since \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\alpha _L}\), it follows that either \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} D|_{\alpha _L}\) or \(\alpha _L\) satisfies *D*. Now, if \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} D|_{\alpha _L}\), it cannot be the case that \(\alpha _L\) does not satisfy *D*: since \( var (\alpha _L) = var (\alpha )\), we would then have \(D|_{\alpha _L} = \mathsf {untouched}_{\alpha }(D)\), which would imply \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} \mathsf {untouched}_{\alpha }(D)\). Therefore, \(\alpha _L\) must satisfy *D*. But then \(\alpha _L\) must satisfy \(D' = \mathsf {touched}_{\alpha }(D)\), again since \( var (\alpha _L) = var (\alpha )\).

For the “if” direction, assume that \(\alpha _L\) satisfies the filtered positive reduct \(\mathsf {f}_{\alpha }(F)\). We show that \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\alpha _L}\). Let \(D|_{\alpha _L} \in F|_{\alpha _L}\). Since \(D|_{\alpha _L}\) is contained in \(F|_{\alpha _L}\), we know that \(\alpha _L\) does not satisfy *D* and so it does not satisfy \(\mathsf {touched}_{\alpha }(D)\). Hence, \(\mathsf {touched}_{\alpha }(D)\) cannot be contained in \(\mathsf {f}_{\alpha }(F)\), implying that \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} \mathsf {untouched}_{\alpha }(D)\). But \(\mathsf {untouched}_{\alpha }(D) = D|_{\alpha _L}\) since \( var (\alpha _L) = var (\alpha )\) and thus it follows that \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} D|_{\alpha _L}\). \(\square \)
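Theorem 1 can also be confirmed empirically on small instances by enumerating every non-empty \(L \subseteq C\) and comparing the two sides of the equivalence. The sketch below (illustrative only, DIMACS-style integer literals, helpers repeated for self-containment) does so for the formula of Example 3:

```python
from itertools import combinations

def reduce_clause(clause, alpha):
    if any(l in alpha for l in clause):
        return None
    return frozenset(l for l in clause if -l not in alpha)

def restrict(formula, alpha):
    return [r for r in (reduce_clause(c, alpha) for c in formula) if r is not None]

def unit_propagate(formula):
    assign, changed = set(), True
    while changed:
        changed = False
        for c in formula:
            r = reduce_clause(c, assign)
            if r is None:
                continue
            if not r:
                return True, assign
            if len(r) == 1:
                assign |= r
                changed = True
    return False, assign

def implies_up(f, g):
    return all(unit_propagate(list(f) + [frozenset({-l}) for l in c])[0] for c in g)

def touched(clause, alpha):
    avars = {abs(l) for l in alpha}
    return frozenset(l for l in clause if abs(l) in avars)

def untouched(clause, alpha):
    avars = {abs(l) for l in alpha}
    return frozenset(l for l in clause if abs(l) not in avars)

def blocking_clause(alpha):
    return frozenset(-l for l in alpha)

def filtered_positive_reduct(formula, alpha):
    f_alpha = restrict(formula, alpha)
    g = [touched(c, alpha) for c in formula
         if not implies_up(f_alpha, [untouched(c, alpha)])]
    return g + [blocking_clause(alpha)]

def flip(alpha, L):
    return {l for l in alpha if -l not in L} | set(L)

def theorem1_check(formula, alpha):
    """For every non-empty L inside the blocking clause C, verify that
    F|alpha |-1 F|alpha_L holds iff alpha_L satisfies f_alpha(F)."""
    C = sorted(blocking_clause(alpha))
    f = filtered_positive_reduct(formula, alpha)
    for k in range(1, len(C) + 1):
        for L in combinations(C, k):
            a_L = flip(alpha, set(L))
            spr = implies_up(restrict(formula, alpha), restrict(formula, a_L))
            sat = all(any(l in a_L for l in c) for c in f)
            if spr != sat:
                return False
    return True

# Formula and assignment from Example 3 (x=1, y=2, z=3, u=4; alpha = x'u').
F = [frozenset({1, 2}), frozenset({1, -2, 3}), frozenset({-1, 3}),
     frozenset({-1, 4}), frozenset({-4, 1})]
assert theorem1_check(F, {-1, -4})
```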

When the (ordinary) positive reduct is used for SDCL solving, the following property holds [20]: Assume the solver has a current assignment \(\alpha = \alpha _d \cup \alpha _u\) where \(\alpha _d\) consists of all the assignments that were made by the decision heuristic and \(\alpha _u\) consists of all assignments that were derived via unit propagation. If the solver then finds that the positive reduct of its formula and the assignment \(\alpha \) is satisfiable, it can learn the clause that blocks \(\alpha _d\) instead of the longer clause that blocks \(\alpha \), thus pruning the search space more effectively. This is allowed because the clause that blocks \(\alpha _d\) is guaranteed to be propagation redundant.

The same holds for the filtered positive reduct and the argument is analogous to the earlier one [20]: Assume the filtered positive reduct of *F* and \(\alpha = \alpha _d \cup \alpha _u\) is satisfiable. Then, the clause that blocks \(\alpha \) is set-propagation redundant with respect to *F* and thus there exists an assignment \(\alpha _L\) such that \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\alpha _L}\). But then, since unit propagation derives all the assignments of \(\alpha _u\) from \(F|_{\alpha _d}\), it must also hold that \(F|_{\alpha _d} \mathrel {\vdash _{^{_{1}}}} F|_{\alpha _L}\), and so the clause that blocks \(\alpha _d\) is propagation redundant with respect to *F* and (witness) assignment \(\omega = \alpha _L\).

Finally, observe that the filtered positive reducts \(\mathsf {f}_{\alpha _d}(F)\) and \(\mathsf {f}_{\alpha }(F)\) are not always equisatisfiable. To see this, consider the formula \(F = (\overline{x} \vee y) \wedge (x \vee \overline{y})\) and the assignments \(\alpha = x\, y\) and \(\alpha _d = x\). Clearly, the unit clause (*y*) is derived from \(F|_{\alpha _d}\) by unit propagation. Now, observe that \(\mathsf {f}_{\alpha }(F)\) is satisfiable while \(\mathsf {f}_{\alpha _d}(F)\) is unsatisfiable. It thus makes sense to first compute the filtered positive reduct with respect to \(\alpha \) and then—in case it is satisfiable—remove the propagated literals to obtain a shorter clause.
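This non-equisatisfiability is easy to reproduce with the earlier representation (an illustrative sketch with DIMACS-style integer literals and a brute-force satisfiability check, repeated helpers for self-containment):

```python
from itertools import product

def reduce_clause(clause, alpha):
    if any(l in alpha for l in clause):
        return None
    return frozenset(l for l in clause if -l not in alpha)

def restrict(formula, alpha):
    return [r for r in (reduce_clause(c, alpha) for c in formula) if r is not None]

def unit_propagate(formula):
    assign, changed = set(), True
    while changed:
        changed = False
        for c in formula:
            r = reduce_clause(c, assign)
            if r is None:
                continue
            if not r:
                return True, assign
            if len(r) == 1:
                assign |= r
                changed = True
    return False, assign

def implies_up(f, g):
    return all(unit_propagate(list(f) + [frozenset({-l}) for l in c])[0] for c in g)

def touched(clause, alpha):
    avars = {abs(l) for l in alpha}
    return frozenset(l for l in clause if abs(l) in avars)

def untouched(clause, alpha):
    avars = {abs(l) for l in alpha}
    return frozenset(l for l in clause if abs(l) not in avars)

def blocking_clause(alpha):
    return frozenset(-l for l in alpha)

def filtered_positive_reduct(formula, alpha):
    f_alpha = restrict(formula, alpha)
    g = [touched(c, alpha) for c in formula
         if not implies_up(f_alpha, [untouched(c, alpha)])]
    return g + [blocking_clause(alpha)]

def is_sat(formula):
    """Brute-force satisfiability over the variables occurring in the formula."""
    vs = sorted({abs(l) for c in formula for l in c})
    return any(all(any(l in {v * b for v, b in zip(vs, bits)} for l in c) for c in formula)
               for bits in product((1, -1), repeat=len(vs)))

# F = (x' v y) & (x v y'), alpha = x y, alpha_d = x  (x=1, y=2).
F = [frozenset({-1, 2}), frozenset({1, -2})]
assert is_sat(filtered_positive_reduct(F, {1, 2}))    # f_alpha(F) is satisfiable
assert not is_sat(filtered_positive_reduct(F, {1}))   # f_alpha_d(F) is not
```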

## 5 The PR Reduct

We showed in the previous section that the filtered positive reduct characterizes precisely the set-propagation-redundant clauses. Since set-propagation-redundant clauses are a subset of propagation-redundant clauses [19], it is natural to search for an encoding that characterizes the propagation-redundant clauses, which could possibly lead to an even more aggressive pruning of the search space. As we will see in the following, such an encoding must necessarily be large because it has to reason over all possible clauses of a formula. We thus believe that it is hardly useful for practical SDCL solving.

The positive reduct and the filtered positive reduct yield small formulas that can be easily solved in practice. The downside, however, is that nothing can be learned from their unsatisfiability. This is different for a pruning predicate that encodes propagation redundancy:

### Theorem 2

If a clause \(l_1 \vee \dots \vee l_k\) is not propagation redundant with respect to a formula *F*, then *F* implies \(\overline{l}_1 \wedge \dots \wedge \overline{l}_k\).

### Proof

Assume \(l_1 \vee \dots \vee l_k\) is not propagation redundant with respect to *F*, or equivalently that all assignments \(\omega \) with \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\omega }\), where \(\alpha = \overline{l}_1 \dots \overline{l}_k\) is the assignment blocked by the clause, agree with \(\overline{l}_1 \dots \overline{l}_k\). Then, no assignment that disagrees with \(\overline{l}_1 \dots \overline{l}_k\) can satisfy *F*: if an assignment \(\omega \) satisfied *F*, then \(F|_{\omega }\) would contain no clauses and so \(F|_{\alpha } \mathrel {\vdash _{^{_{1}}}} F|_{\omega }\) would hold trivially, implying that \(\omega \) agrees with \(\overline{l}_1 \dots \overline{l}_k\). As a consequence, *F* implies \(\overline{l}_1 \wedge \dots \wedge \overline{l}_k\). \(\square \)

By solving a pruning predicate for propagation-redundant clauses, we thus not only detect if the current assignment can be pruned (in case the predicate is satisfiable), but also if the formula can only possibly be satisfied by extensions of the current assignment (in case the predicate is unsatisfiable). Solving such a pruning predicate, however, requires reasoning about the whole formula; this is in contrast to the positive reduct and the filtered positive reduct, which often only need to consider a small subpart of the original formula. We thus believe that such an encoding is not useful in practice. In the following, we present a possible encoding which—due to the above reasons—we did not evaluate in practice. Nevertheless, performing such an evaluation is still part of our future work.

The \(\mathsf {PR}\) reduct makes use of the *dual-rail encoding* [8, 10, 30]. In the dual-rail encoding, a given variable *x* is replaced by two new variables \(x^{p}\) and \(x^{n}\). The intuitive idea is that \(x^{p}\) is true whenever the original variable *x* is supposed to be true and \(x^{n}\) is true whenever *x* is supposed to be false. If both \(x^{p}\) and \(x^{n}\) are false, then *x* is supposed to be unassigned. Finally, \(x^{p}\) and \(x^{n}\) cannot be true at the same time. Thus, the *dual-rail encodings* of a clause are defined as follows: Let \(C = P \vee N\) be a clause with \(P = x_1 \vee \dots \vee x_k\) containing only positive literals and \(N = \overline{x}_{k+1} \vee \dots \vee \overline{x}_{m}\) containing only negative literals. Further, let \(x_1^{p}, x_1^{n}, \dots , x_m^{p}, x_m^{n}\) be new variables. Then, the *positive dual-rail encoding* \(C^{p}\) of *C* is the clause \(x_1^{p} \vee \dots \vee x_k^{p} \vee x_{k+1}^{n} \vee \dots \vee x_{m}^{n}\) and the *negative dual-rail encoding* \(C^{n}\) of *C* is the clause \(x_1^{n} \vee \dots \vee x_k^{n} \vee x_{k+1}^{p} \vee \dots \vee x_{m}^{p}\).
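With a concrete variable numbering (an assumption for illustration: over variables \(1, \dots, n\), we encode \(x^{p}\) as variable *x* and \(x^{n}\) as variable \(x + n\)), both dual-rail encodings of a clause are one-liners:

```python
def dual_rail(clause, n):
    """Positive and negative dual-rail encodings of a clause over variables 1..n.
    Numbering assumption: x^p is encoded as variable x, x^n as variable x + n."""
    pos = frozenset(l if l > 0 else -l + n for l in clause)       # C^p
    neg = frozenset(l + n if l > 0 else -l for l in clause)       # C^n
    return pos, neg

# C = (x1 v x2') over n = 2 variables: C^p = (x1^p v x2^n), C^n = (x1^n v x2^p).
pos, neg = dual_rail(frozenset({1, -2}), 2)
assert pos == frozenset({1, 4})   # x1^p = 1, x2^n = 4
assert neg == frozenset({3, 2})   # x1^n = 3, x2^p = 2
```

Note that all literals of \(C^{p}\) and \(C^{n}\) are positive occurrences of the fresh variables.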

### Definition 7

Let *F* be a formula and \(\alpha \) an assignment. Then, the \(\mathsf {PR}\) *reduct* \(\mathsf {pr}_{\alpha }(F)\) of *F* and \(\alpha \) is the formula \(G \wedge C\) where *C* is the clause that blocks \(\alpha \) and *G* is the union of the following sets of clauses, where all the \(s_i\) are new variables:

In the last set, if *L* is empty, we obtain a unit clause with the literal \(s_i\).

We thus keep all the variables assigned by \(\alpha \) but introduce the dual-rail variants for variables of *F* not assigned by \(\alpha \). The clauses of the form \(\overline{x^{p}} \vee \overline{x^{n}}\) ensure that for a variable *x*, the two variables \(x^{p}\) and \(x^{n}\) cannot be true at the same time.

A satisfying assignment of the \(\mathsf {PR}\) reduct encodes a witness for *F*: from a satisfying assignment \(\tau \) of the \(\mathsf {PR}\) reduct we obtain an assignment \(\omega \) over the variables of the original formula *F* as follows: \(\omega (x) = \tau (x)\) if \(x \in var (\alpha )\); \(\omega (x) = 1\) if \(\tau (x^{p}) = 1\); \(\omega (x) = 0\) if \(\tau (x^{n}) = 1\); and \(\omega (x)\) is undefined if \(\tau (x^{p}) = \tau (x^{n}) = 0\).

To show that the clause that blocks \(\alpha \) is propagation redundant with respect to *F* if the \(\mathsf {PR}\) reduct of *F* and \(\alpha \) is satisfiable, we use the following lemma:

### Lemma 1

Let *F* be a formula and let \(\alpha \) and \(\omega \) be two assignments such that \(F|_{\alpha } \vdash _{1} F|_{\omega }\). Then, \(F|_{\alpha } \vdash _{1} F|_{\omega x}\) for every literal *x* such that \( var (x) \in var (\alpha )\).

### Proof

Let \(D|_{\omega x} \in F|_{\omega x}\) with \(D \in F\). We show that \(F|_{\alpha } \vdash _{1} D|_{\omega x}\). Clearly, \(x \notin D\) for otherwise \(\omega x\) would satisfy *D*, which would imply \(D|_{\omega x} \notin F|_{\omega x}\). Therefore, the only possible difference between \(D|_{\omega }\) and \(D|_{\omega x}\) is that \(\overline{x}\) is contained in \(D|_{\omega }\) but not in \(D|_{\omega x}\). Now, since \( var (x) \in var (\alpha )\), we know that \( var (x)\) does not occur in \(F|_{\alpha }\). But then, \(F|_{\alpha } \vdash _{1} D|_{\omega }\) if and only if \(F|_{\alpha } \vdash _{1} D|_{\omega x}\) and thus \(F|_{\alpha } \vdash _{1} D|_{\omega x}\). \(\square \)

We can now show that the \(\mathsf {PR}\) reduct precisely characterizes the propagation-redundant clauses:

### Theorem 3

Let *F* be a formula, \(\alpha \) an assignment, and *C* the clause that blocks \(\alpha \). Then, *C* is propagation redundant with respect to *F* if and only if the \(\mathsf {PR}\) reduct \(\mathsf {pr}_{\alpha }(F)\) of *F* and \(\alpha \) is satisfiable.

### Proof

For the “only if” direction, assume that *C* is propagation redundant with respect to *F*, meaning that there exists an assignment \(\omega \) such that \(\omega \) satisfies *C* and \(F|_{\alpha } \vdash _{1} F|_{\omega }\). By Lemma 1, we can without loss of generality assume that \( var (\alpha ) \subseteq var (\omega )\). Now consider the assignment \(\tau \) that corresponds to \(\omega \) as explained before Lemma 1. We show that \(\tau \) satisfies \(\mathsf {pr}_{\alpha }(F)\). The clause *C* that blocks \(\alpha \) is contained in \(\mathsf {pr}_{\alpha }(F)\); since \(\omega \) satisfies *C* and all variables of *C* are assigned by \(\alpha \), \(\tau \) satisfies *C* as well. Also, by construction, \(\tau \) never satisfies both \(x^{p}\) and \(x^{n}\) for a variable *x* and so it satisfies the clauses \(\overline{x^{p}} \vee \overline{x^{n}}\). If, for a clause \(\overline{s}_i \vee \mathsf {touched}_{\alpha }(D_i) \vee \mathsf {untouched}_{\alpha }(D_i)^{p}\), \(\tau \) satisfies \(s_i\), then we know that \(\omega \) satisfies \(D_i\) and thus \(\tau \) must satisfy \(\mathsf {touched}_{\alpha }(D_i) \vee \mathsf {untouched}_{\alpha }(D_i)^{p}\).

It remains to show that \(\tau \) satisfies the clause \(\overline{L^{n}} \vee s_i\) for every \(D_i \in F\) and every set \(L \subseteq \mathsf {untouched}_{\alpha }(D_i)\) such that \(F|_{\alpha } \nvdash _{1} \mathsf {untouched}_{\alpha }(D_i) \setminus L\). Assume to the contrary that, for such a clause, \(\tau (s_i) = 0\) and \(\tau \) falsifies all literals in \(\overline{L^{n}}\). Then, \(\omega \) does not satisfy \(D_i\) and it falsifies all literals in *L*. But, from \( var (\alpha ) \subseteq var (\omega )\) we know that \(D_i|_{\omega } \subseteq \mathsf {untouched}_{\alpha }(D_i) \setminus L\) and thus it follows that \(F|_{\alpha } \vdash _{1} D_i|_{\omega }\) implies \(F|_{\alpha } \vdash _{1} \mathsf {untouched}_{\alpha }(D_i) \setminus L\). Hence, since \(F|_{\alpha } \vdash _{1} F|_{\omega }\) and \(D_i|_{\omega } \in F|_{\omega }\), we conclude that \(F|_{\alpha } \vdash _{1} \mathsf {untouched}_{\alpha }(D_i) \setminus L\), a contradiction.

For the “if” direction, assume that there exists a satisfying assignment \(\tau \) of \(\mathsf {pr}_{\alpha }(F)\) and consider the assignment \(\omega \) that corresponds to \(\tau \) as explained before Lemma 1. Since \(C \in \mathsf {pr}_{\alpha }(F)\) and *C* contains only variables assigned by \(\alpha \), \(\omega \) must satisfy *C*. It remains to show that \(F|_{\alpha } \vdash _{1} F|_{\omega }\). Let \(D_i|_{\omega } \in F|_{\omega }\). Then, \(\omega \) does not satisfy \(D_i\) and so \(\mathsf {touched}_{\alpha }(D_i) \vee \mathsf {untouched}_{\alpha }(D_i)^{p}\) is falsified by \(\tau \), implying that \(\tau \) must falsify \(s_i\). As \( var (\alpha ) \subseteq var (\omega )\), we know that \(D_i|_{\omega }\) contains no literals of \(\mathsf {touched}_{\alpha }(D_i)\), meaning that \(D_i|_{\omega }\) is of the form \(\mathsf {untouched}_{\alpha }(D_i) \setminus L\) for some set \(L \subseteq \mathsf {untouched}_{\alpha }(D_i)\) such that \(\omega \) falsifies *L*. But then the clause \(\overline{L^{n}} \vee s_i\) cannot be contained in \(\mathsf {pr}_{\alpha }(F)\) since it would be falsified by \(\tau \). We thus conclude that \(F|_{\alpha } \vdash _{1} \mathsf {untouched}_{\alpha }(D_i) \setminus L = D_i|_{\omega }\), and so \(F|_{\alpha } \vdash _{1} F|_{\omega }\). \(\square \)

Note that the last set of clauses of the \(\mathsf {PR}\) reduct in principle contains exponentially many clauses with respect to the length of the longest original clause. We leave it to future work to determine whether non-exponential encodings exist. But even if a polynomial encoding can be found, we doubt its usefulness in practice.

## 6 Implementation

We implemented a clean-slate SDCL solver, called SaDiCaL, that can learn \(\mathsf {PR}\) clauses using either the positive reduct or the filtered positive reduct. It consists of around 3K lines of C and is based on an efficient CDCL engine using state-of-the-art algorithms, data structures, and heuristics, including a variable-move-to-front decision heuristic [4], a sophisticated restart policy [5], and aggressive clause-database reduction [2]. Our implementation provides a simple but efficient framework to evaluate new SDCL-inspired ideas and heuristics.

The implementation closely follows the pseudo-code shown in Fig. 1 and computes the pruning predicate before every decision. This is costly in general, but allows the solver to detect \(\mathsf {PR}\) clauses as early as possible. Our goal is to determine whether short \(\mathsf {PR}\) proofs can be found automatically. The solver produces \(\mathsf {PR}\) proofs and we verified all the presented results using proof checkers. The source code of SaDiCaL is available at http://fmv.jku.at/sadical.

Two aspects of SDCL are crucial: the pruning predicate and the decision heuristics. For the pruning predicate we ran experiments with both the positive reduct and the filtered positive reduct. The initially proposed decision heuristics for SDCL [20] are as follows: Pick the variable that occurs most frequently in short clauses. Also, apart from the root-node branch, assign only literals that occur in clauses that are touched but not satisfied by the current assignment.

We added another restriction: whenever a (filtered) positive reduct is satisfiable, make all literals in the witness (i.e., the satisfying assignment of the pruning predicate) that disagree with the current assignment more important than any other literal in the formula. This restriction is removed when the solver backtracks to the root node (i.e., when a unit clause is learned) and added again when a new \(\mathsf {PR}\) clause is found. The motivation of this restriction is as follows: we observed that literals in the witness that disagree with the current assignment typically occur in short \(\mathsf {PR}\) clauses; making them more important than other literals increases the likelihood of learning short \(\mathsf {PR}\) clauses.

## 7 Evaluation

The first benchmarks are the Tseitin formulas [32], which ask for a set of edges such that every vertex of a given graph is incident to an odd number of selected edges; the answer is *no*, as the sum of all degrees is always even. The formula is therefore unsatisfiable by construction. Tseitin formulas defined over expander graphs require resolution proofs of exponential size [33] and also appear hard for SDCL when using the ordinary positive reduct as pruning predicate. We compare three settings, all with proof logging:

1. plain CDCL,
2. SDCL with the positive reduct \(\mathsf {p}_{\alpha }(F)\), and
3. SDCL with the filtered positive reduct \(\mathsf {f}_{\alpha }(F)\).

Additionally, we include the winner of the 2018 SAT Competition, the CDCL-based solver MapleLCMDistChronoBT (MLBT for short) [29]. The results are shown in Table 1.^{1} The last column shows the proof-validation times of the formally verified checker in ACL2. To verify the proofs for all our experiments, we did the following: We started with the \(\mathsf {PR}\) proofs produced by our SDCL solver using the filtered positive reduct. We then translated them into \(\mathsf {DRAT}\) proofs using the pr2drat tool [17]. Finally, we used the drat-trim checker to optimize the proofs (i.e., to remove redundant proof parts) and to convert them into the \(\mathsf {LRAT}\) format, which is the format supported by the formally verified proof checker.

Only our solver with the filtered positive reduct is able to efficiently prove unsatisfiability of all these instances. Notice that with the ordinary positive reduct, our solver cannot solve any of the formulas; there may actually be a theoretical barrier here. The LSDCL solver also uses the positive reduct, but only for assignments with at most two decision literals. As a consequence, the overhead of the positive reduct is small. In the future we plan to develop meaningful limits for SaDiCaL as well.

Runtime Comparison (in Seconds) on the Tseitin Formulas.

| | MLBT [29] | LSDCL [20] | plain | \(\mathsf {p}_{\alpha }(F)\) | \(\mathsf {f}_{\alpha }(F)\) | ACL2 |
|---|---|---|---|---|---|---|
| Urquhart-s3-b1 | 2.95 | 5.86 | 16.31 | \(> 3600\) | \(\mathbf{0.02}\) | 0.09 |
| Urquhart-s3-b2 | 1.36 | 2.4 | 2.82 | \(> 3600\) | \(\mathbf{0.03}\) | 0.13 |
| Urquhart-s3-b3 | 2.28 | 19.94 | 2.08 | \(> 3600\) | \(\mathbf{0.03}\) | 0.16 |
| Urquhart-s3-b4 | 10.74 | 32.42 | 7.65 | \(> 3600\) | \(\mathbf{0.03}\) | 0.17 |
| Urquhart-s4-b1 | 86.11 | 583.96 | \(> 3600\) | \(> 3600\) | \(\mathbf{0.32}\) | 2.37 |
| Urquhart-s4-b2 | 154.35 | 1824.95 | 183.77 | \(> 3600\) | \(\mathbf{0.11}\) | 0.78 |
| Urquhart-s4-b3 | 258.46 | \(> 3600\) | 129.27 | \(> 3600\) | \(\mathbf{0.16}\) | 1.12 |
| Urquhart-s4-b4 | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(\mathbf{0.14}\) | 1.17 |
| Urquhart-s5-b1 | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(\mathbf{1.27}\) | 9.86 |
| Urquhart-s5-b2 | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(\mathbf{0.58}\) | 4.38 |
| Urquhart-s5-b3 | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(\mathbf{1.67}\) | 17.99 |
| Urquhart-s5-b4 | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(\mathbf{2.91}\) | 24.24 |

Runtime Comparison (in Seconds) on the Pigeon-Hole Formulas.

| | MLBT [29] | LSDCL [20] | plain | \(\mathsf {p}_{\alpha }(F)\) | \(\mathsf {f}_{\alpha }(F)\) | ACL2 |
|---|---|---|---|---|---|---|
| hole20 | \(> 3600\) | 1.13 | \(> 3600\) | \(\mathbf{0.22}\) | 0.55 | 6.78 |
| hole30 | \(> 3600\) | 8.81 | \(> 3600\) | \(\mathbf{1.71}\) | 4.30 | 87.58 |
| hole40 | \(> 3600\) | 43.10 | \(> 3600\) | \(\mathbf{7.94}\) | 20.38 | 611.24 |
| hole50 | \(> 3600\) | 149.67 | \(> 3600\) | \(\mathbf{25.60}\) | 68.46 | 2792.39 |

We also ran experiments with the pigeon-hole formulas. Although these formulas are hard for resolution, they can be solved efficiently with SDCL using the positive reduct [20]. Table 2 shows a runtime comparison, again including \(\mathsf {PR}\) proof logging, for pigeon-hole formulas of various sizes. Notice that the computational costs of the solver with the filtered positive reduct are about three to four times those of the solver with the positive reduct. This is caused by the overhead of computing the filtering. The sizes of the \(\mathsf {PR}\) proofs produced by both versions are similar. Our solver with the positive reduct is about four times as fast as LSDCL, the SDCL version of Lingeling [20] that supports only the positive reduct. As the heuristics and proof sizes of our solver and LSDCL are similar, the better performance is due to our dedicated SDCL implementation.

Finally, we performed experiments with the recently released 2018 SAT Competition benchmarks. We expected slow performance on most benchmarks due to the high overhead of solving pruning predicates before making decisions. However, our solver outperformed the participating solvers on mutilated chessboard problems [27] which were contributed by Alexey Porkhunov (see Table 3).

Runtime Comparison (in Seconds) on the Mutilated Chessboard Formulas.

| | MLBT [29] | LSDCL [20] | plain | \(\mathsf {p}_{\alpha }(F)\) | \(\mathsf {f}_{\alpha }(F)\) | ACL2 |
|---|---|---|---|---|---|---|
| mchess_15 | 51.53 | 1473.11 | 2480.67 | \(> 3600\) | | 29.12 |
| mchess_16 | 380.45 | \(> 3600\) | 2115.75 | \(> 3600\) | | 36.86 |
| mchess_17 | 2418.35 | \(> 3600\) | \(> 3600\) | \(> 3600\) | | 57.83 |
| mchess_18 | \(> 3600\) | \(> 3600\) | \(> 3600\) | \(> 3600\) | | 100.71 |

## 8 Conclusion

We introduced two new SAT encodings for pruning the search space in satisfaction-driven clause learning (SDCL). The first encoding, called the filtered positive reduct, is easily solvable and prunes the search space more aggressively than the positive reduct (which was used when SDCL was initially introduced). The second encoding, called the \(\mathsf {PR}\) reduct, might not be useful in practice though it precisely characterizes propagation redundancy.

Based on the filtered positive reduct, we implemented an SDCL solver and our experiments show that the solver can efficiently prove the unsatisfiability of the Tseitin formulas, the pigeon-hole formulas, and the mutilated chessboard problems. For all these formulas, CDCL solvers require exponential time due to theoretical restrictions. Moreover, to the best of our knowledge, our solver is the first to generate machine-checkable proofs of unsatisfiability of these formulas. We certified our results using a formally verified proof checker.

Although our SDCL solver can already produce proofs of formulas that are too hard for CDCL solvers, it is still outperformed by CDCL solvers on many simpler formulas. This seems to suggest that also in SAT solving, there is no free lunch. Nevertheless, we believe that the performance of SDCL on simple formulas can be improved by tuning the solver more carefully, e.g., by only learning propagation-redundant clauses when this is really beneficial, or by coming up with a dedicated decision heuristic. To deal with these problems, we are currently investigating an approach based on reinforcement learning.

Considering our results, we believe that SDCL is a promising SAT-solving paradigm for formulas that are too hard for ordinary CDCL solvers. Finally, proofs of challenging problems can be enormous, such as the 2-petabyte proof of Schur Number Five [16]; SDCL improvements have the potential to produce proofs that are substantially smaller and faster to verify.

## Footnotes

- 1.
Log files, benchmarks and source code are available at http://fmv.jku.at/sadical.

## References

- 1. Alekhnovich, M.: Mutilated chessboard problem is exponentially hard for resolution. Theoret. Comput. Sci. 310(1–3), 513–525 (2004)
- 2. Audemard, G., Simon, L.: Predicting learnt clauses quality in modern SAT solvers. In: Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), pp. 399–404 (2009)
- 3. Biere, A.: Splatz, Lingeling, Plingeling, Treengeling, YalSAT entering the SAT competition 2016. In: Proceedings of SAT Competition 2016 - Solver and Benchmark Descriptions. Department of Computer Science Series of Publications B, vol. B-2016-1, pp. 44–45. University of Helsinki (2016)
- 4. Biere, A., Fröhlich, A.: Evaluating CDCL variable scoring schemes. In: Heule, M.J.H., Weaver, S. (eds.) SAT 2015. LNCS, vol. 9340, pp. 405–422. Springer, Cham (2015)
- 5. Biere, A., Fröhlich, A.: Evaluating CDCL restart schemes. In: Proceedings of the 6th Pragmatics of SAT Workshop (PoS 2015). EPiC Series in Computing, vol. 59, pp. 1–17 (2019)
- 6. Biere, A., Heule, M.J.H.: The effect of scrambling CNFs. In: Proceedings of the 9th Pragmatics of SAT Workshop (PoS 2018) (2018, to be published)
- 7. Biere, A., Heule, M.J.H., van Maaren, H., Walsh, T. (eds.): Handbook of Satisfiability. IOS Press, Amsterdam (2009)
- 8. Bonet, M.L., Buss, S., Ignatiev, A., Marques-Silva, J., Morgado, A.: MaxSAT resolution with the dual rail encoding. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI 2018). AAAI Press (2018)
- 9. Brickenstein, M., Dreyer, A.: PolyBoRi: a framework for Gröbner-basis computations with Boolean polynomials. J. Symbolic Comput. 44(9), 1326–1345 (2009)
- 10. Bryant, R.E., Beatty, D., Brace, K., Cho, K., Sheffler, T.: COSMOS: a compiled simulator for MOS circuits. In: Proceedings of the 24th ACM/IEEE Design Automation Conference (DAC 1987), pp. 9–16. ACM (1987)
- 11. Chatalic, P., Simon, L.: Multi-resolution on compressed sets of clauses. In: Proceedings of the 12th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2000), pp. 2–10 (2000)
- 12. Cruz-Filipe, L., Marques-Silva, J., Schneider-Kamp, P.: Efficient certified resolution proof checking. In: Legay, A., Margaria, T. (eds.) TACAS 2017. LNCS, vol. 10205, pp. 118–135. Springer, Heidelberg (2017)
- 13. Dantchev, S.S., Riis, S.: “Planar” tautologies hard for resolution. In: Proceedings of the 42nd Annual Symposium on Foundations of Computer Science (FOCS 2001), pp. 220–229. IEEE Computer Society (2001)
- 14. Devriendt, J., Bogaerts, B., Bruynooghe, M., Denecker, M.: Improved static symmetry breaking for SAT. In: Creignou, N., Le Berre, D. (eds.) SAT 2016. LNCS, vol. 9710, pp. 104–122. Springer, Cham (2016)
- 15. Haken, A.: The intractability of resolution. Theoret. Comput. Sci. 39, 297–308 (1985)
- 16. Heule, M.J.H.: Schur number five. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI 2018). AAAI Press (2018)
- 17. Heule, M.J.H., Biere, A.: What a difference a variable makes. In: Beyer, D., Huisman, M. (eds.) TACAS 2018. LNCS, vol. 10806, pp. 75–92. Springer, Cham (2018)
- 18. Heule, M.J.H., Hunt Jr., W.A., Kaufmann, M., Wetzler, N.: Efficient, verified checking of propositional proofs. In: Ayala-Rincón, M., Muñoz, C.A. (eds.) ITP 2017. LNCS, vol. 10499, pp. 269–284. Springer, Cham (2017)
- 19. Heule, M.J.H., Kiesl, B., Biere, A.: Short proofs without new variables. In: de Moura, L. (ed.) CADE 2017. LNCS (LNAI), vol. 10395, pp. 130–147. Springer, Cham (2017)
- 20. Heule, M.J.H., Kiesl, B., Seidl, M., Biere, A.: PRuning through satisfaction. In: Strichman, O., Tzoref-Brill, R. (eds.) HVC 2017. LNCS, vol. 10629, pp. 179–194. Springer, Cham (2017)
- 21. Heule, M.J.H., van Maaren, H.: Aligning CNF- and equivalence-reasoning. In: Hoos, H.H., Mitchell, D.G. (eds.) SAT 2004. LNCS, vol. 3542, pp. 145–156. Springer, Heidelberg (2005)
- 22. Järvisalo, M., Heule, M.J.H., Biere, A.: Inprocessing rules. In: Gramlich, B., Miller, D., Sattler, U. (eds.) IJCAR 2012. LNCS (LNAI), vol. 7364, pp. 355–370. Springer, Heidelberg (2012)
- 23. Kiesl, B., Seidl, M., Tompits, H., Biere, A.: Super-blocked clauses. In: Olivetti, N., Tiwari, A. (eds.) IJCAR 2016. LNCS (LNAI), vol. 9706, pp. 45–61. Springer, Cham (2016)
- 24. Kullmann, O.: On a generalization of extended resolution. Discrete Appl. Math. 96–97, 149–176 (1999)
- 25. Marques-Silva, J., Lynce, I., Malik, S.: Conflict-driven clause learning SAT solvers. In: Biere, A., Heule, M.J.H., van Maaren, H., Walsh, T. (eds.) Handbook of Satisfiability, pp. 131–153. IOS Press, Amsterdam (2009)
- 26. Marques Silva, J.P., Sakallah, K.A.: GRASP: a search algorithm for propositional satisfiability. IEEE Trans. Comput. 48(5), 506–521 (1999)
- 27. McCarthy, J.: A tough nut for proof procedures. Memo 16, Stanford Artificial Intelligence Project, July 1964
- 28. Moskewicz, M.W., Madigan, C.F., Zhao, Y., Zhang, L., Malik, S.: Chaff: engineering an efficient SAT solver. In: Proceedings of the 38th Design Automation Conference (DAC 2001), pp. 530–535. ACM (2001)
- 29. Nadel, A., Ryvchin, V.: Chronological backtracking. In: Beyersdorff, O., Wintersteiger, C.M. (eds.) SAT 2018. LNCS, vol. 10929, pp. 111–121. Springer, Cham (2018)
- 30. Palopoli, L., Pirri, F., Pizzuti, C.: Algorithms for selective enumeration of prime implicants. Artif. Intell. 111(1), 41–72 (1999)
- 31. Soos, M., Nohl, K., Castelluccia, C.: Extending SAT solvers to cryptographic problems. In: Kullmann, O. (ed.) SAT 2009. LNCS, vol. 5584, pp. 244–257. Springer, Heidelberg (2009)
- 32. Tseitin, G.S.: On the complexity of derivation in propositional calculus. In: Siekmann, J.H., Wrightson, G. (eds.) Automation of Reasoning 2: Classical Papers on Computational Logic 1967–1970, pp. 466–483. Springer, Heidelberg (1983)
- 33. Urquhart, A.: Hard examples for resolution. J. ACM 34(1), 209–219 (1987)

## Copyright information

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.