Minimal Distance of Propositional Models

We investigate the complexity of three optimization problems in Boolean propositional logic related to information theory: Given a conjunctive formula over a set of relations, find a satisfying assignment with minimal Hamming distance to a given assignment that satisfies the formula (NearestOtherSolution, NOSol) or that does not need to satisfy it (NearestSolution, NSol). The third problem asks for two satisfying assignments with a minimal Hamming distance among all such assignments (MinSolutionDistance, MSD). For all three problems we give complete classifications with respect to the relations admitted in the formula. We give polynomial time algorithms for several classes of constraint languages. For all other cases we prove hardness or completeness regarding APX, poly-APX, or equivalence to well-known hard optimization problems.


Introduction
We investigate the solution spaces of Boolean constraint satisfaction problems built from atomic constraints by means of conjunction and variable identification. We study three minimization problems in connection with Hamming distance: Given an instance of a constraint satisfaction problem in the form of a generalized conjunctive formula over a set of atomic constraints, the first problem asks to find a satisfying assignment with minimal Hamming distance to a given assignment (NearestSolution, NSol).
Note that for this problem we assume neither that the given assignment satisfies the formula nor that the solution is different from the assignment. The second problem is similar to the first one, but this time the given assignment has to satisfy the formula and we look for another solution with minimal Hamming distance (NearestOtherSolution, NOSol). The third problem is to find two satisfying assignments with minimal Hamming distance among all satisfying assignments (MinSolutionDistance, MSD). Note that the dual problem MaxHammingDistance has been studied in [14].
The NSol problem appears in several guises throughout the literature. E.g., a common problem in Artificial Intelligence is to find solutions of constraints close to an initial configuration; our problem is an abstraction of this setting for the Boolean domain. Bailleux and Marquis [4] describe such applications in detail and introduce the decision problem DistanceSAT: Given a propositional formula ϕ, a partial interpretation I, and a bound k, is there a satisfying assignment differing from I in no more than k variables? It is straightforward to show that DistanceSAT corresponds to the decision variant of our problem with existential quantification (called NSol d pp later on). While [4] investigates the complexity of DistanceSAT for a few relevant classes of formulas and empirically evaluates two algorithms, we analyze the decision and the optimization problem for arbitrary semantic restrictions on the formulas.
Hamming distance also plays an important role in belief revision. The result of revising/updating a formula ϕ by another formula ψ is characterized by the set of models of ψ that are closest to the models of ϕ. Dalal [15] selects the models of ψ having a minimal Hamming distance to models of ϕ to be the models that result from the change.
As is common, we analyze the complexity of our optimization problems modulo a parameter that specifies the atomic constraints allowed to occur in the constraint satisfaction problem. We give a complete classification of the approximation complexity with respect to this parameterization. It turns out that our problems can either be solved in polynomial time, or they are complete for a well-known optimization class, or else they are equivalent to well-known hard optimization problems.
Our study can be understood as a continuation of the minimization problems investigated by Khanna et al. in [22], especially that of MinOnes. The MinOnes optimization problem asks for a solution of a constraint satisfaction problem with minimal Hamming weight, i.e., minimal Hamming distance to the all-zero vector. Our work generalizes these results by allowing the given vector to be different from zero.
Our work can also be seen as a generalization of questions in coding theory. In fact, our problem MSD restricted to affine relations is the well-known problem MinDistance of computing the minimum distance of a linear code. This quantity is of central importance in coding theory, because it determines the number of errors that the code can detect and correct. Moreover, our problem NSol restricted to affine relations is the problem NearestCodeword of finding the nearest codeword to a given word, which is the basic operation when decoding messages received through a noisy channel. Thus our work can be seen as a generalization of these well-known problems from affine to general relations.
In the case of NearestSolution we are able to apply methods from clone theory, even though the problem turns out to be more intricate than pure satisfiability. The other two problems, however, cannot easily be shown to be compatible with existential quantification, which makes classical clone theory inapplicable. Therefore we have to resort to weak co-clones that require only closure under conjunction and equality. In this connection, we apply the theory developed in [28,29] as well as the minimal weak bases of Boolean co-clones from [23]. This paper is structured as follows. Section 2 recalls basic definitions and notions. Section 3 introduces the trilogy of optimization problems studied in this paper, namely NearestSolution (denoted by NSol), NearestOtherSolution (denoted by NOSol), and MinSolutionDistance (denoted by MSD), as well as their decision versions. It also states our three main results, i.e., a complete classification of complexity for these optimization problems, depicted in Figures 1, 2, and 3. Section 4 investigates the (non-)applicability of clone theory to our problems. It also provides a duality result for the constraint languages used as parameters. Section 5 contains the proofs of the complexity classification results for NearestSolution, Section 6 for NearestOtherSolution, and Section 7 for MinSolutionDistance. Finally, the concluding remarks in Section 8 compare our theorems to previously existing similar results and put our results into perspective.

Boolean Relations and Relational Clones
An n-ary Boolean relation R is a subset of {0, 1} n ; its elements (b 1 , . . . , b n ) are also written as b 1 · · · b n . Let V be a set of variables. An atomic constraint, or an atom, is an expression R(x), where R is an n-ary relation and x is an n-tuple of variables from V . Let Γ be a non-empty finite set of Boolean relations, also called a constraint language. A (conjunctive) Γ-formula is a finite conjunction of atoms R 1 (x 1 ) ∧ · · · ∧ R k (x k ), where the R i are relations from Γ and the x i are variable tuples of suitable arity. For technical reasons in connection with reductions we also allow empty conjunctions (k = 0) here. Such formulas elegantly take care of certain marginal cases at the cost of adding only one additional trivial problem instance. An assignment is a mapping m : V → {0, 1} assigning a Boolean value m(x) to each variable x ∈ V . In a given context we can assume V to be finite, by restricting it e.g. to the variables occurring in a formula. If we impose an arbitrary but fixed order on the variables, say x 1 , . . . , x n , then the assignments can be identified with elements from {0, 1} n . An assignment m satisfies a constraint R(x 1 , . . . , x n ) if (m(x 1 ), . . . , m(x n )) ∈ R holds. It satisfies the formula ϕ if it satisfies all its atoms; m is said to be a model or solution of ϕ in this case. We use [ϕ] to denote the set of models of ϕ. For a term t, [t] is the set of assignments for which t evaluates to 1. Note that [ϕ] and [t] represent Boolean relations. If the variables of ϕ are not explicitly enumerated in parentheses as parameters, they are implicitly considered to be ordered lexicographically. In sets of relations represented this way we usually omit the brackets. A literal is a variable v, or its negation ¬v. Assignments are extended to literals by defining m(¬v) = 1 − m(v).
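To make these definitions concrete, the following self-contained sketch (our own encoding, not taken from the paper) represents a relation as a set of 0/1-tuples, an atom as a pair of a relation and a variable tuple, and a Γ-formula as a list of atoms; a model check is then just a membership test for every atom.

```python
from itertools import product

def satisfies(m, formula):
    """Check whether the assignment m (dict var -> 0/1) is a model of the formula."""
    return all(tuple(m[v] for v in vars_) in rel for rel, vars_ in formula)

def models(formula, variables):
    """Enumerate [phi]: all models of the formula over the given variable order."""
    for bits in product((0, 1), repeat=len(variables)):
        m = dict(zip(variables, bits))
        if satisfies(m, formula):
            yield m

# Example: x_or is the binary OR relation; phi encodes (x OR y) AND (y OR z).
x_or = {(0, 1), (1, 0), (1, 1)}
phi = [(x_or, ("x", "y")), (x_or, ("y", "z"))]
print(sum(1 for _ in models(phi, ["x", "y", "z"])))  # → 5
```

The enumeration is exponential in the number of variables, matching the fact that the general problems studied here are hard; it serves only to fix the semantics.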
Table 1 defines Boolean functions and relations needed later on, in particular exclusive or [x ⊕ y], not-all-equal nae 3 , k-ary disjunction or k , and k-ary negated conjunction nand k .
Throughout the text we refer to different types of Boolean constraint relations following Schaefer's terminology [27] (see also the monograph [11] and the survey [9]). A Boolean relation R is (1) 1-valid if 1 · · · 1 ∈ R and 0-valid if 0 · · · 0 ∈ R, (2) Horn (dual Horn) if R can be represented by a formula in conjunctive normal form (CNF) with at most one unnegated (negated) variable per clause, (3) monotone if it is both Horn and dual Horn, (4) bijunctive if it can be represented by a CNF formula with at most two literals per clause, (5) affine if it can be represented by an affine system of equations Ax = b over Z 2 , (6) complementive if for each m ∈ R also m ∈ R, (7) implicative hitting set-bounded+ with bound k (denoted by k -IHS-B + ) if R can be represented by a CNF formula with clauses of the form (x 1 ∨ · · · ∨ x k ), (¬x ∨ y), x, and ¬x, (8) implicative hitting set-bounded− with bound k (denoted by k -IHS-B − ) if R can be represented by a CNF formula with clauses of the form (¬x 1 ∨ · · · ∨ ¬x k ), (¬x ∨ y), x, and ¬x. A set Γ of Boolean relations is called 0-valid (1-valid, Horn, dual Horn, monotone, affine, bijunctive, complementive, k -IHS-B + , k -IHS-B − ) if every relation in Γ is 0-valid (1-valid, Horn, dual Horn, monotone, affine, bijunctive, complementive, k -IHS-B + , k -IHS-B − ).
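Several of these syntactic classes have standard semantic characterizations via closure properties (see e.g. Schaefer [27]): a Boolean relation is Horn iff it is closed under coordinatewise conjunction, dual Horn iff closed under coordinatewise disjunction, and affine iff closed under ternary exclusive or. The sketch below (our own code, for illustration) tests these closures by brute force.

```python
from itertools import product

def closed(R, f, arity):
    """Check that applying f coordinatewise to any tuples of R stays in R."""
    return all(tuple(f(*col) for col in zip(*args)) in R
               for args in product(R, repeat=arity))

def is_horn(R):      return closed(R, lambda a, b: a & b, 2)
def is_dual_horn(R): return closed(R, lambda a, b: a | b, 2)
def is_affine(R):    return closed(R, lambda a, b, c: a ^ b ^ c, 3)

x_or = {(0, 1), (1, 0), (1, 1)}   # the binary OR relation
print(is_horn(x_or), is_dual_horn(x_or))  # → False True
```

For instance, OR is dual Horn but not Horn, since (0, 1) ∧ (1, 0) = (0, 0) leaves the relation.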
A formula constructed from atoms by conjunction, variable identification, and existential quantification is called a primitive positive formula (pp-formula). If ϕ is such a formula, we write again [ϕ] for its set of models, i.e., the Boolean relation defined by ϕ. As above the coordinates of this relation are understood to be the variables of ϕ in lexicographic order, unless otherwise stated by explicit enumeration. We denote by ⟨Γ⟩ the set of all relations that can be expressed using relations from Γ ∪ {≈}, conjunction, variable identification (and permutation), cylindrification, and existential quantification, i.e., the set of all relations that are primitive positively definable from Γ and equality. The set ⟨Γ⟩ is called the co-clone generated by Γ. A base of a co-clone B is a set of relations Γ such that ⟨Γ⟩ = B, i.e., just a generating set with regard to primitive positive definability including equality. Note that traditionally (e.g. [18]), the notion of base also involves minimality with respect to set inclusion. Our use of the term base is in accordance with [10], where finite bases for all Boolean co-clones have been determined. Some of these are listed in Table 2. The sets of relations being 0-valid, 1-valid, complementive, Horn, dual Horn, affine, bijunctive, 2-affine (both bijunctive and affine), monotone, k -IHS-B + , and k -IHS-B − each form a co-clone denoted by iI 0 , iI 1 , iN 2 , iE 2 , iV 2 , iL 2 , iD 2 , iD 1 , iM 2 , iS k 00 , and iS k 10 , respectively; see Table 3. We will also use a weaker closure than ⟨Γ⟩, called conjunctive closure and denoted by ⟨Γ⟩ ∧ , where the constraint language Γ is closed under conjunctive definitions, but not under existential quantification or addition of explicit equality constraints. Sets of relations of the form W = ⟨W ∪ {≈}⟩ ∧ are called weak systems and are in a one-to-one correspondence with so-called strong partial clones [26].
It is a well-known consequence of the Galois theory developed in [26] that for every co-clone ⟨Γ′⟩ whose corresponding clone is finitely generated (this presents no restriction in the Boolean case), there is a largest partial clone whose total part coincides with that clone, cf. [24, Theorem 20.7.2] or see [28, Theorems 4.6, 4.7, 4.11] for a proof in the Boolean case. This largest partial clone is even a strong partial clone, and hence there is a least weak system W under inclusion such that ⟨W⟩ = ⟨Γ′⟩. Any finite weak generating set Γ of this weak system W , i.e., W = ⟨Γ ∪ {≈}⟩ ∧ , is called a weak base of ⟨Γ′⟩, see [28, Definition 4.2]. Such a set Γ, in particular, is a finite base of the co-clone ⟨Γ′⟩. Finally, to get from the closure operator ⟨Γ ∪ {≈}⟩ ∧ (which is hard to handle in the context of our problems) to ⟨Γ⟩ ∧ (which is easy to handle), one needs the notion of irredundancy. A relation R is called irredundant if it has neither duplicate nor fictitious coordinates. It can be observed from the proofs of Proposition 5.2 and Corollary 5.6 in [28] or from [29, Proposition 3.11] that for irredundant weak bases Γ the two closures coincide, i.e., ⟨Γ ∪ {≈}⟩ ∧ = ⟨Γ⟩ ∧ (Theorem 1). According to Lagerkvist [23], a minimal weak base is an irredundant weak base satisfying an additional minimality property that ensures small cardinality. The utility of Theorem 1 comes in particular from the fact that Lagerkvist determined minimal weak bases for all finitely generated Boolean co-clones in [23]. For our purposes we note that each of the co-clones iV, iV 0 , iV 1 , iV 2 , iN, iN 2 , and iI is generated by a minimal weak base consisting of a single relation (Table 4).
Another source of weak base relations without duplicate coordinates comes from the following construction: let χ n be the 2 n -ary relation that is given by the value tables (in some chosen enumeration) of the n distinct n-ary projection functions. More formally, let β : 2 n → {0, 1} n be the reader's preferred bijection between the index set 2 n = {0, . . . , 2 n − 1} and the set of all arguments of an n-ary Boolean function; often lexicographic enumeration is chosen here for presentational purposes, but the order of enumeration of the n-tuples does not matter as long as it remains fixed. Then χ n = {(e n i (β(0)), . . . , e n i (β(2 n − 1))) | 1 ≤ i ≤ n}, where e n i : {0, 1} n → {0, 1} denotes the projection function onto the i-th coordinate. Let C be a clone with corresponding co-clone iC. Since iC is closed with respect to intersection of relations of identical arity, for any k-ary relation R, there is a least k-ary relation in iC containing R, scilicet C • R := ⋂{R ′ ∈ iC | R ′ ⊇ R, R ′ k-ary}. Traditionally, e.g. [24, Sect. 2.8, p. 134] or [25, Definition 1.1.16, p. 48], this relation is denoted by Γ C (R), but here we have chosen a different notation to avoid confusion with constraint languages. It is well known, e.g. [25, Satz 1.1.19(i), p. 50], and easy to see that C • R is completely determined by the ℓ-ary part of C whenever ℓ ≥ |R|: given any enumeration of ∅ ≠ R = {r 1 , . . . , r ℓ } (for technical reasons we have to exclude the case ℓ = 0 in this presentation because we do not consider clones with nullary operations here) we have C • R = {f •(r 1 , . . . , r ℓ ) | f ∈ C, f ℓ-ary}, where f •(r 1 , . . . , r ℓ ) denotes the row-wise application of f to a matrix whose columns are formed by the tuples r 1 , . . . , r ℓ . Relations of the form C • χ n represent the n-ary part of the clone C as a 2 n -ary relation and are called the n-th graphic of C (cf. e.g. [24, p. 133 and Theorem 2.8.1(b)]).
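Assuming the lexicographic choice for β, the construction of χ n and of the graphic C • χ n for a finite set of n-ary functions can be sketched directly (the function names below are ours):

```python
from itertools import product

def chi(n):
    """chi_n: the value tables of the n projection functions as 2^n-tuples."""
    args = list(product((0, 1), repeat=n))  # beta: index j -> j-th tuple, lexicographic
    return {tuple(a[i] for a in args) for i in range(n)}

def closure_graphic(funcs, n):
    """C . chi_n for a set of n-ary functions: the set of their value tables."""
    args = list(product((0, 1), repeat=n))
    return {tuple(f(*a) for a in args) for f in funcs}

print(sorted(chi(2)))  # → [(0, 0, 1, 1), (0, 1, 0, 1)]
```

For example, applying closure_graphic to the binary conjunction yields its 4-ary value table (0, 0, 0, 1), in accordance with the row-wise application f •(r 1 , . . . , r n ) described above.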
Indeed, the previous characterization of C • χ n yields C • χ n = {(f (β(0)), . . . , f (β(2 n − 1))) | f ∈ C, f n-ary}. With the help of this description of C • χ n and standard clone-theoretic manipulations, one can easily verify the following result, identifying possible candidates for irredundant singleton weak bases.

Approximability, Reductions, and Completeness
We assume that the reader has a basic knowledge of approximation algorithms and complexity theory, and only recall the most important notions; for details see the monographs [3,11].
A combinatorial optimization problem P is a quadruple (I, sol, obj, goal), where:
• I is the set of admissible instances of P,
• sol(x) denotes the set of feasible solutions for every instance x ∈ I,
• obj(x, y) denotes the non-negative integer measure of y for every instance x ∈ I and every feasible solution y ∈ sol(x); obj is also called the objective function, and
• goal ∈ {min, max} denotes the optimization goal for P.
A combinatorial optimization problem is said to be an NP-optimization problem (NPO problem) if
• the instances and solutions are recognizable in polynomial time,
• the size of the solutions in sol(x) is polynomially bounded in the size of x, and
• the objective function obj is computable in polynomial time.
The optimal value of the objective function for the solutions of an instance x is denoted by OPT(x). In our case the optimization goal will always be minimization, i.e., OPT(x) will be the minimum.
Given an instance x ∈ I with a feasible solution y ∈ sol(x) and a real number r ≥ 1, we say that y is r-approximate if obj(x, y) ≤ r OPT(x) holds and our goal is minimization, or obj(x, y) ≥ OPT(x)/r and we consider a maximization problem.
Let A be an algorithm that for any instance x of P with sol(x) ≠ ∅ returns a feasible solution A(x) ∈ sol(x). Given an arbitrary function r : N → [1, ∞), we say that A is an r(n)-approximate algorithm for P if for any instance x ∈ I having feasible solutions the algorithm returns an r(|x|)-approximate solution, where |x| is the size of x. If an NPO problem P admits an r(n)-approximate polynomial-time algorithm, we say that P is approximable within r(n).
An NPO problem P is in the class PO if the optimum is computable in polynomial time (i.e. if P admits a 1-approximate polynomial-time algorithm). P is in the class APX (poly-APX) if it is approximable within a constant (polynomial) function in the size of the instance x. NPO is the class of all NPO problems and NPOPB is the class of all NPO problems where the objective function is polynomially bounded. The following inclusions hold for these approximation complexity classes: PO ⊆ APX ⊆ poly-APX ⊆ NPO. All inclusions are strict unless P = NP.
For reductions among decision problems we use the polynomial-time many-one reduction denoted by ≤ m . Many-one equivalence between decision problems is denoted by ≡ m . For reductions among optimization problems we use approximation preserving reductions, also called AP-reductions, denoted by ≤ AP . AP-equivalence between optimization problems is denoted by ≡ AP .
We say that an optimization problem P AP-reduces to another optimization problem Q, denoted P ≤ AP Q, if there are two polynomial-time computable functions f and g and a real constant α ≥ 1 such that for all r > 1 and all P-instances x the following conditions hold.
• f (x) is a Q-instance or the generic unsolvable instance ⊥ (which is not part of Q).
• If x admits feasible solutions, then f (x) is different from ⊥ and also admits feasible solutions.
• For any feasible solution y ′ of f (x), g(x, y ′ ) is a feasible solution of x.
• If y ′ is an r-approximate solution of the Q-instance f (x), then g(x, y ′ ) is a (1 + (r − 1)α + o(1))-approximate solution of the P-instance x, where o(1) refers to the size of x.
Our definition of AP-reducibility slightly extends the one in [3] by introducing a generic unsolvable instance ⊥. This extension allows us to reduce problems with unsolvable instances to problems without them, as long as the unsolvable instances can be detected in polynomial time, by making f map the unsolvable instances to ⊥. This practice has been implicit in previous work, e.g. [22].
We also need a slightly non-standard variation of AP-reductions. We say that an optimization problem P AP-Turing-reduces to another optimization problem Q if there is a polynomial-time oracle algorithm A and a constant α ≥ 1 such that for all r > 1 on any input x for P
• if all oracle calls with a Q-instance x ′ are answered with a feasible Q-solution y for x ′ , then A outputs a feasible P-solution for x, and
• if for every call the oracle answers with an r-approximate solution, then A computes a (1 + (r − 1)α + o(1))-approximate solution for the P-instance x.
It is straightforward to check that AP-Turing-reductions are transitive. Moreover, if P AP-Turing-reduces to Q with constant α and Q has an r(n)-approximation algorithm, then there is an αr(n)-approximation algorithm for P.
We will relate our problems to well-known optimization problems, by calling the problem P under investigation Q-complete if P ≡ AP Q. This notion of completeness is stricter than the one in [22], since the latter relies on A-reductions. For Q, we will consider the following optimization problems analyzed in [22].
Problem WeightedMinOnes(Γ)
Input: A conjunctive formula ϕ over relations from Γ and a weight function w : V → N assigning non-negative integer weights to the variables of ϕ.
Solution: An assignment m satisfying ϕ.
Objective: Minimum value of ∑ x : m(x)=1 w(x).
We now define some well-studied problems to which we will relate our problems. Note that these problems do not depend on any parameter.

Problem NearestCodeword
Input: A matrix A ∈ Z k×l 2 and a vector m ∈ Z l 2 . Solution: A vector x ∈ Z k 2 . Objective: Minimum Hamming distance hd(xA, m).
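The following brute-force sketch (our own code, exponential in k and therefore for intuition only) makes the objective explicit: it searches all messages x ∈ Z 2 k for the codeword xA closest to m.

```python
from itertools import product

def hd(a, b):
    """Hamming distance between two 0/1-tuples of equal length."""
    return sum(x != y for x, y in zip(a, b))

def nearest_codeword(A, m):
    """Return a codeword xA of the code generated by the rows of A closest to m."""
    k, l = len(A), len(m)
    best = None
    for x in product((0, 1), repeat=k):  # all messages x in Z_2^k
        word = tuple(sum(x[i] * A[i][j] for i in range(k)) % 2 for j in range(l))
        if best is None or hd(word, m) < hd(best, m):
            best = word
    return best

A = [(1, 0, 1), (0, 1, 1)]  # generator matrix of a small linear code
print(hd(nearest_codeword(A, (1, 1, 1)), (1, 1, 1)))  # → 1
```

The hardness results quoted below show that no polynomial-time algorithm can approximate this search well unless P = NP.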

Problem MinDistance
Input: A matrix A ∈ Z k×l 2 . Solution: A non-zero vector x ∈ Z l 2 with Ax = 0. Objective: Minimum Hamming weight hw(x).
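Analogously, a brute-force sketch of MinDistance (our own code, exponential in l) enumerates the kernel of the parity-check matrix A and returns the least weight of a non-zero member:

```python
from itertools import product

def min_distance(A):
    """Least Hamming weight of a non-zero x in Z_2^l with Ax = 0."""
    k, l = len(A), len(A[0])
    best = None
    for x in product((0, 1), repeat=l):
        if any(x) and all(sum(A[i][j] * x[j] for j in range(l)) % 2 == 0
                          for i in range(k)):
            w = sum(x)
            if best is None or w < best:
                best = w
    return best

# Parity-check matrix of the [3,1] repetition code {000, 111}: minimum distance 3.
print(min_distance([(1, 1, 0), (0, 1, 1)]))  # → 3
```

For affine constraint languages, MSD coincides with exactly this computation, as discussed in the introduction.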

Problem MinHornDeletion
Input: A conjunctive formula ϕ over relations from {x ∨ y ∨ ¬z, x, ¬x}. Solution: An assignment m to ϕ. Objective: Minimum number of unsatisfied conjuncts of ϕ.
NearestCodeword, MinDistance, and MinHornDeletion are known to be NP-hard to approximate within a factor 2 Ω(log 1−ε n) for every ε > 0 [1,16,22]. Thus, if a problem P is equivalent to any of these problems, it follows that P ∉ APX unless P = NP.

Satisfiability
We also use the classic problem SAT(Γ) asking for the satisfiability of a given conjunctive formula over a constraint language Γ. Schaefer [27] completely classified its complexity: SAT(Γ) is decidable in polynomial time if Γ is 0-valid (Γ ⊆ iI 0 ), 1-valid (Γ ⊆ iI 1 ), Horn (Γ ⊆ iE 2 ), dual Horn (Γ ⊆ iV 2 ), bijunctive (Γ ⊆ iD 2 ), or affine (Γ ⊆ iL 2 ); otherwise it is NP-complete. Moreover, we need the decision problem AnotherSAT(Γ): Given a conjunctive formula over Γ and a satisfying assignment m, is there another satisfying assignment m ′ different from m? The complexity of this problem was completely classified by Juban [20].

Linear and Integer Programming
A unimodular matrix is a square integer matrix having determinant +1 or −1. A totally unimodular matrix is a matrix for which every square non-singular submatrix is unimodular; such a matrix need not be square itself, and all its entries are 0, +1, or −1. If A is a totally unimodular matrix and b is an integral vector, then for any linear functional f for which the linear program min{f (x) | Ax ≥ b} attains a real minimum, the minimum is also attained at an integral point.
That is, the feasible region {x | Ax ≥ b} is an integral polyhedron. For this reason, linear programming methods can be used to obtain the solutions for integer linear programs in this case. Linear programs can be solved in polynomial time, hence so can integer programs with totally unimodular matrices. For details see the monograph by Schrijver [30].
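As an illustration of the definition (a brute-force sketch of our own, feasible only for small matrices), total unimodularity can be checked by enumerating all square submatrices and testing their determinants:

```python
from itertools import combinations

def det(M):
    """Determinant of an integer matrix by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def totally_unimodular(A):
    """Check every square submatrix has determinant in {-1, 0, 1}."""
    rows, cols = len(A), len(A[0])
    for r in range(1, min(rows, cols) + 1):
        for ri in combinations(range(rows), r):
            for ci in combinations(range(cols), r):
                if det([[A[i][j] for j in ci] for i in ri]) not in (-1, 0, 1):
                    return False
    return True

# Interval matrix (consecutive ones in each row): known to be totally unimodular.
print(totally_unimodular([[1, 1, 0], [0, 1, 1], [0, 0, 1]]))  # → True
```

The check is exponential; in the proofs below total unimodularity is instead established structurally for the specific matrices that arise.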

Results
This section presents the problems we consider and our results; the proofs follow in subsequent sections. The input to all our problems is a conjunctive formula over a constraint language. The satisfying assignments of the formula, i.e. its models or solutions, form a Boolean relation that can be understood as an associated generalized binary code. As for linear codes, the minimization target is always the Hamming distance between the codewords or models. Our three problems differ in the information additionally available for computing the required Hamming distance. Given a formula and an arbitrary assignment, the first problem asks for a solution closest to the given assignment.
Problem NearestSolution(Γ), NSol(Γ) Input: A conjunctive formula ϕ over relations from Γ and an assignment m to the variables occurring in ϕ, which is not required to satisfy ϕ. Solution: An assignment m ′ satisfying ϕ (i.e. a codeword of the code described by ϕ). Objective: Minimum Hamming distance hd(m, m ′ ).
Note that the problem generalizes the MinOnes problem from [22]. Indeed, if we take the all-zero assignment m = 0 · · · 0 as part of the input, we get exactly the MinOnes problem as a special case.
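A brute-force sketch of NSol in this spirit (our own encoding of relations and formulas; exponential in the number of variables) makes the specialization to MinOnes visible: with the all-zero m the minimized distance is exactly the Hamming weight.

```python
from itertools import product

def nsol(formula, variables, m):
    """Return (distance, model) for a model of the formula closest to m."""
    best = None
    for bits in product((0, 1), repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(tuple(a[v] for v in vs) in rel for rel, vs in formula):
            d = sum(a[v] != m[v] for v in variables)
            if best is None or d < best[0]:
                best = (d, a)
    return best

# Example: (x OR y) AND (y OR z) with the all-zero assignment, i.e. MinOnes.
x_or = {(0, 1), (1, 0), (1, 1)}
phi = [(x_or, ("x", "y")), (x_or, ("y", "z"))]
d, a = nsol(phi, ["x", "y", "z"], {"x": 0, "y": 0, "z": 0})
print(d)  # → 1
```

Here setting only y to 1 satisfies both atoms, so the optimum is 1.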
Theorem 3 (illustrated in Figure 1) gives a complete complexity classification of the optimization problem NSol(Γ) for every Boolean constraint language Γ. Given a constraint and one of its solutions, the second problem asks for another solution closest to the given one.

Problem NearestOtherSolution(Γ), NOSol(Γ)
Input: A conjunctive formula ϕ over relations from Γ and an assignment m satisfying ϕ.
Solution: An assignment m ′ satisfying ϕ with m ′ ≠ m.
Objective: Minimum Hamming distance hd(m, m ′ ).

The difference between the problems NearestSolution and NearestOtherSolution is the knowledge, or its absence, whether the input assignment satisfies the constraint. Moreover, for NearestSolution we may output the given assignment if it satisfies the formula, while for NearestOtherSolution we have to output an assignment different from the one given as input.
Proof. The proof is split into several propositions presented in Section 6.
The third problem does not take any assignment as input, but asks for two solutions which are as close to each other as possible. We optimize once more the Hamming distance between the solutions. The MinSolutionDistance problem generalizes the notion of minimum distance of an error-correcting code.

Problem MinSolutionDistance(Γ), MSD(Γ)
Input: A conjunctive formula ϕ over relations from Γ.
Solution: Two assignments m and m ′ satisfying ϕ with m ≠ m ′ .
Objective: Minimum Hamming distance hd(m, m ′ ).

The following theorem is a more fine-grained analysis of the result published by Vardy in [31], extended to an optimization problem.
Proof. The proof is split into several propositions presented in Section 7. (iii) For Γ ⊆ iI, every formula ϕ over Γ has at least two solutions, since it is both 0-valid and 1-valid.
Thus TwoSolutionSAT(Γ) is in P, and Proposition 54 yields that MSD(Γ) is n-approximable. By Proposition 56 this approximation is indeed tight. (iv) According to [20], AnotherSAT(Γ) is NP-hard for iI 0 ⊆ ⟨Γ⟩ or iI 1 ⊆ ⟨Γ⟩. By Lemma 51 it follows that TwoSolutionSAT(Γ) is NP-hard, too. For iN 2 ⊆ ⟨Γ⟩ we can reduce the NP-hard problem SAT(Γ) to TwoSolutionSAT(Γ). Hence it is NP-complete to decide whether a feasible solution for MSD(Γ) exists in all three cases.
The three optimization problems can be transformed into decision problems in the usual way: we add an integer bound k to the input and ask whether the Hamming distance satisfies hd(m, m ′ ) ≤ k. This yields the corresponding decision problems NSol d , NOSol d , and MSD d , whose complexity follows immediately from the theorems above. All cases in PO become polynomial-time decidable, whereas the other cases, which are APX-hard, become NP-complete. We thus obtain dichotomy theorems classifying each of the decision problems as polynomial-time decidable or NP-complete for all sets Γ of relations.
Figure 2: Lattice of co-clones with complexity classification for NOSol (legend: in poly-APX and tight; feasibility NP-complete; not applicable).

Applicability of Clone Theory and Duality
We show that clone theory is applicable to the problem NSol, and we establish a duality exploiting inner symmetries between co-clones, which shortens several proofs in the following sections.

Nearest Solution
There are two natural versions of NSol(Γ). In one version the formula ϕ is quantifier free while in the other one we do allow existential quantification. We call the former version NSol(Γ) and the latter NSol pp (Γ) and show that both versions are equivalent. Let NSol d (Γ) and NSol d pp (Γ) be the decision problems corresponding to NSol(Γ) and NSol pp (Γ), asking whether there is a satisfying assignment within a given bound.

Proposition 7. For any constraint language Γ we have the equivalences NSol(Γ) ≡ AP NSol pp (Γ) and NSol d (Γ) ≡ m NSol d pp (Γ).
Proof. The reduction from left to right is trivial in both cases. For the other direction, consider first an instance of NSol d pp (Γ) with formula ϕ, assignment m, and bound k. Let x 1 , . . . , x n be the free variables of ϕ and let y 1 , . . . , y ℓ be the existentially quantified ones, which we can assume to be disjoint. By discarding variables y i while not changing [ϕ], we can assume that each variable y i occurs in at least one atom of ϕ. We construct a quantifier-free formula ϕ ′ , where the non-quantified variables of ϕ get duplicated by a factor λ := (n + ℓ + 1) 2 such that the effect of quantified variables becomes negligible. For each variable z we define the set B(z) as follows: B(x i ) := {x 1 i , . . . , x λ i } consists of λ fresh copies of x i for each free variable x i , while B(y i ) := {y i } for each quantified variable y i . For every atom R(z 1 , . . . , z s ) in ϕ, the quantifier-free formula ϕ ′ over the variables ⋃ z B(z) contains all atoms R(z ′ 1 , . . . , z ′ s ) with z ′ i ∈ B(z i ) for 1 ≤ i ≤ s. Moreover, we construct an assignment B(m) of ϕ ′ by assigning to every variable x j i the value m(x i ) and to y i the value 0. Note that because there is an upper bound on the arities of relations from Γ, this is a polynomial-time construction.
Remark 8. Note that in the reduction from NSol d pp (Γ) to NSol d (Γ) we construct the assignment B(m) as an extension of m by setting all new variables to 0. In particular, if m is the constant 0-assignment, then so is B(m). We use this observation as we continue.
The following result is a technical lemma, which allows us to consider constraints with disjoint variables independently.
Lemma 9. Let ϕ(x, y) = ψ(x) ∧ χ(y) be a Γ-formula over a constraint language Γ and m an assignment over the disjoint variable blocks x and y. Let (ϕ, m) be an instance of NSol(Γ). Then OPT(ϕ, m) = OPT(ψ, m↾ x ) + OPT(χ, m↾ y ). Conversely, if s ψ ∈ [ψ] and s χ ∈ [χ] are optimal solutions for their respective problems, then s := s ψ ∪ s χ satisfies hd(s, m) = OPT(ϕ, m). We can also show that introducing explicit equality constraints does not change the complexity of our problem. We need two introductory lemmas. The first one deals with equalities that do not interfere with the other atoms of the given formula. Proof. Let (ϕ, m) be an instance of NSol(Γ ∪ {≈}). Without loss of generality we assume ϕ to be of the form ψ ∧ ε, where ψ is a Γ-formula and ε is a {≈}-formula. Let (V i ) i∈I be the unique finest partition of the variables in ε such that variables x, y are in the same partition class whenever x ≈ y occurs in ε.
For each index i ∈ I we designate a specific variable x i ∈ V i . Let ψ ′ be the formula obtained from ψ by substituting all occurrences of variables y ∈ V i by x i . Moreover, let I ′ be the set of indices i ∈ I such that x i actually occurs in ψ ′ , and let I ′′ := I ∖ I ′ be the set of indices without this property. We set ε ′ := ⋀ i∈I ′ ε i and ε ′′ := ⋀ i∈I ′′ ε i , where the formula ε i := ⋀ y∈V i (x i ≈ y) expresses the equivalence of the variables in V i . Note that the formulas ψ ∧ ε and χ := ψ ′ ∧ ε ′ ∧ ε ′′ contain the same variables and have identical sets of models. Now consider the formula ϕ ′ := ψ ′ ∧ ε ′ and the assignment m ′ := m↾ V ′ , where V ′ is the set of variables occurring in ϕ ′ . The pair (ϕ ′ , m ′ ) is an NSol(Γ ∪ {≈}) instance with the additional properties stated in the lemma. By construction we have χ = ϕ ′ ∧ ε ′′ , where the set V ′ of variables in ϕ ′ and the set V ′′ of variables in ε ′′ are disjoint. By Lemma 9 we obtain OPT(ϕ, m) = OPT(ϕ ′ , m ′ ) + OPT(ε ′′ , m↾ V ′′ ). An optimal solution s ε ′′ of ε ′′ and the optimal value d := OPT(ε ′′ , m↾ V ′′ ) can obviously be computed in polynomial time. Therefore the instance (ϕ, m, k) of NSol d (Γ ∪ {≈}) corresponds to the instance (ϕ ′ , m ′ , k − d) of the restricted decision problem in the polynomial-time many-one reduction.
Moreover, if s ′ is an r-approximate solution of (ϕ ′ , m ′ ) for some r ≥ 1, then s := s ′ ∪ s ε ′′ is a solution of ϕ, and we have hd(s, m) = hd(s ′ , m ′ ) + d ≤ r OPT(ϕ ′ , m ′ ) + d ≤ r (OPT(ϕ ′ , m ′ ) + d) = r OPT(ϕ, m), so the constructed solution s of ϕ is also r-approximate. This concludes the proof of the AP-reduction with factor α = 1.
When dealing with NSol(Γ∪{≈}), the previous lemma enables us to concentrate on instances where the formula ϕ has the form ψ(z 1 , . . . , z n , . . , V t are disjoint sets of variables, being also disjoint from the variables of the Γ-formula ψ. For each 1 ≤ i ≤ t the given assignment m can have equal distance to the zero vector and the all-ones vector on the variables in V i ∪ {x i }, or it can be closer to one of the constant vectors. It is convenient to group the equality constraints according to these three cases. The following lemma discusses how to remove those equality constraints, on whose variables m is not equidistant from 0 and 1. Lemma 11. Let Γ be a constraint language and ψ(z 1 , . . . , z n , x 1 , . . . , x α , v 1 , . . . , v β , w 1 , . . . , w γ ) be any Γ-formula containing precisely the distinct variables z 1 , . . . , z n , x 1 , . . . , x α , v 1 , . . . , v β and w 1 , . . . , w γ . Consider a formula

ϕ := ψ ∧ ⋀_{a=1}^{α} ⋀_{y∈X_a} (x_a ≈ y) ∧ ⋀_{b=1}^{β} ⋀_{y∈V_b} (v_b ≈ y) ∧ ⋀_{c=1}^{γ} ⋀_{y∈W_c} (w_c ≈ y), where X_1, …, X_α, V_1, …, V_β, W_1, …, W_γ are non-empty sets of variables that are pairwise disjoint and disjoint from the variables in ψ.
It is possible to construct, in polynomial time, a formula ϕ′, whose size is polynomial in the size of ϕ, and an assignment M, such that one can produce an (r-approximate) solution of (ϕ, m) from any (r-approximate) solution of (ϕ′, M) in polynomial time.
Proof. First, we describe how to construct the formula ϕ′. We abbreviate Z := {z_1, …, z_n, x_1, …, x_α} and blow up the remaining variables as follows: For each atom R(u_1, …, u_q) of ψ define a set of atoms {R(u′_1, …, u′_q) | (u′_1, …, u′_q) ∈ B(u_1) × ⋯ × B(u_q)}, where B(u) := {u} for u ∈ Z and B(u) is a suitable set of fresh copies of u otherwise; take the union over all these sets and define ψ′ as the conjunction of all its members, giving a formula over Z ∪ V′ ∪ W′ where V′ = ⋃_{u∈V} B(u) and W′ = ⋃_{u∈W} B(u). Adding again the equality constraints where m has equal distance from 0 and 1, we obtain ϕ′. This is a polynomial-time construction since the arities of relations in Γ are bounded.
Moreover, we define an assignment M to the variables u of ϕ′. Using this, we shall prove below that OPT(ϕ′, M) + d = OPT(ϕ, m). Thus, if S′ is an r-approximate solution of (ϕ′, M) for some r ≥ 1, then the solution S of (ϕ, m) obtained from it satisfies hd(S, m) = hd(S′, M) + d ≤ r·OPT(ϕ′, M) + d ≤ r·(OPT(ϕ′, M) + d) = r·OPT(ϕ, m). Let subsequently S′ be such that OPT(ϕ′, M) = hd(S′, M), and let s be a model of ϕ. Construct a model s′ of ϕ′ by putting s′(u) := s(u) for u ∈ Z and s′(u) := s(u′) for u ∈ B(u′) and u′ ∈ V ∪ W. As above we get hd(s, m) = hd(s′, M) + d, because the definitions let corresponding variables take corresponding values. By minimality of S′, we obtain hd(S″, M) ≤ hd(S′, M) ≤ hd(s′, M). If we additionally require that s be an optimal solution of (ϕ, m), then hd(s′, M) = OPT(ϕ, m) − d ≤ hd(S″, M). Thus, the distances hd(S″, M), hd(S′, M) and hd(s′, M) coincide, which implies the desired equality OPT(ϕ′, M) + d = OPT(ϕ, m).

The previous lemma, in fact, describes an AP-reduction from the specialized version of the problem NSol(Γ ∪ {≈}) discussed in Lemma 10 to an even more specialized variant (the analogous statement is true for the decision version: instances (ϕ, m, k) can be decided by considering (ϕ′, M, k − d) instead): namely, all equality constraints touch variables in Γ-atoms, and the given assignment has equal distance from the constant tuples on each variable block connected by equalities. In the next result we show how to remove also these equality constraints.
Proof. The reduction from left to right is trivial. For the other direction, consider first an instance of NSol_d(Γ ∪ {≈}) with formula ϕ, assignment m, and bound k. Applying the reductions indicated in Lemmas 10 and 11, we can assume (also for NSol(Γ ∪ {≈})) that ϕ is of the form ψ ∧ ⋀_{a=1}^{α} ⋀_{x∈I′_a} (x_a ≈ x) with a Γ-formula ψ containing the distinct variables z_1, …, z_n, x_1, …, x_α (n ≥ 0, α ≥ 1) and non-empty disjoint (from each other and from ψ) variable sets I′_a for 1 ≤ a ≤ α. Moreover, we can suppose that hd(m↾I_a, 0) = hd(m↾I_a, 1) =: c_a for all 1 ≤ a ≤ α, where I_a denotes the set I′_a ∪ {x_a}. We define c := ∑_{a=1}^{α} c_a, and we choose some ℓ-element index set I such that α/ℓ < 1, that is, ℓ ≥ α + 1 (we shall place another condition on ℓ at the end). We construct a formula ϕ′ as follows: For each atom of ψ form the set of atoms obtained by replacing each variable z_j by its copies z_{j,i} for i ∈ I; take the union over all these sets and let ϕ′ be the conjunction of all atoms in this union. This construction can be carried out in polynomial time since there is a bound on the arities of relations in Γ. Define an assignment M by M(x_a) := m(x_a) for 1 ≤ a ≤ α and M(z_{j,i}) := m(z_j) for 1 ≤ j ≤ n and i ∈ I. We claim that existence of solutions for (ϕ, m, k) can be decided by checking for solutions of (ϕ′, M, ℓ(k − c) + α). The argument is similar to that of Proposition 7: ψ is (un)satisfiable if and only if ϕ and ϕ′ are, so we have a correct answer in the unsatisfiable case. Otherwise, consider a solution s to (ϕ, m, k); letting Z := {z_1, …, z_n}, copying s↾Z to all blocks of copies yields a solution of (ϕ′, M, ℓ(k − c) + α). Conversely, let S′ be a solution of (ϕ′, M, ℓ(k − c) + α). As in Proposition 7 we can construct a solution S″ being constant on each set of copies. Suppose now that S′ is an r-approximate solution for (ϕ′, M) for some r ≥ 1, i.e., we have hd(S′, M) ≤ r·OPT(ϕ′, M). Constructing a model S of ϕ as before, we obtain ℓ·hd(S↾Z, m↾Z) ≤ hd(S′, M) ≤ r·OPT(ϕ′, M).
Further, from an optimal solution of (ϕ, m) we get a model s′ of ϕ′ satisfying hd(s′, M) ≤ ℓ·OPT(ϕ, m). Multiplying this inequality by r, combining it with the previous inequalities and dividing by ℓ, we conclude that S is an r-approximate solution of (ϕ, m), unless OPT(ϕ, m) = 0; in the latter case we would have a unique optimal model of ϕ, namely m. Then m↾I_1 would have to be constant, implying hd(m↾I_1, 0) ≠ hd(m↾I_1, 1), as one distance would be zero and the other one positive, contradicting the assumption hd(m↾I_1, 0) = hd(m↾I_1, 1). This demonstrates an AP-reduction with factor 1.
Propositions 7 and 12 allow us to switch freely between formulas with quantifiers and equality and those without. Hence we may derive upper bounds in the setting without quantifiers and equality while using the latter in hardness reductions. In particular, we can use pp-definability when implementing a constraint language Γ by another constraint language Γ ′ . Hence it suffices to consider Post's lattice of co-clones to characterize the complexity of NSol(Γ) for every finite constraint language Γ.

Corollary 13. For constraint languages Γ and Γ′ with Γ′ ⊆ ⟨Γ⟩ we have NSol(Γ′) ≤_AP NSol(Γ); in particular, ⟨Γ⟩ = ⟨Γ′⟩ implies NSol(Γ) ≡_AP NSol(Γ′).
Next we prove that, in certain cases, unit clauses in the formula do not change the complexity of NSol.
Proof. The direction from left to right is obvious. For the other direction, we give an AP-reduction from NSol(Γ ∪ {[x], [¬x]}) to NSol(Γ ∪ {≈}). The latter is AP-equivalent to NSol(Γ) by Proposition 12.
The idea of the construction is to introduce two sets of variables y_1, …, y_{n²} and z_1, …, z_{n²} such that in any feasible solution all y_i and all z_i take the same value. By setting m(y_i) = 1 and m(z_i) = 0 for each i, any feasible solution m′ of small Hamming distance to m will have m′(y_i) = 1 and m′(z_i) = 0 for all i as well, because deviating from this would be prohibitively expensive. Finally, we simulate the unary relations [x] and [¬x] by x ≈ y_1 and x ≈ z_1, respectively. We now describe the reduction formally.
Consider a formula ϕ over Γ ∪ {[x], [¬x]} with the variables x_1, …, x_n and an assignment m. If (ϕ, m) fails to have feasible solutions, i.e., if ϕ is unsatisfiable, we can detect this in polynomial time by the assumption of the lemma and return the generic unsatisfiable instance ⊥. Otherwise, we construct a (Γ ∪ {≈})-formula ϕ′ over the variables x_1, …, x_n, y_1, …, y_{n²}, z_1, …, z_{n²} and an assignment m′. We obtain ϕ′ from ϕ by replacing every occurrence of a constraint [x] by x ≈ y_1 and every occurrence of [¬x] by x ≈ z_1. Finally, we add the atoms y_i ≈ y_1 and z_i ≈ z_1 for all i ∈ {2, …, n²}. Let m′ be the assignment of the variables of ϕ′ given by m′(x_i) = m(x_i) for each i ∈ {1, …, n}, and m′(y_i) = 1 and m′(z_i) = 0 for all i ∈ {1, …, n²}. To any feasible solution m′′ of ϕ′ we assign g(ϕ, m, m′′) as follows.
1. If ϕ is satisfied by m, we define g(ϕ, m, m ′′ ) to be equal to m.
2. Else if m′′(y_i) = 0 holds for all i ∈ {1, …, n²} or m′′(z_i) = 1 for all i ∈ {1, …, n²}, we define g(ϕ, m, m′′) to be any satisfying assignment of ϕ.

3. Otherwise, we have m′′(y_i) = 1 and m′′(z_i) = 0 for all i ∈ {1, …, n²}. In this case we define g(ϕ, m, m′′) to be the restriction of m′′ onto x_1, …, x_n.

Observe that all variables y_i and all z_i are forced to take the same value in any feasible solution, respectively, so g(ϕ, m, m′′) is always well-defined. The construction is an AP-reduction. Assume that m′′ is an r-approximate solution. We will show that g(ϕ, m, m′′) is also an r-approximate solution.
Case 1: g(ϕ, m, m ′′ ) computes the optimal solution, so there is nothing to show.
Case 2: Observe first that ϕ has a solution, because otherwise the instance would have been mapped to ⊥ and m′′ would not exist. Thus, g(ϕ, m, m′′) is well-defined and feasible by construction. Observe that m′ and m′′ disagree on all y_i or on all z_i, so we have hd(m′, m′′) ≥ n². Moreover, since ϕ has a feasible solution, it follows that OPT(ϕ′, m′) ≤ n. Since m′′ is an r-approximate solution, we have n · OPT(ϕ′, m′) ≤ n² ≤ hd(m′, m′′) ≤ r · OPT(ϕ′, m′). If OPT(ϕ′, m′) = 0, then m′ would have to be a model of ϕ′, and so its restriction to the x_i, namely m, would be a model of ϕ; this is handled in the first case, which is disjoint from the current one. Hence OPT(ϕ′, m′) ≥ 1 and we infer n ≤ r. Consequently, the distance hd(m, g(ϕ, m, m′′)) is bounded above by n ≤ r ≤ r · OPT(ϕ, m), where the last inequality holds because ϕ is not satisfied by m and thus the distance of any optimal solution from m is at least 1.

Inapplicability of Clone Closure
Corollary 13 shows that the complexity of NSol is not affected by existential quantification by giving an explicit reduction from NSol_pp to NSol. It does not seem possible to prove the same for NOSol and MSD. However, similar results hold for the conjunctive closure; thus we resort to minimal or irredundant weak bases of co-clones instead of usual bases.

Proof. We prove only the part that Γ′ ⊆ Γ^∧ implies NOSol(Γ′) ≤_AP NOSol(Γ). The other results will be clear from that reduction, since the proof is generic and therefore holds for both NOSol and MSD, as well as for their decision variants.
Let a Γ ′ -formula ϕ be an instance of NOSol(Γ ′ ). Since Γ ′ ⊆ Γ ∧ , every constraint R(x 1 , . . . , x k ) of ϕ can be written as a conjunction of constraints over relations from Γ. Substitute the latter into ϕ, obtaining ϕ ′ . Now ϕ ′ is an instance of NOSol(Γ), where ϕ ′ is only polynomially larger than ϕ. As ϕ and ϕ ′ have the same variables and hence the same models, also the closest distinct models of ϕ and ϕ ′ are the same.

Duality
Given a relation R ⊆ {0, 1}^n, its dual relation is dual(R) = {m̄ | m ∈ R}, i.e., the relation containing the pointwise complements m̄ of the tuples m from R. Duality naturally extends to sets of relations and co-clones: we define dual(Γ) = {dual(R) | R ∈ Γ} as the set of dual relations to Γ. Since taking complements is involutive, duality is a symmetric relation: if a relation R′ (a set of relations Γ′) is dual to R (to Γ), then R (Γ) is also dual to R′ (to Γ′). By a simple inspection of the bases of co-clones in Table 2, we can easily see that many co-clones are dual to each other; for instance, iE_2 is dual to iV_2. The following proposition shows that it is sufficient to consider only one half of Post's lattice of co-clones.
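The duality operation is easy to make concrete. The following sketch (our illustration; the relation names are ours) computes the dual of a Boolean relation and checks that duality is involutive, using the implication relation [x → y] as an example.

```python
def dual(relation):
    """Pointwise complement of every tuple in a Boolean relation."""
    return {tuple(1 - b for b in t) for t in relation}

# The implication x -> y as a Boolean relation.
IMPL = {(0, 0), (0, 1), (1, 1)}

# dual([x -> y]) consists of the complemented tuples, i.e. [y -> x].
print(sorted(dual(IMPL)))        # [(0, 0), (1, 0), (1, 1)]
# Taking complements is involutive, so duality is symmetric.
print(dual(dual(IMPL)) == IMPL)  # True
```

The second check mirrors the symmetry argument in the text: applying dual twice returns the original relation.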
Proof. Let ϕ be a Γ-formula and m an assignment to ϕ. We construct a dual(Γ)-formula ϕ′ by substituting every atom R(x) by dual(R)(x). The assignment m satisfies ϕ if and only if its pointwise complement m̄ satisfies ϕ′. Moreover, hd(m, m′) = hd(m̄, m̄′).

Finding the Nearest Solution
This section contains the proof of Theorem 3. We first consider the polynomial-time cases followed by the cases of higher complexity.

Polynomial-Time Cases
Proposition 17. If a constraint language Γ is both bijunctive and affine (Γ ⊆ iD 1 ), then NSol(Γ) can be solved in polynomial time.
Proof. The relations [x ⊕ y] and [x] form a base Γ′ of iD_1, so it suffices to consider NSol(Γ′) by Corollary 13. Every Γ′-formula ϕ is equivalent to a linear system of equations over the Boolean ring Z_2 consisting of equations of type x ⊕ y = 1 and x = 1. Substitute the fixed values x = 1 into the equations of the type x ⊕ y = 1 and propagate. If a contradiction is found thereby, reject the input. After an exhaustive application of this rule only equations of the form x ⊕ y = 1 remain. For each of them put an edge {x, y} into E, defining an undirected graph G = (V, E), whose vertices V are the unassigned variables. If G is not bipartite, then ϕ has no solutions, so we can reject the input. Otherwise, compute a bipartition V = L ∪ R. We assume that G is connected; if not, perform the following algorithm for each connected component (cf. Lemma 9). Assign the value 0 to each variable in L and the value 1 to each variable in R; compare this assignment with the complementary one and return whichever is closer to m.

The unit clauses [¬x] and [x] determine a unique value for the respective variable, therefore we can eliminate unit clauses and propagate the values. If a contradiction occurs, we reject the input. It thus remains to consider formulas ϕ containing only binary implicative clauses of type x → y.
Let V be the set of variables in ϕ, and for i ∈ {0, 1} let V_i = {x ∈ V | m(x) = i} be the set of variables mapped to value i by the assignment m. We transform the formula ϕ into a linear programming problem as follows. For each clause x → y we add the inequality y ≥ x, and for each variable x ∈ V we add the constraints x ≥ 0 and x ≤ 1. As linear objective function we use f(x) = ∑_{x∈V_0} x + ∑_{x∈V_1} (1 − x). For an arbitrary solution m′, it counts the number of variables that change their value between m and m′, i.e., f(m′) = hd(m, m′). This way we obtain the (integer) linear programming problem (f, Ax ≥ b), where A is a totally unimodular matrix and b is an integral column vector.
The rows of A consist of the left-hand sides of inequalities y − x ≥ 0, x ≥ 0, and −x ≥ −1, which constitute the system Ax ≥ b. Every entry in A is 0, +1, or −1. Every row of A has at most two non-zero entries. For the rows with two entries, one entry is +1, the other is −1. According to Condition (iv) in Theorem 19.3 in [30], this is a sufficient condition for A being totally unimodular. As A is totally unimodular and b is an integral vector, f has integral minimum points, and one of them can be computed in polynomial time (see e.g. [30,Chapter 19]).
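Total unimodularity can be verified directly on small instances. The sketch below (our illustration, not part of the proof) brute-forces all square submatrices of the constraint matrix for a single clause x → y and checks that every determinant lies in {−1, 0, 1}; this is feasible only for tiny matrices, whereas the proof relies on the structural condition from [30].

```python
from itertools import combinations

def det(M):
    """Determinant by Laplace expansion; fine for the tiny matrices here."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def totally_unimodular(A):
    """Brute-force check: every square submatrix has determinant -1, 0 or 1."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# Constraint matrix of the system for the single clause x -> y (columns x, y):
# y - x >= 0,  x >= 0,  y >= 0,  -x >= -1,  -y >= -1
A = [[-1, 1], [1, 0], [0, 1], [-1, 0], [0, -1]]
print(totally_unimodular(A))  # True
```

Each row has at most two non-zero entries, one +1 and one −1 when there are two, matching Condition (iv) cited from [30].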

Hard Cases
We start off with an easy corollary of Schaefer's dichotomy.
Proposition 19. Let Γ be a finite set of Boolean relations. If iN_2 ⊆ ⟨Γ⟩, then it is NP-complete to decide whether a feasible solution exists for NSol(Γ); otherwise, NSol(Γ) ∈ poly-APX.
Proof. If iN_2 ⊆ ⟨Γ⟩, NP-completeness of the feasibility problem follows from Schaefer's dichotomy. For the other cases, let (ϕ, m) be an instance of NSol(Γ); we give an n-approximation algorithm, where n denotes the number of variables in ϕ. If m satisfies ϕ, return m. Otherwise compute an arbitrary solution m′ of ϕ, which can be done in polynomial time by Schaefer's theorem. This algorithm is n-approximate: If m satisfies ϕ, the algorithm returns the optimal solution; otherwise we have OPT(ϕ, m) ≥ 1 and hd(m, m′) ≤ n, hence the answer m′ of the algorithm is n-approximate.
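The n-approximation above can be sketched in a few lines. In the following illustration (our code; the helper names are ours), a brute-force search stands in for the polynomial-time Schaefer solver available in the non-hard cases, so only the approximation logic is faithful to the proof.

```python
from itertools import product

def hd(a, b):
    """Hamming distance of two equal-length 0/1 tuples."""
    return sum(x != y for x, y in zip(a, b))

def nsol_n_approx(satisfies, n, m):
    """n-approximation for NSol: return m itself if it is a model,
    otherwise any model at all (brute force stands in for the
    polynomial-time solver from Schaefer's theorem)."""
    if satisfies(m):
        return m
    for cand in product((0, 1), repeat=n):
        if satisfies(cand):
            return cand
    return None  # formula unsatisfiable

# Example: phi = (x1 OR x2) AND (NOT x1), given assignment m = (1, 0).
phi = lambda a: (a[0] or a[1]) and not a[0]
sol = nsol_n_approx(phi, 2, (1, 0))
print(sol, hd((1, 0), sol))  # (0, 1) 2 -- distance at most n = 2
```

Since any returned model differs from m in at most n positions and OPT ≥ 1 whenever m is not a model, the output is n-approximate.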

APX-Complete Cases
We start with reductions from the optimization version of vertex cover. Since the relation [x ∨ y] is a straightforward Boolean encoding of vertex cover, we immediately get the following result.
We discuss the former case, the latter one being symmetric and provable from the first one by Proposition 16.
Proof. Γ′ := {[x ⊕ y], [x → y]} is a base of iD_2. By Corollary 13 it suffices to show that NSol(Γ′) is in APX. Let (ϕ, m) be an instance of this problem. Feasibility for ϕ can be encoded as an integer program as follows: Every constraint x ⊕ y induces an equation x + y = 1, every constraint x → y an inequality x ≤ y. If we restrict all variables to {0, 1} by the appropriate inequalities, it is clear that an assignment m′ satisfies ϕ if and only if it satisfies the linear system with these side conditions. As objective function we use f(x) := ∑_{x∈V_0} x + ∑_{x∈V_1} (1 − x), where V_i is the set of variables mapped to i by m. Clearly, for every solution m′ we have f(m′) = hd(m, m′). The 2-approximation algorithm from [17] for integer linear programs in which every inequality contains at most two variables completes the proof.
Proof. Γ′ := {[x_1 ∨ ⋯ ∨ x_ℓ], [x → y], [¬x], [x]} is a base of iS^ℓ_{00}. By Corollary 13 it suffices to show that NSol(Γ′) is in APX. Let (ϕ, m) be an instance of this problem. We use an approach similar to the one for the corresponding case in [22], again writing ϕ as an integer program. We write constraints x_1 ∨ ⋯ ∨ x_ℓ as inequalities x_1 + ⋯ + x_ℓ ≥ 1, constraints x → y as x ≤ y, ¬x as x = 0, and x as x = 1. Moreover, we add x ≥ 0 and x ≤ 1 for each variable x. It is easy to check that the feasible Boolean solutions of ϕ and of the linear system coincide. As objective function we use f(x) := ∑_{x∈V_0} x + ∑_{x∈V_1} (1 − x), where V_i is the set of variables mapped to i by m. Clearly, for every solution m′ we have f(m′) = hd(m, m′). Therefore it suffices to approximate the optimal solution for the integer linear program.
To this end, let m ′′ be a (generally non-integer) solution to the relaxation of the linear program, which can be computed in polynomial time. We construct m ′ by setting m ′ (x) = 0 if m ′′ (x) < 1/ℓ and m ′ (x) = 1 if m ′′ (x) ≥ 1/ℓ. As ℓ ≥ 2, we get hd(m, m ′ ) = f (m ′ ) ≤ ℓf (m ′′ ) ≤ ℓ · OPT(ϕ, m). It is easy to check that m ′ is a feasible solution, which completes the proof.
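The rounding step can be sketched as follows. This is our illustration with hypothetical fractional values, not an actual LP solver run: we only demonstrate the 1/ℓ threshold and check that the rounded objective grows by a factor of at most ℓ.

```python
def round_threshold(frac, ell):
    """Round a fractional LP solution with threshold 1/ell:
    values below 1/ell go to 0, the rest go to 1."""
    return [0 if v < 1 / ell else 1 for v in frac]

def f(assignment, V0, V1):
    """Objective counting disagreements with m: variables indexed by V0
    should be 0, variables indexed by V1 should be 1."""
    return sum(assignment[i] for i in V0) + sum(1 - assignment[i] for i in V1)

# Hypothetical fractional optimum of the relaxation for ell = 3,
# e.g. satisfying a clause constraint x0 + x1 + x2 >= 1:
ell = 3
frac = [0.4, 0.3, 0.3, 0.1]
V0, V1 = [0, 1, 2], [3]          # m maps x0..x2 to 0 and x3 to 1
integral = round_threshold(frac, ell)
print(integral)                  # [1, 0, 0, 0]
print(f(integral, V0, V1) <= ell * f(frac, V0, V1))  # True
```

The threshold guarantees that every ℓ-ary clause keeps a variable rounded to 1 (some summand of a satisfied inequality is at least 1/ℓ), while each objective term is inflated by at most a factor of ℓ.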

NearestCodeword-Complete Cases
This section essentially uses the facts that MinOnes is NearestCodeword-complete for the co-clone iL_2 and that it is a special case of NSol. The following result was stated by Khanna et al. for completeness via A-reductions [22, Theorem 2.14]. A closer look at the proof reveals that it also holds for the stricter notion of completeness via AP-reductions that we use. In total we thus have that WeightedMinOnes(Γ) AP-reduces to WeightedMinCSP({odd_3, even_3}). We conclude by observing that MinOnes is a particular case of WeightedMinOnes and that NearestCodeword is the same as WeightedMinCSP({odd_3, even_3}), yielding MinOnes(Γ) ≤_AP NearestCodeword.

We proceed by reducing NSol(Γ′) to a subproblem of NSol_pp(Γ′) where only instances (ϕ, 0) are considered. Then, using Proposition 7 and Remark 8, this reduces to a subproblem of NSol(Γ′) with the same restriction on the assignments, which is exactly MinOnes(Γ′). Note that [x ⊕ y] is equal to ∃z∃z′ (even_4(x, y, z, z′) ∧ ¬z ∧ z′), so we can freely use [x ⊕ y] in any Γ′-formula. Let formula ϕ and assignment m be an instance of NSol(Γ′). We copy all clauses of ϕ to ϕ′. For each variable x of ϕ for which m(x) = 1, we take a new variable x′ and add the constraint x ⊕ x′ to ϕ′. Moreover, we existentially quantify x. Clearly, there is a bijection I between the satisfying assignments of ϕ and those of ϕ′: For every solution s of ϕ we get a solution I(s) of ϕ′ by setting for each x′ introduced in the construction of ϕ′ the value I(s)(x′) to the complement of s(x). Moreover, we have hd(m, s) = hd(0, I(s)). This yields a trivial AP-reduction with α = 1.
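The pp-definition of [x ⊕ y] used above can be verified exhaustively. The following check (our illustration; function names are ours) confirms that fixing z = 0 and z′ = 1 in even_4 yields exactly the binary exclusive-or relation.

```python
from itertools import product

def even4(x, y, z, zp):
    """even_4 holds iff the four arguments sum to an even number."""
    return (x + y + z + zp) % 2 == 0

def xor_via_even4(x, y):
    """[x ⊕ y] expressed as ∃z∃z′ (even_4(x, y, z, z′) ∧ ¬z ∧ z′)."""
    return any(even4(x, y, z, zp) and z == 0 and zp == 1
               for z, zp in product((0, 1), repeat=2))

for x, y in product((0, 1), repeat=2):
    assert xor_via_even4(x, y) == bool(x ^ y)
print("the gadget defines x ⊕ y")
```

With z = 0 and z′ = 1, the parity condition becomes x + y + 1 even, i.e., x + y odd, which is precisely x ⊕ y; the witnesses for the existential quantifiers are unique.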
Proof. These results are stated in [22, Theorem 2.14] for completeness via A-reductions. The actual proof in [22,Lemma 8.7 and Lemma 8.14], however, uses AP-reductions, hence the results also hold for our stricter notion of completeness.
Proof. Let formula ϕ and assignment m be an instance of NSol({x ∨ y ∨ ¬z}) over the variables x_1, …, x_n. Let V_1 be the set of variables x_i with m(x_i) = 1. We construct a {x ∨ y ∨ ¬z, x ∨ y}-formula ϕ′ by adding suitable clauses to ϕ for each x_i ∈ V_1. We set the weights of the variables of ϕ′ as follows: For x_i ∈ V_1 we set w(x_i) = 0; all other variables get weight 1.
To each satisfying assignment m ′ of ϕ ′ we construct the assignment m ′′ which is the restriction of m ′ to the variables of ϕ. This construction is an AP-reduction.
Note that m′′ is feasible if m′ is. Let m′ be an r-approximation of OPT(ϕ′). The other way round, we may assume that the weight of m′ cannot be decreased by local modifications; if this is not the case, then we can change m′ accordingly, decreasing the weight that way. It follows that w(m′) = n_0 + n_1.

Proof. Since {x ∨ y ∨ ¬z, x, ¬x} is a base of iV_2, by Corollary 13 it suffices to prove the reduction NSol({x ∨ y ∨ ¬z, x, ¬x}) ≤_AP WeightedMinOnes({x ∨ y ∨ ¬z, x ∨ y}). To this end, first reduce NSol({x ∨ y ∨ ¬z, x, ¬x}) to NSol({x ∨ y ∨ ¬z}) by Proposition 14 and then use Lemma 28.

The problem of finding feasible solutions of NSol(Γ), where iN ⊆ ⟨Γ⟩ ⊆ iI_0 or iN ⊆ ⟨Γ⟩ ⊆ iI_1, is polynomial-time decidable. Indeed, such a Γ is 0-valid or 1-valid, therefore the all-zero or the all-ones tuple is always guaranteed to exist as a feasible solution. Therefore Proposition 14 implies NSol({dup_3, x}) ≡_AP NSol({dup_3}); the latter problem reduces to NSol(Γ) because dup_3 ∈ iN ⊆ ⟨Γ⟩ and Corollary 13 applies.

Finding Another Solution Closest to the Given One
In this section we study the optimization problem NearestOtherSolution. We first consider the polynomialtime cases and then the cases of higher complexity.

Polynomial-Time Cases
Since we cannot take advantage of clone closure, we must proceed differently. We use the following result based on a theorem by Baker and Pixley [5].
Proposition 32 (Jeavons et al. [19]). Every bijunctive constraint R(x_1, …, x_n) is equivalent to the conjunction ⋀_{1≤i≤j≤n} R_{ij}(x_i, x_j), where R_{ij} is the projection of R to the coordinates i and j.
Proof. According to Proposition 32 we may assume that the formula ϕ is a conjunction of binary atoms R(x, y) and unary constraints of the form R(x, x).

Proposition 34. If Γ ⊆ iS^k_{00} or Γ ⊆ iS^k_{10} for some k ≥ 2, then NOSol(Γ) is in PO.
Proof. We perform the proof only for iS^k_{00}; Proposition 16 implies the same result for iS^k_{10}. The co-clone iS^k_{00} is generated by Γ′ := {[x_1 ∨ ⋯ ∨ x_k], [x → y], [¬x], [x]}. In fact, Γ′ is even a plain base of iS^k_{00} [12], meaning that every relation in Γ can be expressed as a conjunctive formula over relations in Γ′, without existential quantification or explicit equalities. Hence we may assume that ϕ is given as a conjunction of Γ′-atoms.
Note that x ∨ y is a polymorphism of Γ′, i.e., for any two solutions m_1, m_2 of ϕ their disjunction m_1 ∨ m_2, defined by (m_1 ∨ m_2)(x) := m_1(x) ∨ m_2(x) for all x, is also a solution of ϕ. Therefore we get the optimal solution m′ of an instance (ϕ, m) by flipping in m either some ones to zeros or some zeros to ones, but not both. To see this, assume the optimal solution m′ flips both ones and zeros. Then m′ ∨ m is a solution of ϕ that is closer to m than m′, which contradicts the optimality of m′.
Unary constraints fix the value of the constrained variable and can be eliminated by propagating the value to the other clauses (including removal of disjunctions containing implied positive literals and shortening disjunctions containing implied negative literals). This propagation does not lead to contradictions since m is a model of ϕ. For each of the remaining variables, x, we attempt to construct a model m x of ϕ with m x (x) = m(x) such that hd(m x , m) is minimal among all models with this property. This can be done in polynomial time as described below. If the construction of m x fails for every variable x, then m is the sole model of ϕ and the problem is not solvable. Otherwise choose one of the variables x for which hd(m x , m) is minimal and return m x as second solution m ′ .
It remains to describe the computation of m x . If m(x) = 0, we flip x to 1 and propagate this change iteratively along the implications, i.e., if x → y is a constraint of ϕ and m(y) = 0, we flip y to 1 and iterate. This kind of flip never invalidates any disjunctions, it could only lead to contradictions with conditions imposed by negative unit clauses (and since their values were propagated before such a contradiction would be immediate). For m(x) = 1 we proceed dually, flipping x to 0, removing x from disjunctions if applicable, and propagating this change backward along implications y → x where m(y) = 1. This can possibly lead to immediate inconsistencies with already inferred unit clauses, or it can produce contradictions through empty disjunctions, or it can create the necessity for further flips from 0 to 1 in order to obtain a solution (because in a disjunctive atom all variables with value 1 have been flipped, and thus removed). In all these three cases the resulting assignment does not satisfy ϕ, and there is no model that differs from m in x and that can be obtained by flipping in one way only. Otherwise, the resulting assignment satisfies ϕ, and this is the desired m x . Our process terminates after flipping every variable at most once, since we flip only in one way (from zeros to ones or from ones to zeros). Thus, m x is computable in polynomial time.
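The flip-and-propagate computation of m_x can be sketched for the simpler direction (flipping a 0 to 1 and propagating forward along implications). This is our illustration under simplifying assumptions: implications are given as index pairs, and the unit clauses and disjunctions handled in the proof by prior propagation are omitted.

```python
def flip_up(implications, m, x):
    """Compute a candidate model m_x with m_x[x] = 1 from a model m with
    m[x] = 0 by flipping x and propagating forward along implications
    u -> v, given as pairs (u, v) of variable indices."""
    mx = list(m)
    mx[x] = 1
    stack = [x]
    while stack:
        u = stack.pop()
        for (a, b) in implications:
            # a flipped premise forces its conclusion to 1 as well
            if a == u and mx[b] == 0:
                mx[b] = 1
                stack.append(b)
    return mx

# Example: implications x0 -> x1 and x1 -> x2, with the model m = (0, 0, 1).
imps = [(0, 1), (1, 2)]
print(flip_up(imps, [0, 0, 1], 0))  # [1, 1, 1]
```

Each variable is flipped at most once, so the propagation runs in polynomial time, mirroring the termination argument in the proof; the dual direction (flipping 1 to 0 backward along implications) is symmetric.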
Proof. Finding a feasible solution to NOSol(Γ) is exactly the problem AnotherSAT(Γ), which is NP-hard if and only if iI_1 ⊆ ⟨Γ⟩ or iI_0 ⊆ ⟨Γ⟩ according to Juban [20]. If AnotherSAT(Γ) is polynomial-time decidable, we can always find a feasible solution for NOSol(Γ) if it exists. Obviously, every feasible solution is an n-approximation of the optimal solution, where n is the number of variables in the input.

Tightness results
It will be convenient to consider the following decision problem asking for another solution that is not the complement, i.e., that does not have maximal distance from the given one.
Problem: AnotherSAT nc (Γ) Input: A conjunctive formula ϕ over relations from Γ and an assignment m satisfying ϕ. Question: Is there another satisfying assignment m ′ of ϕ, different from m, such that hd(m, m ′ ) < n, where n is the number of variables in ϕ?
Remark 36. AnotherSAT_nc(Γ) is NP-complete for iI_0 ⊆ ⟨Γ⟩ or iI_1 ⊆ ⟨Γ⟩, since already AnotherSAT(Γ) is NP-complete in these cases, as shown in [20]. Moreover, AnotherSAT_nc(Γ) is polynomial-time decidable if Γ is Horn (Γ ⊆ iE_2), dual Horn (Γ ⊆ iV_2), bijunctive (Γ ⊆ iD_2), or affine (Γ ⊆ iL_2), for the same reason as for AnotherSAT(Γ): For each variable x_i we flip the value of m(x_i), substitute the flipped value for x_i, and construct another satisfying assignment if it exists. Consider now the solutions which we get for every variable x_i. Either there is no solution for any variable, in which case AnotherSAT_nc(Γ) has no solution; or there are only the solutions which are the complement of m, in which case AnotherSAT_nc(Γ) has no solution either; or else we get a solution m′ with hd(m, m′) < n, leading also to a solution for AnotherSAT_nc(Γ). Hence, taking into account Proposition 38 below, we obtain a dichotomy result also for AnotherSAT_nc(Γ).
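The per-variable flipping procedure from the remark can be sketched as follows. In this illustration (our code), a brute-force search stands in for the polynomial-time solver that exists in the Horn, dual Horn, bijunctive and affine cases; only the case analysis over the n flip attempts is faithful to the remark.

```python
from itertools import product

def another_sat_nc(satisfies, n, m):
    """Decision sketch for AnotherSAT_nc: for each variable, look for a model
    that flips it, accepting it unless it is the complement of m."""
    comp = tuple(1 - b for b in m)
    for i in range(n):
        for cand in product((0, 1), repeat=n):
            # candidate must flip x_i, satisfy phi, and not be the complement
            if cand[i] != m[i] and satisfies(cand) and cand != comp:
                return cand  # witnesses 0 < hd(m, cand) < n
    return None

# Example: phi = (x0 -> x1) with given model m = (0, 0).
phi = lambda a: (not a[0]) or a[1]
print(another_sat_nc(phi, 2, (0, 0)))  # (0, 1)
```

If every run yields either no model or only the complement, the procedure correctly reports that no solution within distance less than n exists.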
Note that AnotherSAT_nc(Γ) is not compatible with existential quantification. Let ϕ(y, x_1, …, x_n) with model m be an instance of AnotherSAT_nc(Γ) and let m′ be a solution satisfying hd(m, m′) < n + 1. Now consider the formula ϕ_1(x_1, …, x_n) = ∃y ϕ(y, x_1, …, x_n), obtained by existentially quantifying the variable y, and the tuples m_1 and m′_1 obtained from m and m′ by omitting the first component. Both m_1 and m′_1 are still solutions of ϕ_1, but we cannot guarantee hd(m_1, m′_1) < n. Hence we need the equivalent of Proposition 15 for this problem, whose proof is analogous.
Proof. Containment in NP is clear; it remains to show hardness. Since AnotherSAT_nc is not compatible with existential quantification, we cannot use clone theory, but have to consider the three co-clones iN_2, iN, and iI separately and make use of minimal weak bases.
Case ⟨Γ⟩ = iN: Putting R := {000, 101, 110}, we give a reduction from AnotherSAT({R}), which is NP-hard [20] as ⟨{R}⟩ = iI_0. The problem remains NP-complete if we restrict it to instances (ϕ, 0), since R is 0-valid and any given model m other than the constant 0-assignment admits the trivial solution m′ = 0. Thus we can perform a reduction from this restricted problem.
Consider the relation R_iN = {0000, 1010, 1100, 1111, 0101, 0011}. Given a formula ϕ over R, we construct a formula ψ over R_iN by replacing every constraint R(x, y, z) with R_iN(x, y, z, w), where w is a new global variable. Moreover, we set m to the constant 0-assignment. This construction is a many-one reduction from the restricted version of AnotherSAT({R}) to AnotherSAT_nc({R_iN}).
To see this, observe that the tuples in R_iN that have a 0 in the last coordinate are exactly those in R × {0}. Thus any solution of ϕ can be extended to a solution of ψ by assigning 0 to w. Conversely, we observe that any solution m′ of the AnotherSAT_nc({R_iN}) instance (ψ, 0) is different from 0 and 1. As R_iN is complementive, we may assume m′(w) = 0. Then m′ restricted to the variables of ϕ solves the AnotherSAT({R}) instance (ϕ, 0).
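Both structural facts about R_iN used here can be confirmed mechanically; the following check is our illustration with the relations written out as tuple sets.

```python
R = {(0, 0, 0), (1, 0, 1), (1, 1, 0)}
R_iN = {(0, 0, 0, 0), (1, 0, 1, 0), (1, 1, 0, 0),
        (1, 1, 1, 1), (0, 1, 0, 1), (0, 0, 1, 1)}

# The tuples of R_iN with last coordinate 0 are exactly R x {0} ...
with_zero = {t for t in R_iN if t[3] == 0}
print(with_zero == {r + (0,) for r in R})  # True

# ... and R_iN is complementive: closed under taking complements.
complemented = {tuple(1 - b for b in t) for t in R_iN}
print(complemented == R_iN)  # True
```

The first check justifies extending solutions of ϕ by w ↦ 0; the second justifies the normalization m′(w) = 0 in the converse direction.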
Finally, observe that R_iN is a minimal weak base and Γ is a base of the co-clone iN, therefore we have R_iN ∈ Γ^∧ by Theorem 1. Now the NP-hardness of AnotherSAT_nc(Γ) follows from that of AnotherSAT_nc({R_iN}) by Proposition 37.
Case ⟨Γ⟩ = iN_2: We give a reduction from AnotherSAT_nc({R_iN}), which is NP-hard by the previous case. By Theorem 1, Γ^∧ contains the relation R_iN₂ = {m m̄ | m ∈ R_iN}. For an R_iN-formula ϕ(x_1, …, x_n), we construct an R_iN₂-formula ψ(x_1, …, x_n, x′_1, …, x′_n) by replacing every constraint R_iN(x, y, z, w) with R_iN₂(x, y, z, w, x′, y′, z′, w′). Assignments m for ϕ extend to assignments M for ψ by setting M(x′) := m̄(x). Conversely, assignments for ψ yield assignments for ϕ by restricting them to the variables in ϕ. Because every variable x_1, …, x_n assigned by models of ϕ actually occurs in some R_iN-atom in ϕ and hence in some R_iN₂-atom of ψ, and because of the structure of R_iN₂, any model of ψ distinct from M and M̄ restricts to a model of ϕ other than m or m̄. Consequently, this construction is again a reduction from AnotherSAT_nc({R_iN}) to AnotherSAT_nc({R_iN₂}), which itself reduces to AnotherSAT_nc(Γ) by Proposition 37.
Case ⟨Γ⟩ = iI: We proceed as in Case ⟨Γ⟩ = iN, but use R_iI = {0000, 0011, 0101, 1111} instead of R_iN, and {000, 011, 101} for R. Note that the R_iI-tuples with first coordinate 0 are exactly those in {0} × R. The relation R_iI is not complementive, but (as every variable assigned by any model of ψ occurs in some atomic R_iI-constraint) the only solution m′ with m′(w) = 1 is the constant 1-assignment, which is ruled out by the requirement hd(m, m′) < n. Hence we may again assume m′(w) = 0.
Proof. Assume that there is a constant ε > 0 with a polynomial-time n 1−ε -approximation algorithm for NOSol(Γ). We show how to use this algorithm to solve AnotherSAT nc (Γ) in polynomial time. Proposition 38 completes the proof.
Let (ϕ, m) be an instance of AnotherSAT_nc(Γ) with n variables. If n = 1, then we reject the instance. Otherwise, we construct a new formula ϕ′ and a new assignment m′ as follows. Let k be the smallest integer greater than 1/ε. Choose a variable x of ϕ and introduce n^k − n new variables x_i for i = 1, …, n^k − n. For every i ∈ {1, …, n^k − n} and every constraint R(y_1, …, y_ℓ) in ϕ such that x ∈ {y_1, …, y_ℓ}, construct a new constraint R(z^i_1, …, z^i_ℓ) by z^i_j = x_i if y_j = x and z^i_j = y_j otherwise; add all the newly constructed constraints to ϕ in order to get ϕ′. Moreover, we extend m to a model m′ of ϕ′ by setting m′(x_i) = m(x). Now run the n^{1−ε}-approximation algorithm for NOSol(Γ) on (ϕ′, m′). If the answer is m̄′, then reject; otherwise accept.
We claim that the algorithm described above is a correct polynomial-time algorithm for the decision problem AnotherSAT_nc(Γ) when Γ is complementive. Polynomial runtime is clear; it remains to show correctness. If the only solutions to ϕ are m and m̄, then, as n > 1, the only models of ϕ′ are m′ and m̄′. Hence the approximation algorithm must answer m̄′ and the output is correct. Now assume that there is a satisfying assignment m_s different from m and m̄. The relation [ϕ] is complementive, hence we may assume that m_s(x) = m(x). It follows that ϕ′ has a satisfying assignment m′_s for which 0 < hd(m′_s, m′) < n holds. But then the approximation algorithm must find a satisfying assignment m′′ for ϕ′ with hd(m′, m′′) < n · (n^k)^{1−ε} = n^{k(1−ε)+1}. Since k > 1/ε, it follows that hd(m′, m′′) < n^k. Consequently, m′′ is not the complement of m′ and the output of our algorithm is again correct.
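The choice of k can be checked by a short calculation; the following display (our rephrasing of the bound just used) makes the role of k > 1/ε explicit.

```latex
\[
  \operatorname{hd}(m',m'') \;<\; n\cdot\bigl(n^{k}\bigr)^{1-\varepsilon}
  \;=\; n^{\,k(1-\varepsilon)+1},
  \qquad\text{and}\qquad
  k > \tfrac{1}{\varepsilon}
  \;\Longleftrightarrow\; k\varepsilon > 1
  \;\Longleftrightarrow\; k(1-\varepsilon)+1 < k ,
\]
```

so indeed hd(m′, m′′) < n^k, and m′′ cannot be the complement of m′.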
When Γ is not complementive but both 0-valid and 1-valid (⟨Γ⟩ = iI), we perform the expansion algorithm described above for each variable of the formula ϕ and reject if the result is the complement for each run; the runtime remains polynomial.

MinDistance-Equivalent Cases
In this section we show that affine co-clones give rise to problems equivalent to MinDistance. We need to express an affine sum of an even number of variables by means of the minimal weak base of each of the affine co-clones. In the following lemma the existentially quantified variables are uniquely determined; therefore the existential quantifiers serve only to hide superfluous variables and do not pose any of the problems mentioned before.
Proof. Write out the constraint relations following the existential quantifiers as (conjunctions of) equalities. From this, the uniqueness of the valuations of the existentially quantified variables is easy to see, and likewise that any model of x_1 + ⋯ + x_{2n} = 0 also satisfies each of the formulas 1 to 5. Adding up the equalities behind the existential quantifiers shows the converse direction.
The following lemma shows that, for each co-clone in the affine case, MinDistance is AP-equivalent to a restricted version containing only constraints generating the minimal weak base.

Proof. Consider a co-clone B ∈ {iL, iL₀, iL₁, iL₂, iL₃} and a MinDistance instance represented by a matrix A ∈ Z₂^{k×l}. If one of the columns of A, say the i-th, is zero, then the i-th unit vector is an optimal solution to this instance with optimal value 1. Hence we assume from now on that none of the columns equals the zero vector.
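As an aside, the optimal value of such an instance is the minimum Hamming weight of a non-zero vector in the kernel of A over Z₂, i.e. the minimum distance of the linear code defined by A. A brute-force reference computation (ours, exponential and thus only for sanity checks on small matrices) might look as follows.

```python
from itertools import product

def min_distance(A):
    """Minimum Hamming weight of a non-zero x with Ax = 0 over GF(2);
    returns None when the kernel is trivial."""
    l = len(A[0])
    best = None
    for x in product((0, 1), repeat=l):
        if not any(x):
            continue  # skip the zero vector
        if all(sum(a * b for a, b in zip(row, x)) % 2 == 0 for row in A):
            w = sum(x)
            if best is None or w < best:
                best = w
    return best
```

A zero column immediately yields a kernel vector of weight 1, matching the observation above.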
Every row of A expresses the fact that a sum of n ≤ l variables equals zero. If n is odd, we extend this sum to one with n + 1 summands, thereby introducing a new variable v, which we existentially quantify and confine to zero using a unary [¬x]-constraint. Then we replace the expanded sum by the existential formula from Lemma 41 corresponding to the co-clone B under consideration. This way we have introduced, for every row, only linearly many new variables in terms of l, and for any feasible solution of the MinDistance problem the values of the existential variables needed to encode it are uniquely determined. Thus, taking the conjunction over all these formulas, we only have a linear growth in the size of the instance.
Next, we show how to deal with the existential quantifiers: First we transform the expression to prenex normal form, getting a formula ψ of the form ∃y_1 … y_p (ϕ(y_1, …, y_p, x_1, …, x_l)), which holds if and only if Ax = 0 for x = (x_1, …, x_l). We use the same blow-up construction regarding x_1, …, x_l as in Proposition 7 and Lemma 11 to make the influence of y_1, …, y_p on the Hamming distance negligible. For this we put J := {1, …, t} and introduce new variables x_i^j where 1 ≤ i ≤ l and j ∈ J. If u = x_i for some 1 ≤ i ≤ l, we define its blow-up set to be B(u) = {x_i^j | j ∈ J}; otherwise, for u ∈ {y_1, …, y_p}, we set B(u) = {u}. Now for each atom R(u_1, …, u_q) of ϕ we form the set of atoms {R(u′_1, …, u′_q) | (u′_1, …, u′_q) ∈ B(u_1) × ⋯ × B(u_q)}, and define the quantifier-free formula ϕ′ to be the conjunction of all atoms in the union of these sets. Note that this construction takes time polynomial in the size of ψ, and hence in the size of the input MinDistance instance whenever t is polynomial in the input size, because the atomic relations in ψ are at most octonary.
If s is an assignment of values to x making Ax = 0 true, we define s′(x_i^j) := s(x_i) and extend this to a model of ϕ′ by assigning the uniquely determined values to y_1, …, y_p. Let m′ be the model arising in this way from the zero assignment m. If s′ is any model of ϕ′, then for every 1 ≤ i ≤ l and all j ∈ J it satisfies, in particular, the copy of the conjunction in which x_i^1 is replaced by x_i^j; hence the vectors (s′(x_1^1), …, s′(x_l^1)) and (s′(x_1^1), …, s′(x_{i−1}^1), s′(x_i^j), s′(x_{i+1}^1), …, s′(x_l^1)) both belong to the kernel of A, and so does their difference, which is s′(x_i^j) − s′(x_i^1) times the i-th unit vector. As the i-th column of A is non-zero, we must have s′(x_i^j) = s′(x_i^1). This also implies that if s′ is zero on x_1^1, …, x_l^1, then it must be zero on all x_i^j (1 ≤ i ≤ l, j ∈ J) and thus coincide with m′. Therefore, every feasible solution to the NOSol-instance (ϕ′, m′) yields a non-zero vector (s′(x_1^1), …, s′(x_l^1)) in the kernel of A. Further, if s′ is an r-approximation to an optimal solution, i.e. we have hd(s′, m′) ≤ r · OPT(ϕ′, m′), then, as s′(x_i^1) = s′(x_i^j) holds for all j ∈ J and all 1 ≤ i ≤ l, we obtain a solution to the MinDistance problem with Hamming weight w such that t · w ≤ hd(s′, m′). Also, any optimal solution to the MinDistance instance can be extended to a not necessarily optimal solution s″ of (ϕ′, m′), for which one can bound the distance to m′ as follows: OPT(ϕ′, m′) ≤ hd(s″, m′) ≤ t · OPT(A) + p. Combining these inequalities, we can infer t · w ≤ r · t · OPT(A) + r · p, or w ≤ OPT(A) · (r + (r/OPT(A)) · (p/t)). We noted above that p is linearly bounded in the size of the input; thus choosing t quadratic in the size of the input bounds w by OPT(A)(r + o(1)), whence we have an AP-reduction with α = 1.

Proof. If an instance (ϕ, m) does not have feasible solutions, then it does not have nearest other solutions either. So we map it to the generic unsolvable instance ⊥. Consider now formulas ϕ over variables x_1, …, x_n with models m for which some feasible solution s_0 ≠ m exists (and has been computed).
We can assume ϕ to be of the form ψ(x_1, …, x_n) ∧ ⋀_{i∈I_1} x_i ∧ ⋀_{i∈I_0} ¬x_i, where ψ contains no unit constraints and I_1, I_0 ⊆ {1, …, n}. We transform ϕ to ϕ′ := ψ(x_1, …, x_n) ∧ ⋀_{i∈I_1} ⋀_{j=1}^{1+n²} (x_i ≈ y_j) ∧ ⋀_{i∈I_0} ⋀_{j=1}^{1+n²} (x_i ≈ z_j) with fresh variables y_j, z_j, and extend models of ϕ to models of ϕ′ in the natural way. Conversely, if s′ is a model of ϕ′ and s′(y_j) = 1 and s′(z_j) = 0 hold for all 1 ≤ j ≤ 1 + n², then we can restrict it to a model of ϕ. Other models of ϕ′ are not optimal and are mapped to s_0. It is not hard to see that this provides an AP-reduction with α = 1.
Proof. Since we lack compatibility with existential quantification, we shall deal with each co-clone B = ⟨Γ⟩ in the interval {iL, iL₀, iL₁, iL₂, iL₃} separately. First we perform the reduction from Lemma 42 to NOSol({R_B, [¬x]}). We need to find a reduction to NOSol({R_B}), as this reduces to NOSol(Γ) by Proposition 15 and Theorem 1.
This is simple in the cases of iL₀ and iL₂, since [¬x] = {x | R_{iL₀}(x, x, x, x)} ∈ {R_{iL₀}}^∧ (see Proposition 15) and [¬x] = {x | ∃y(R_{iL₂}(x, x, x, y, y, y, x, y))}, where the existential quantifier can be handled by an AP-reduction with α = 1 which drops the quantifier and extends every model by assigning 1 to all previously existentially quantified variables. Thereby (optimal) distances between models do not change at all.
In the remaining cases, we reduce NOSol({R_B, [¬x]}) to NOSol({R_B, ≈}) by Lemma 43, which now has to be reduced to NOSol({R_B}). This is obvious for B = iL, where equality constraints x ≈ y can be expressed as R_{iL}(x, x, x, y) ∈ {R_{iL}}^∧ (cf. Proposition 15). For iL₁ the same can be done using the formula ∃z(R_{iL₁}(x, y, z, z)), where the existential quantifier can be removed by the same sort of simple AP-reduction with α = 1 as employed for iL₂. Finally, for iL₃ we want to express equality as ∃u∃v(R_{iL₃}(x, x, x, y, u, u, u, v)). Here, in an AP-reduction, the quantifiers cannot simply be disregarded, as the values of the existentially quantified variables are not constant across all models. They are, however, uniquely determined by the values of x and y in each particular model, which allows us to perform a blow-up construction similar to the one in the proof of Lemma 42.
In more detail, given a {R_{iL₃}, ≈}-formula ψ containing variables x_1, …, x_l, first note that each atomic R_{iL₃}-constraint R_{iL₃}(x_1, …, x_8) can be represented as a linear system of equations over Z₂. Since equalities x_i ≈ x_j can be written as x_i ⊕ x_j = 0, the formula ψ is equivalent to an expression of the form Ax = b where x = (x_1, …, x_l). Replacing each equality constraint by the existential formula above and bringing the result into prenex normal form, we get a formula ∃y_1 … y_p (ϕ(y_1, …, y_p, x_1, …, x_l)), which is equivalent to ψ and where ϕ is a conjunctive {R_{iL₃}}-formula. By construction any two models of ϕ that agree on x_1, …, x_l must coincide. Thus, introducing variables x_i^j for 1 ≤ i ≤ l and j ∈ J := {1, …, t} and defining ϕ′ in literally the same way as in the proof of Lemma 42, any model s of ψ yields a model s′ of ϕ′ by putting s′(x_i^j) := s(x_i) for 1 ≤ i ≤ l and j ∈ J and extending this with the unique values for y_1, …, y_p satisfying ϕ(y_1, …, y_p, x_1, …, x_l). In this way we obtain a model m′ of ϕ′ from a given solution m of ψ. Besides, if s′ is any model of ϕ′, then, as in Lemma 42, the vectors (s′(x_1^1), …, s′(x_l^1)) and (s′(x_1^1), …, s′(x_{i−1}^1), s′(x_i^j), s′(x_{i+1}^1), …, s′(x_l^1)) both satisfy ψ, and thus their difference is in the kernel of A. Since the variable x_i occurs in at least one of the atoms of ψ, the i-th column of A is non-zero, implying that s′(x_i^j) = s′(x_i^1) for all j ∈ J and all 1 ≤ i ≤ l. Thus, any model s′ ≠ m′ of ϕ′ gives a model s ≠ m of ψ by defining s(x_i) := s′(x_i^1) for all 1 ≤ i ≤ l. The presented construction is an AP-reduction with α = 1, which can be proven completely analogously to the last paragraph of the proof of Lemma 42, choosing t quadratic in the size of ψ.

MinHornDeletion-Equivalent Cases
As in Proposition 38, the need to use the conjunctive closure Γ^∧ instead of ⟨Γ⟩ causes a case distinction in the proof of the following result.
Lemma 45. If Γ is exactly dual Horn (iV ⊆ ⟨Γ⟩ ⊆ iV₂), then one of the following relations is in Γ^∧.

Proof. The co-clone ⟨Γ⟩ is equal to iV, iV₀, iV₁, or iV₂. In the case ⟨Γ⟩ = iV the relation R_iV belongs to Γ^∧ by Theorem 1.

Lemma 46. If Γ is exactly dual Horn (iV ⊆ ⟨Γ⟩ ⊆ iV₂), then NOSol(Γ) is MinHornDeletion-hard.
Proof. There are four cases to consider, namely ⟨Γ⟩ ∈ {iV, iV₀, iV₁, iV₂}. For simplicity we only present the situation where ⟨Γ⟩ = iV₁; the case ⟨Γ⟩ = iV₂ is very similar, and the other possibilities are even less complicated. At the end we shall give a few hints on how to adapt the proof in these cases.
The basic structure of the proof is as follows: we choose a suitable weak base of iV₁ consisting of an irredundant relation R₁, and identify a relation H₁ ∈ {R₁}^∧ which allows us to encode a sufficiently complicated variant of the MinOnes problem into NOSol({H₁}); by Theorem 1 and Lemma 45 the latter reduces to NOSol(Γ). According to [22, Theorem 2.14(4)], MinHornDeletion is equivalent to MinOnes(∆) for constraint languages ∆ that are dual Horn, not 0-valid, and not implicative hitting set bounded+ with any finite bound, that is, if ⟨∆⟩ ∈ {iV₁, iV₂}. The key point of the construction is to choose R₁ and H₁ in such a way that we can find a relation G₁ satisfying these conditions. The latter property will allow us to prove the AP-reductions MinHornDeletion ≡_AP MinOnes({G₁}) ≤_AP NOSol(Γ′), completing the chain.
We first check that R₁ = V₁ • χ₄ satisfies ⟨{R₁}⟩ = iV₁: namely, by construction, this relation is preserved by disjunction and by the constant operation with value 1, i.e., ⟨R₁⟩ ⊆ iV₁. This inclusion cannot be proper, since 0 ∉ R₁ (so ⟨R₁⟩ ⊄ iI₀) and x ∨ (y ∧ z) ∉ R₁ while x = (e₁ • β) ∨ (e₄ • β), y = (e₁ • β) ∨ (e₂ • β) and z = (e₁ • β) ∨ (e₃ • β) belong to V₁ • χ₄ (cf. before Theorem 2 for the notation); i.e., the generating function (x, y, z) ↦ x ∨ (y ∧ z) of the clone S₀₀ [13, Figure 2, p. 8] fails to be a polymorphism of R₁. For later use we note that when β is chosen such that the coordinates of χ₄ are ordered lexicographically (and we assume this from now on), this failure can already be observed within the first seven coordinates of R₁. Now, according to Theorem 2, the sedenary relation R₁ = V₁ • χ₄ is a weak base relation for iV₁ without duplicate coordinates, and a brief inspection shows that none of its coordinates is fictitious either. Therefore, R₁ is an irredundant weak base relation for iV₁. We define H₁ to be {(x₀, …. Since removing the bottom element 0 of a non-trivial join-semilattice with top element still yields a join-semilattice with top element, we have G₁ ∈ iV₁. With the analogous counterexample as for the relation R₁ above, we can show that (x, y, z) ↦ x ∨ (y ∧ z) is not a polymorphism of G₁ (because the non-membership is witnessed among the first seven coordinates). Thus ⟨{G₁}⟩ = iV₁; in particular G₁, and any relation conjunctively definable from it, is not 0-valid.
For each solution s of ϕ(x) there exists a solution s′ of ϕ″(x, y) with s′(y) = 1 (and s′(z) = 1). Every solution s′ of ϕ″ has s′(z) = 1 and either s′(y) = 0 or s′(y) = 1. Because every variable from x is part of one of the x_i, the assignment m₀ restricted to (x, y, z) is the only solution s′ of ϕ″ satisfying s′(y) = 0. If, on the other hand, s′(y) equals 1, then s′ restricted to the variables x satisfies ϕ(x), by the correspondence between the relations G₁ and H₁.
In the case when ⟨Γ⟩ = iV₂, the proof goes through with minor changes: R₂ = V₂ • χ₄ = R₁ ∖ {1}, so we define H₂ and G₂ like H₁ and G₁, just using R₂ and H₂ in place of R₁ and H₁. Moreover, for the reduction we shall need an additional global variable w for ϕ‴ (and ϕ′), since the encoding of the implication from Lemma 45 requires it (and forces it to zero in every model). On a side note, we observe that H₀ = V₀ • χ₃, which we can use alternatively without detouring via R₀. Given the relationship between G₂ and H₀, we do not need the global variable z in the definition of ϕ″, but we need to have it in the definition of ϕ‴, where the relation given by Lemma 45 necessitates atoms of the form (u, ….

Finding the Minimal Distance Between Solutions
In this section we study the optimization problem MinSolutionDistance. We first consider the polynomial-time cases and then the cases of higher complexity.

Polynomial-Time Cases
We show that for bijunctive constraints the problem MinSolutionDistance can be solved in polynomial time. After stating the result we present an algorithm and analyze its complexity and correctness.
Proposition 48. If Γ is a bijunctive constraint language (⟨Γ⟩ ⊆ iD₂), then MSD(Γ) is in PO.

Proof. We use the plain base of iD₂ given in [12] to see that every relation in Γ can be written as a conjunction of disjunctions of two not necessarily distinct literals. We shall treat these disjunctions as one- or two-element sets of literals when extending the algorithm of Aspvall, Plass, and Tarjan [2] to compute the minimum distance between distinct models of a bijunctive constraint formula.

Algorithm BIJUNCTIVE MSD Input: An iD₂-formula ϕ given as a collection of one- or two-element sets of literals (bijunctive clauses). Output: "≤ 1 model" or the minimal Hamming distance of any two distinct models of ϕ.

Method:
Let V be the set of variables occurring in ϕ. Let L := {v, ¬v | v ∈ V} be the set of literals. Let ū denote the literal complementary to u ∈ L.

Construct the relation R := {(ū, v), (v̄, u) | {u, v} ∈ ϕ} ∪ {(ū, u) | {u} ∈ ϕ}.
Let ≤ be the reflexive and transitive closure of R, i.e. the least preorder on L extending R. Construct the sets V₀ := {v ∈ V | v ≤ ¬v} and V₁ := {v ∈ V | ¬v ≤ v} of variables fixed to 0 and to 1, respectively. If V₀ ∩ V₁ ≠ ∅ or V₀ ∪ V₁ = V, return "≤ 1 model". Otherwise let L′ := {u ∈ L | the variable of u is not in V₀ ∪ V₁}, and let ∼ be the equivalence relation on L′ defined by u ∼ v iff u ≤ v and v ≤ u. Return min{|L| | L ∈ L′/∼} as minimal Hamming distance.

End of algorithm
Complexity: The size of L is linear in the number of variables, the reflexive closure can be computed in time linear in |L|, the transitive closure in time cubic in |L|, see [32]. The equivalence relation ∼ is the intersection of ≤ restricted to L ′ and its inverse (quadratic in |L ′ |); from it we can obtain the partition L ′ /∼ in linear time in |L ′ | ≤ |L|, including the cardinalities of the equivalence classes and their minimization. Similarly, the remaining sets from the proof (V 0 , V 1 , their intersection and union, and thus also L ′ ) can be computed with polynomial time complexity.
Correctness: The pairs in R arise from interpreting the atomic constraints in ϕ as implications. By transitivity of implication, the inequality u ≤ v for literals u, v means that every model m of ϕ satisfies the implication u → v or, equivalently, m(u) ≤ m(v). In particular, x ≤ ¬x implies m(x) = 0 and ¬x ≤ x implies m(x) = 1. Therefore V 0 can be seen to be the set of variables that have to be false in every model of ϕ, and V 1 the set of variables true in every model. If V 0 ∩ V 1 = ∅ holds then the formula ϕ is inconsistent and has no solution. If V 0 ∪ V 1 = V holds, then every variable has a unique fixed value, hence ϕ has only one solution. Otherwise the formula is consistent and not all variables are fixed, hence there are at least two models.
To determine the minimal number of variables whose values can be flipped between any two models of ϕ, it suffices to consider the literals without fixed value, i.e. those in L′. If we have u ≤ v and v ≤ u, the literals are equivalent, u ∼ v, and must have the same value in every model. This means that any two distinct models have to differ on all literals of at least one equivalence class in L′/∼. Therefore, the return value of the algorithm is a lower bound for the minimal distance.
To prove that the return value can indeed be attained, we exhibit two models m₀ ≠ m₁ of ϕ having the least cardinality of any equivalence class in L′/∼ as their Hamming distance. Let L ∈ L′/∼ be a class of minimum cardinality. Define m₀(u) := 0 and m₁(u) := 1 for all literals u ∈ L. We extend this by setting m₀(w) := m₁(w) := 0 for all literals w such that w ≤ u for some u ∈ L, and by m₀(w) := m₁(w) := 1 for all literals w such that u ≤ w for some u ∈ L. For variables v ∈ V satisfying v ≤ ¬v or ¬v ≤ v we have v ∈ V₀ ∪ V₁, and thus v ∉ L′; in other words, for [v]∼ ∈ L′/∼ the classes [v]∼ and [¬v]∼ are incomparable. Thus, so far, we have not defined m₀ and m₁ on a variable v ∈ V and on its negation ¬v at the same time. Of course, fixing a value for a negative literal ¬v implicitly means that we bind the assignment for v ∈ V to the opposite value.
It remains to fix the values of the literals in L′ that are neither related to the literals in L nor have fixed values in all models. Suppose (ū, v) ∈ R is a constraint such that the value of at least one literal has not yet been defined. There are three cases: either both literals have not yet received a value, or ū is undefined and v has been assigned the value 1 (either as a fixed value in all models, or because of being greater than a literal in L, or because of being less than the complement of a literal in L), or v is undefined and ū has been assigned the value 0 (either as a fixed value in all models, or because of being smaller than a literal in L or greater than the complement of a literal in L). All three cases can be handled by defining both models, m₀ and m₁, identically on the remaining variables: starting with a minimal literal u where m₀ and m₁ are not yet defined, we assign m₀(u) := m₁(u) := 0 and m₀(ū) := m₁(ū) := 1. This way none of the constraints is violated, and m₀ and m₁ are distinct only on variables corresponding to literals in L. Iterate this procedure until all variables (and their complements) have been assigned values. If L″ ⊆ L′ denotes the set of literals remaining after propagating the values of m₀ and m₁ on L, then the presented method can be implemented by partitioning L″ into two classes L₀ and L₁ such that L₀ ∩ {u, ū} is a singleton for every u ∈ L″ and each weakly connected component of the quasi-ordered set (L″, ≤) is a subset of either L₀ or L₁. Then set m₀ and m₁ to k on the literals belonging to L_k for k ∈ {0, 1}.
By construction, m 0 differs from m 1 only in the variables corresponding to the literals in L, so their Hamming distance is |L| as desired. Moreover, both assignments respect the order constraints in (L, ≤). As these faithfully reflect all original atomic constraints, m 0 and m 1 are indeed models of ϕ.
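The algorithm admits a direct implementation; the sketch below is ours, encodes a literal as a pair (variable, sign), and uses plain graph search for the reflexive-transitive closure rather than the cubic closure computation cited above.

```python
def bijunctive_msd(clauses, variables):
    """Minimal Hamming distance between two distinct models of a bijunctive
    formula, or the string "<= 1 model"; a literal is (var, sign), and a
    clause is a list of one or two literals."""
    lits = [(v, s) for v in variables for s in (True, False)]
    neg = lambda l: (l[0], not l[1])
    # implication edges: a clause (u or v) yields ~u -> v and ~v -> u
    succ = {l: set() for l in lits}
    for c in clauses:
        if len(c) == 1:
            succ[neg(c[0])].add(c[0])
        else:
            u, v = c
            succ[neg(u)].add(v)
            succ[neg(v)].add(u)

    def reach(start):  # reflexive-transitive closure by graph search
        seen, stack = {start}, [start]
        while stack:
            for y in succ[stack.pop()] - seen:
                seen.add(y)
                stack.append(y)
        return seen

    le = {l: reach(l) for l in lits}
    v0 = {v for v in variables if (v, False) in le[(v, True)]}  # forced to 0
    v1 = {v for v in variables if (v, True) in le[(v, False)]}  # forced to 1
    if v0 & v1 or v0 | v1 == set(variables):
        return "<= 1 model"  # inconsistent, or a unique model
    free = [l for l in lits if l[0] not in v0 | v1]
    # equivalence classes of mutually implied free literals;
    # the answer is the size of a smallest class
    done, best = set(), None
    for u in free:
        if u in done:
            continue
        cls = {w for w in free if w in le[u] and u in le[w]}
        done |= cls
        if best is None or len(cls) < best:
            best = len(cls)
    return best
```

For instance, on the formula (¬a ∨ b) ∧ (¬b ∨ a) with an additional free variable c, the smallest class is {c}, giving distance 1.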
Algorithm HORN MSD Input: A Horn formula ϕ given as a set of Horn clauses (cf. the plain base of iE 2 given in [12]). Output: "≤ 1 model" or the minimal Hamming distance of any two distinct models of ϕ.

Method:
For each variable x in ϕ, add the tautological clause (¬x ∨ x). Let U := ∅. Apply the following rules to ϕ until no more clauses and literals can be removed and no new clauses can be added; let D denote the resulting set of clauses.
Unit resolution and unit subsumption: Let ū denote the complement of a literal u. If the clause set contains a unit clause u, remove all clauses containing the literal u and remove all literals ū from the remaining clauses. Add u to the set U.
Hyper-resolution with binary implications: Resolve all negative literals of a clause simultaneously with binary implications possessing identical premises.
If U contains a literal for every variable in ϕ, return "≤ 1 model".
If ϕ contains a variable that appears neither in D nor in U , return 1 as minimal Hamming distance.
Otherwise, let V be the set of variables occurring in D, and let ∼ ⊆ V² be the relation defined by x ∼ y iff {¬x ∨ y, ¬y ∨ x} ⊆ D. Note that ∼ is an equivalence relation, since the tautological clauses ensure reflexivity and resolution of binary implications computes their transitive closure. We say that a variable z depends on variables y₁, …, y_k if D contains the clauses ¬y₁ ∨ ⋯ ∨ ¬y_k ∨ z, ¬z ∨ y₁, …, ¬z ∨ y_k and z ≁ y_i holds for all i = 1, …, k.
Return min{|X| | X ∈ V/∼, X does not contain dependent variables} as minimal Hamming distance.

End of algorithm
Complexity: The run-time of the algorithm is polynomial in the number of clauses of ϕ: Unit resolution/subsumption can be applied at most once for each variable, and hyper-resolution has to be applied at most once for each variable x and each clause ¬y₁ ∨ ⋯ ∨ ¬y_k ∨ z or ¬y₁ ∨ ⋯ ∨ ¬y_k.
Correctness: Adding resolvents and removing subsumed clauses maintains logical equivalence, therefore D ∪ U is logically equivalent to ϕ, i.e., both clause sets have the same models. We note that the sets of variables of U and of D are disjoint. The unit clauses in U are always (uniquely) satisfiable, thus D and ϕ are equisatisfiable. Therefore, if D contains the empty clause, ϕ is also unsatisfiable; otherwise D is satisfiable, e.g., by assigning 0 to every x ∈ V. In this case, if U contains a literal for every variable of ϕ, the unit clauses in U define a unique model of ϕ.
Otherwise ϕ has at least two models m₁ ≠ m₂. In the simplest case some variable x in ϕ has been left unconstrained by D and U; in this case we can pick any model of D and U and extend it to two different models of ϕ with Hamming distance 1 by setting m₁(x) = 0 and m₂(x) = 1, and setting m₁(y) = m₂(y) = 0 for any other variable y outside D and U. For the remaining situations it is sufficient to consider the models of D only, as each model m of D uniquely extends to a model of ϕ by defining m(x) = 1 for (x) ∈ U and m(x) = 0 for (¬x) ∈ U; hence the minimal Hamming distances of the models of ϕ and of D are the same.
We are thus looking for models m₁, m₂ of D such that the size of the difference set ∆(m₁, m₂) = {x | m₁(x) ≠ m₂(x)} is minimal. In fact, since the models of Horn formulas are closed under minimum, we may assume m₁ < m₂, i.e., we have m₁(x) = 0 and m₂(x) = 1 for all variables x ∈ ∆(m₁, m₂). Indeed, given two models m₂ and m₂′ of D where neither m₂ ≤ m₂′ nor m₂′ ≤ m₂, the assignment m₁ = m₂ ∧ m₂′ is also a model, and it is distinct from m₂. Since hd(m₁, m₂) ≤ hd(m₂, m₂′), the minimal Hamming distance will occur between models m₁ and m₂ satisfying m₁ < m₂.
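The closure of the models of a Horn formula under pointwise minimum, used in the argument above, is easy to verify by brute force on small examples; the clause encoding below (clauses as lists of (variable, sign) literals) is ours.

```python
from itertools import product

def models(clauses, variables):
    """All assignments satisfying every clause; a literal is (var, sign)."""
    found = []
    for bits in product((0, 1), repeat=len(variables)):
        m = dict(zip(variables, bits))
        if all(any(m[v] == (1 if pos else 0) for v, pos in c) for c in clauses):
            found.append(m)
    return found

# Horn formula (at most one positive literal per clause):
# (not a or not b or c) and (not c or a)
horn = [[("a", False), ("b", False), ("c", True)], [("c", False), ("a", True)]]
ms = models(horn, ["a", "b", "c"])

# the pointwise minimum of any two models is again a model
for m1 in ms:
    for m2 in ms:
        assert {v: min(m1[v], m2[v]) for v in m1} in ms
```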
Note the following facts regarding the equivalence relation ∼ and dependent variables.
• If x ∼ y then the two variables must have the same value in every model of D in order to satisfy the implications ¬x ∨ y and ¬y ∨ x. This means that for all models m of D and all X ∈ V/∼, we have either m(x) = 0 for all x ∈ X or m(x) = 1 for all x ∈ X.
• The dependence of variables is acyclic: If, for some l ≥ 2, for every 1 ≤ i < l we have that z i depends on variables including one, say y i , which is equivalent to z i+1 , and z l = z 1 , then there is a cycle of binary implications between the variables and thus z i ∼ y i ∼ z j for all i, j, contradicting the definition of dependence.
• If a variable z depending on y 1 , . . . , y k belongs to a difference set ∆(m 1 , m 2 ), then at least one of the y i s also has to belong to ∆(m 1 , m 2 ): m 2 (z) = 1 implies m 2 (y j ) = 1 for all j = 1, . . . , k (because of the clauses ¬z ∨ y i ), and m 1 (z) = 0 implies m 1 (y i ) = 0 for at least one i (because of the clause ¬y 1 ∨ · · · ∨ ¬y k ∨ z). Therefore ∆(m 1 , m 2 ) is the union of at least two sets in V/∼, namely the equivalence class of z and the one of y i .
• If some z 1 ∈ ∆(m 1 , m 2 ) is equivalent to a variable z ′ 1 that depends on some other variables, then we have a variable z 2 among them, which also belongs to ∆(m 1 , m 2 ). If the equivalence class of z 2 still contains a variable z ′ 2 depending on other variables, we can iterate this procedure. In this way we obtain a sequence z 1 ∼ z ′ 1 , z 2 ∼ z ′ 2 , z 3 ∼ z ′ 3 , . . . where z ′ i depends on variables including z i+1 , which is equivalent to z ′ i+1 . Because there are only finitely many variables and because of acyclicity, after a linear number of steps we must reach a variable z n ∈ ∆(m 1 , m 2 ) such that its equivalence class (being a subset of the difference set) does not contain any dependent variables.
Hence the difference between any two models cannot be smaller than the cardinality of the smallest set in V/∼ without dependent variables. It remains to show that we can indeed find two such models.
Let X be a set in V/∼ which has minimal cardinality among the sets without dependent variables, and let m 0 , m 1 be interpretations defined as follows: (1) m 0 (y) = 0 and m 1 (y) = 1 if y ∈ X; (2) m 0 (y) = 1 and m 1 (y) = 1 if y / ∈ X and (¬x ∨ y) ∈ D for some x ∈ X; (3) m 0 (y) = 0 and m 1 (y) = 0 otherwise. We have to show that m 0 and m 1 satisfy all clauses in D. Let m be any of these models. D contains two types of clauses.
Type 1: Horn clauses with a positive literal, ¬y₁ ∨ ⋯ ∨ ¬y_k ∨ z. If m(y_i) = 0 for some i, we are done. So suppose m(y_i) = 1 for all i = 1, …, k; we have to show m(z) = 1. The condition m(y_i) = 1 means that either y_i ∈ X (for m = m₁) or that there is a clause (¬x_i ∨ y_i) ∈ D for some x_i ∈ X. We distinguish the two cases z ∈ X and z ∉ X. Let z ∈ X. If z ∼ y_i for some i, we are done, for we have m(z) = m(y_i) = 1. So suppose z ≁ y_i for all i. As the elements in X, in particular z and the x_i's, are equivalent and the binary clauses are closed under resolution, D contains the clause ¬z ∨ y_i for all i. But this would mean that z is a variable depending on the y_i's, contradicting the assumption z ∈ X. Let z ∉ X, and let x ∈ X. As the elements in X are equivalent and the binary clauses are closed under resolution, D contains ¬x ∨ y_i for all i. Closure under hyper-resolution with the clause ¬y₁ ∨ ⋯ ∨ ¬y_k ∨ z means that D also contains ¬x ∨ z, whence m(z) = 1.
Type 2: Horn clauses with only negative literals, ¬y₁ ∨ ⋯ ∨ ¬y_k. If m(y_i) = 0 for some i, we are done. It remains to show that the assumption m(y_i) = 1 for all i = 1, …, k leads to a contradiction. The condition m(y_i) = 1 means that either y_i ∈ X (for m = m₁) or that there is a clause (¬x_i ∨ y_i) ∈ D for some x_i ∈ X. Let x be some particular element of X. Since the elements in X are equivalent and the binary clauses are closed under resolution, D contains the clause ¬x ∨ y_i for all i. But then a hyper-resolution step with the clause ¬y₁ ∨ ⋯ ∨ ¬y_k would yield the unit clause ¬x, which by construction does not occur in D. Therefore at least one y_i is neither in X nor part of a clause ¬x ∨ y_i with x ∈ X, i.e., m(y_i) = 0.
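For sanity-checking the algorithm on small inputs, a brute-force reference computation of the minimal distance between distinct models (ours, exponential in the number of variables) can be used.

```python
from itertools import combinations, product

def msd_bruteforce(clauses, variables):
    """Minimal Hamming distance between two distinct models, or None if
    the formula has fewer than two models; a literal is (var, sign)."""
    ms = []
    for bits in product((0, 1), repeat=len(variables)):
        m = dict(zip(variables, bits))
        if all(any(m[v] == (1 if pos else 0) for v, pos in c) for c in clauses):
            ms.append(m)
    if len(ms) < 2:
        return None
    return min(sum(m1[v] != m2[v] for v in variables)
               for m1, m2 in combinations(ms, 2))
```

For the Horn formula x ≡ y (clauses ¬x ∨ y and ¬y ∨ x) the distance is 2, and it drops to 1 once an unconstrained variable is added, matching the discussion above.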

Two Solution Satisfiability
In this section we study the feasibility problem of MSD(Γ) which is, given a Γ-formula ϕ, to decide if ϕ has two distinct solutions.
Problem: TwoSolutionSAT(Γ) Input: A conjunctive formula ϕ over the relations from the constraint language Γ. Question: Are there two satisfying assignments m ≠ m′ of ϕ?
A priori it is not clear that the tractability of TwoSolutionSAT is fully characterized by co-clones. The problem is that the implementation of relations of some language Γ by another language Γ′ might not be parsimonious, that is, one solution to a constraint might be blown up into several solutions of the implementation. Fortunately, we can still determine the tractability frontier for TwoSolutionSAT by combining the corresponding results for SAT and AnotherSAT.
Lemma 50. Let Γ be a constraint language for which SAT(Γ) is NP-hard. Then TwoSolutionSAT(Γ) is NP-hard.

Proof. Since SAT(Γ) is NP-hard, there must be a relation R in Γ having more than one tuple, because every relation containing only one tuple is at the same time Horn, dual Horn, bijunctive, and affine. Given an instance ϕ of SAT(Γ), construct ϕ′ as ϕ ∧ R(y₁, …, y_ℓ), where ℓ is the arity of R and y₁, …, y_ℓ are new variables not appearing in ϕ. Obviously, ϕ has a solution if and only if ϕ′ has at least two solutions. Hence we have proved SAT(Γ) ≤_m TwoSolutionSAT(Γ).
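The padding construction from this proof can be illustrated by counting models by brute force; the relation-as-set-of-tuples representation and all names are ours, purely for illustration.

```python
from itertools import product

def count_models(constraints, variables):
    """constraints: list of (relation, args) with relation a set of 0/1 tuples."""
    total = 0
    for bits in product((0, 1), repeat=len(variables)):
        m = dict(zip(variables, bits))
        if all(tuple(m[v] for v in args) in rel for rel, args in constraints):
            total += 1
    return total

OR2 = {(0, 1), (1, 0), (1, 1)}        # some relation with more than one tuple
phi = [(OR2, ("x", "y"))]             # a SAT instance
phi2 = phi + [(OR2, ("u", "v"))]      # padded with fresh variables u, v

# phi is satisfiable iff the padded formula has at least two solutions
assert count_models(phi, ["x", "y"]) == 3
assert count_models(phi2, ["x", "y", "u", "v"]) == 9
```

Every model of ϕ is multiplied by the number of tuples of R, so satisfiability of ϕ translates into the existence of at least two models of ϕ′.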
Lemma 51. Let Γ be a constraint language for which AnotherSAT(Γ) is NP-hard. Then the problem TwoSolutionSAT(Γ) is NP-hard.
Proof. Let a Γ-formula ϕ and a satisfying assignment m be an instance of AnotherSAT(Γ). Then ϕ has a solution other than m if and only if it has two distinct solutions.
Lemma 52. Let Γ be a constraint language for which both SAT(Γ) and AnotherSAT(Γ) are in P. Then TwoSolutionSAT(Γ) is also in P.
Proof. Let ϕ be an instance of TwoSolutionSAT(Γ). All polynomial-time decidable cases of SAT(Γ) are constructive, i.e., whenever the problem is polynomial-time decidable, there exists a polynomial-time algorithm computing a satisfying assignment provided one exists. If ϕ is not satisfiable, we reject the instance. Otherwise, we can compute in polynomial time a satisfying assignment m of ϕ. Now use the algorithm for AnotherSAT(Γ) on the instance (ϕ, m) to decide if there is a second solution of ϕ.
Proposition 54. Let Γ be a constraint language for which TwoSolutionSAT(Γ) is in P. Then there is a polynomial-time n-approximation algorithm for MSD(Γ), where n is the number of variables of the Γ-formula on input.
Proof. Since TwoSolutionSAT(Γ) is in P, both SAT(Γ) and AnotherSAT(Γ) must be in P by Corollary 53. Since SAT(Γ) is in P, we can compute a model m of the input ϕ in polynomial time if it exists. Now we check the AnotherSAT(Γ)-instance (ϕ, m). If it has a solution m′ ≠ m, it is also computable in polynomial time, and we return (m, m′). If we fail somewhere in this process, then the MSD(Γ)-instance ϕ does not have feasible solutions; otherwise, hd(m, m′) ≤ n ≤ n · OPT(ϕ).
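The trivial n-approximation can be sketched as follows, with exhaustive search standing in for the polynomial-time SAT and AnotherSAT procedures (the encoding and all names are ours).

```python
from itertools import product

def msd_n_approx(clauses, variables):
    """Return some pair of distinct models, or None if fewer than two exist.
    Exhaustive search stands in for the polynomial SAT/AnotherSAT oracles."""
    first = None
    for bits in product((0, 1), repeat=len(variables)):
        m = dict(zip(variables, bits))
        if all(any(m[v] == (1 if pos else 0) for v, pos in c) for c in clauses):
            if first is None:
                first = m            # the model found by the SAT oracle
            else:
                return first, m      # a second model, as from AnotherSAT
    return None

# (x or y): any returned pair is an n-approximation, since hd <= n <= n * OPT
pair = msd_n_approx([[("x", True), ("y", True)]], ["x", "y"])
assert pair is not None
```

Since OPT ≥ 1 whenever two models exist, any returned pair is within a factor n of the optimum.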

Concluding Remarks
The problems investigated in this paper are quite natural. In the space of bit-vectors we search for a solution of a formula that is closest to a given point, or for a solution next to a given solution, or for two solutions witnessing the smallest Hamming distance between any two solutions. Our results describe the complexity of exploring the solution space for arbitrary families of Boolean relations. Moreover, our problems generalize problems familiar from the literature: MinOnes, NearestCodeword, and DistanceSAT are instances of our NearestSolution, while MinDistance is the same as our problem MinSolutionDistance when restricting the latter to affine relations.
To prove the results, we first had to extend the notion of AP-reduction. The optimization problems considered in the literature have the property that each instance has at least one feasible solution. This is not the case when looking for nearest solutions regarding a given solution or a prescribed Boolean tuple, as a formula may have just a single solution or no solution at all. Therefore we had to refine the notion of AP-reductions such that it correctly handles instances without feasible solutions.
The complexity of NearestSolution can be classified by the usual approach: We first show that for each constraint language the complexity of the problem does not change when admitting existential quantifiers and equality, and then check all finitely related clones according to Post's lattice. This approach does not work for the problems NearestOtherSolution and MinSolutionDistance: It does not seem to be possible to show a priori that the complexity remains unaffected under such language extensions. In principle the complexity of a problem might well differ for two constraint languages Γ₁ and Γ₂ that represent the same co-clone (⟨Γ₁⟩ = ⟨Γ₂⟩) but that differ with respect to partial polymorphisms ((Γ₁ ∪ {≈})^∧ ≠ (Γ₂ ∪ {≈})^∧). Theorems 4 and 5 finally show that this is not the case, but we learn this only a posteriori. Our method of proof fundamentally relies on irredundant weak bases, which seem to be the perfect fit for such a situation: a priori, compatibility with existential quantification is not required, but it follows once the proof succeeds using weak bases alone.

Figure 4 compares the complexity classifications of the three problems. Regarding NearestSolution and NearestOtherSolution, knowing that an assignment is a solution apparently helps in finding a solution nearby. For expressive constraint languages it is NP-complete to decide whether a feasible solution exists at all; NearestSolution requires the existence of at least one satisfying assignment, while the other two problems need even two. Kann proved in [21] that MinOnes(Γ) is NPOPB-complete for ⟨Γ⟩ = BR, where NPOPB is the class of NPO problems with a polynomially bounded objective function. This result implies that NearestSolution(Γ) is NPOPB-complete for ⟨Γ⟩ = BR as well. It is unclear whether this result also holds for ⟨Γ⟩ = iN₂.
It may be possible to find a suitable constraint language Γ′ satisfying ⟨Γ′⟩ = BR such that MinOnes(Γ′) reduces to NearestOtherSolution(Γ) for iI₀ ⊆ ⟨Γ⟩ or iI₁ ⊆ ⟨Γ⟩, thus proving that NOSol(Γ) is NPOPB-complete in these cases. Likewise, the NPOPB-hardness of MSD(Γ) for iN₂ ⊆ ⟨Γ⟩, iI₀ ⊆ ⟨Γ⟩, or iI₁ ⊆ ⟨Γ⟩ remains open for the time being.