Abstract
We investigate the complexity of three optimization problems in Boolean propositional logic related to information theory: Given a conjunctive formula over a set of relations, find a satisfying assignment with minimal Hamming distance to a given assignment that satisfies the formula (NearestOtherSolution, NOSol) or that does not need to satisfy it (NearestSolution, NSol). The third problem asks for two satisfying assignments with a minimal Hamming distance among all such assignments (MinSolutionDistance, MSD). For all three problems we give complete classifications with respect to the relations admitted in the formula. We give polynomial-time algorithms for several classes of constraint languages. For all other cases we prove hardness or completeness regarding APX, poly-APX, or equivalence to well-known hard optimization problems.
Introduction
We investigate the solution spaces of Boolean constraint satisfaction problems built from atomic constraints by means of conjunction and variable identification. We study three minimization problems in connection with Hamming distance: Given an instance of a constraint satisfaction problem in the form of a generalized conjunctive formula over a set of atomic constraints, the first problem asks to find a satisfying assignment with minimal Hamming distance to a given assignment (NearestSolution, NSol). Note that for this problem we assume neither that the given assignment satisfies the formula nor that the solution is different from the assignment. The second problem is similar to the first one, but this time the given assignment has to satisfy the formula and we look for another solution with minimal Hamming distance (NearestOtherSolution, NOSol). The third problem is to find two satisfying assignments with minimal Hamming distance among all satisfying assignments (MinSolutionDistance, MSD). Note that the dual problem MaxHammingDistance has been studied in [14].
The NSol problem appears in several guises throughout the literature. E.g., a common problem in Artificial Intelligence is to find solutions of constraints close to an initial configuration; our problem is an abstraction of this setting for the Boolean domain. Bailleux and Marquis [4] describe such applications in detail and introduce the decision problem DistanceSAT: Given a propositional formula φ, a partial interpretation I, and a bound k, is there a satisfying assignment differing from I in no more than k variables? It is straightforward to show that DistanceSAT corresponds to the decision variant of our problem with existential quantification (called NSol\(_{\text {pp}}^{\mathrm {d}}\) later on). While [4] investigates the complexity of DistanceSAT for a few relevant classes of formulas and empirically evaluates two algorithms, we analyze the decision and the optimization problem for arbitrary semantic restrictions on the formulas.
Hamming distance also plays an important role in belief revision. The result of revising/updating a formula φ by another formula ψ is characterized by the set of models of ψ that are closest to the models of φ. Dalal [15] selects the models of ψ having a minimal Hamming distance to models of φ to be the models that result from the change.
As is common, we analyze the complexity of our optimization problems modulo a parameter that specifies the atomic constraints allowed to occur in the constraint satisfaction problem. We give a complete classification of the approximation complexity with respect to this parameterization. It turns out that our problems can either be solved in polynomial time, or they are complete for a well-known optimization class, or else they are equivalent to well-known hard optimization problems.
Our study can be understood as a continuation of the minimization problems investigated by Khanna et al. in [22], especially that of MinOnes. The MinOnes optimization problem asks for a solution of a constraint satisfaction problem with minimal Hamming weight, i.e., minimal Hamming distance to the all-zero vector. Our work generalizes these results by allowing the given vector to be different from zero.
Our work can also be seen as a generalization of questions in coding theory. In fact, our problem MSD restricted to affine relations is the well-known problem MinDistance of computing the minimum distance of a linear code. This quantity is of central importance in coding theory, because it determines the number of errors that the code can detect and correct. Moreover, our problem NSol restricted to affine relations is the problem NearestCodeword of finding the nearest codeword to a given word, which is the basic operation when decoding messages received through a noisy channel. Thus our work can be seen as a generalization of these well-known problems from affine to general relations.
In the case of NearestSolution we are able to apply methods from clone theory, even though the problem turns out to be more intricate than pure satisfiability. The other two problems, however, cannot be shown to be compatible with existential quantification easily, which makes classical clone theory inapplicable. Therefore we have to resort to weak coclones that require only closure under conjunction and equality. In this connection, we apply the theory developed in [28, 29] as well as the minimal weak bases of Boolean coclones from [23].
This paper is structured as follows. Section 2 recalls basic definitions and notions. Section 3 introduces the trilogy of optimization problems studied in this paper, namely Nearest Solution (denoted by NSol), Nearest Other Solution (denoted by NOSol), and Minimum Solution Distance (denoted by MSD), as well as their decision versions. It also states our three main results, i.e., a complete classification of complexity for these optimization problems, depicted in Figs. 1, 2, and 3. Section 4 investigates the (non)applicability of clone theory to our problems. It also provides a duality result for the constraint languages used as parameters. Section 5 contains the proofs of complexity classification results for NearestSolution, Section 6 for NearestOtherSolution, and Section 7 for MinSolutionDistance. Finally, the concluding remarks in Section 8 compare our theorems to previously existing similar results and put our results into perspective.
Preliminaries
Boolean Relations and Relational Clones
An n-ary Boolean relation R is a subset of {0,1}^{n}; its elements (b_{1},…,b_{n}) are also written as b_{1}⋯b_{n}. Let V be a set of variables. An atomic constraint, or an atom, is an expression R(x), where R is an n-ary relation and x is an n-tuple of variables from V. Let Γ be a nonempty finite set of Boolean relations, also called a constraint language. A (conjunctive) Γ-formula is a finite conjunction of atoms R_{1}(x_{1}) ∧⋯ ∧ R_{k}(x_{k}), where the R_{i} are relations from Γ and the x_{i} are variable tuples of suitable arity. For technical reasons in connection with reductions we also allow empty conjunctions (k = 0) here. Such formulas elegantly take care of certain marginal cases at the cost of adding only one additional trivial problem instance.
An assignment is a mapping m: V →{0,1} assigning a Boolean value m(x) to each variable x ∈ V. In a given context we can assume V to be finite, by restricting it e.g. to the variables occurring in a formula. If we impose an arbitrary but fixed order on the variables, say x_{1},…,x_{n}, then the assignments can be identified with elements from {0,1}^{n}. The i-th component of a tuple m ∈{0,1}^{n} is denoted by m[i] and corresponds to the value of the i-th variable, i.e., m[i] = m(x_{i}). The Hamming weight hw(m) = |{i | m[i] = 1}| of m is the number of 1s in the tuple m. The Hamming distance hd(m, m^{′}) = |{i | m[i] ≠ m^{′}[i]}| of m and m^{′} is the number of coordinates on which the tuples disagree. The complement \(\overline {m}\) of a tuple m is its pointwise complement, \(\overline {m}[i] = 1 - m[i]\).
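To make these definitions concrete, the following sketch (purely illustrative; the function names are ours, not part of the formal development) computes Hamming weight, Hamming distance, and the pointwise complement for assignments represented as 0/1 tuples.

```python
def hw(m):
    """Hamming weight: the number of coordinates of m that equal 1."""
    return sum(m)

def hd(m, m2):
    """Hamming distance: the number of coordinates on which m and m2 disagree."""
    return sum(a != b for a, b in zip(m, m2))

def complement(m):
    """Pointwise complement of a 0/1 tuple."""
    return tuple(1 - a for a in m)
```

For instance, hw((1, 0, 1, 1)) is 3, and hd of (1, 0, 1) and (0, 0, 1) is 1, matching the set-builder definitions above.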
An assignment m satisfies a constraint R(x_{1},…,x_{n}) if (m(x_{1}),…,m(x_{n})) ∈ R holds. It satisfies the formula φ if it satisfies all its atoms; m is said to be a model or solution of φ in this case. We use [φ] to denote the set of models of φ. For a term t, [t] is the set of assignments for which t evaluates to 1. Note that [φ] and [t] represent Boolean relations. If the variables of φ are not explicitly enumerated in parentheses as parameters, they are implicitly considered to be ordered lexicographically. In sets of relations represented this way we usually omit the brackets. A literal is a variable v, or its negation ¬v. Assignments are extended to literals by defining m(¬v) = 1 − m(v).
Table 1 defines Boolean functions and relations needed later on, in particular exclusive or [x ⊕ y], not-all-equal nae^{3}, k-ary disjunction or^{k}, and k-ary negated conjunction nand^{k}.
Throughout the text we refer to different types of Boolean constraint relations following Schaefer’s terminology [27] (see also the monograph [11] and the survey [9]). A Boolean relation R is (1) 1-valid if 1⋯1 ∈ R and 0-valid if 0⋯0 ∈ R, (2) Horn (dual Horn) if R can be represented by a formula in conjunctive normal form (CNF) with at most one unnegated (negated) variable per clause, (3) monotone if it is both Horn and dual Horn, (4) bijunctive if it can be represented by a CNF formula with at most two literals per clause, (5) affine if it can be represented by a system of affine equations Ax = b over \(\mathbb {Z}_{2}\), (6) complementive if for each m ∈ R also \(\overline {m} \in R\), (7) implicative hitting set-bounded+ with bound k (denoted by k-IHSB^{+}) if R can be represented by a CNF formula with clauses of the form (x_{1} ∨⋯ ∨ x_{k}), (¬x ∨ y), x, and ¬x, (8) implicative hitting set-bounded− with bound k (denoted by k-IHSB^{−}) if R can be represented by a CNF formula with clauses of the form (¬x_{1} ∨⋯ ∨¬x_{k}), (¬x ∨ y), x, and ¬x. A set Γ of Boolean relations is called 0-valid (1-valid, Horn, dual Horn, monotone, affine, bijunctive, complementive, k-IHSB^{+}, k-IHSB^{−}) if every relation in Γ has the respective property.
A formula constructed from atoms by conjunction, variable identification, and existential quantification is called a primitive positive formula (pp-formula). If φ is such a formula, we write again [φ] for its set of models, i.e., the Boolean relation defined by φ. As above the coordinates of this relation are understood to be the variables of φ in lexicographic order, unless otherwise stated by explicit enumeration. We denote by 〈Γ〉 the set of all relations that can be expressed using relations from Γ ∪{≈}, conjunction, variable identification (and permutation), cylindrification, and existential quantification, i.e., the set of all relations that are primitive positively definable from Γ and equality. The set 〈Γ〉 is called the coclone generated by Γ. A base of a coclone \(\mathcal {B}\) is a set of relations Γ such that \(\langle {\Gamma }\rangle = \mathcal {B}\), i.e., just a generating set with regard to primitive positive definability including equality. Note that traditionally (e.g. [18]), the notion of base also involves minimality with respect to set inclusion. Our use of the term base is in accordance with [10], where finite bases for all Boolean coclones have been determined. Some of these are listed in Table 2. The sets of relations that are 0-valid, 1-valid, complementive, Horn, dual Horn, affine, bijunctive, 2-affine (both bijunctive and affine), monotone, k-IHSB^{+}, and k-IHSB^{−} each form a coclone denoted by iI_{0}, iI_{1}, iN_{2}, iE_{2}, iV_{2}, iL_{2}, iD_{2}, iD_{1}, iM_{2}, \(\text {iS}_{00}^{k}\), and \(\text {iS}_{10}^{k}\), respectively; see Table 3.
We will also use a weaker closure than 〈Γ〉, called conjunctive closure and denoted by 〈Γ〉_{∧}, where the constraint language Γ is closed under conjunctive definitions, but not under existential quantification or addition of explicit equality constraints.
Sets of relations of the form W = 〈W ∪{≈}〉_{∧} are called weak systems and are in a one-to-one correspondence with so-called strong partial clones [26]. It is a well-known consequence of the Galois theory developed in [26] that for every coclone 〈Γ^{′}〉 whose corresponding clone is finitely generated (this presents no restriction in the Boolean case), there is a largest partial clone whose total part coincides with that clone, cf. [24, Theorem 20.7.2] or see [28, Theorems 4.6, 4.7, 4.11] for a proof in the Boolean case. This largest partial clone is even a strong partial clone, and hence there is a least weak system W under inclusion such that 〈W〉 = 〈Γ^{′}〉. Any finite weak generating set Γ of this weak system W, i.e., W = 〈Γ∪{≈}〉_{∧}, is called a weak base of 〈Γ^{′}〉, see [28, Definition 4.2]. Such a set Γ, in particular, is a finite base of the coclone 〈Γ^{′}〉. Finally, to get from the closure operator 〈Γ∪{≈}〉_{∧} (which is hard to handle in the context of our problems) to 〈Γ〉_{∧} (which is easy to handle), one needs the notion of irredundancy. A relation R is called irredundant if it has neither duplicate nor fictitious coordinates. It can be observed from the proofs of Proposition 5.2 and Corollary 5.6 in [28], or from [29, Proposition 3.11], that R ∈〈Γ∪{≈}〉_{∧} implies R ∈〈Γ〉_{∧} for any irredundant relation R. Following Schnoor [29, p. 30], we call a weak base of 〈Γ^{′}〉 consisting exclusively of irredundant relations an irredundant weak base. Thus, if Γ is an irredundant weak base of 〈Γ^{′}〉, then the minimality of the weak system W = 〈Γ∪{≈}〉_{∧} implies that Γ ⊆ W ⊆〈Γ^{′}∪{≈}〉_{∧} (cf. [28, Corollary 4.3]), and thus Γ ⊆〈Γ^{′}〉_{∧} because of irredundancy. Hence, we obtain the following useful tool.
Theorem 1 (Schnoor [29, Corollary 3.12])
If Γ is an irredundant weak base of a coclone iC, e.g. a minimal weak base of iC, then Γ ⊆〈Γ^{′}〉_{∧} holds for any base Γ^{′} of iC.
According to Lagerkvist [23], a minimal weak base is an irredundant weak base satisfying an additional minimality property that ensures small cardinality. The utility of Theorem 1 comes in particular from the fact that Lagerkvist determined minimal weak bases for all finitely generated Boolean coclones in [23]. For our purposes we note that each of the coclones iV, iV_{0}, iV_{1}, iV_{2}, iN, iN_{2}, and iI is generated by a minimal weak base consisting of a single relation (Table 4).
Another source of weak base relations without duplicate coordinates comes from the following construction: let χ_{n} be the 2^{n}-ary relation given by the value tables (in some chosen enumeration) of the n distinct n-ary projection functions. More formally, let β: 2^{n} →{0,1}^{n} be the reader’s preferred bijection between the index set 2^{n} = {0,…,2^{n} − 1} and the set of all arguments of an n-ary Boolean function; often lexicographic enumeration is chosen here for presentational purposes, but the order of enumeration of the n-tuples does not matter as long as it remains fixed. Then χ_{n} = {e_{i} ∘ β | 1 ≤ i ≤ n}, where e_{i}: {0,1}^{n} →{0,1} denotes the projection function onto the i-th coordinate. Let C be a clone with corresponding coclone iC. Since iC is closed with respect to intersection of relations of identical arity, for any k-ary relation R there is a least k-ary relation in iC containing R, scilicet \({C}\circ \langle {R}\rangle :=\bigcap \{R^{\prime }\in \mathrm {i}\mathit {C} \mid R^{\prime }\supseteq R,\ R^{\prime }~k\text {-ary}\}\). Traditionally, e.g. [24, Sect. 2.8, p. 134] or [25, Definition 1.1.16, p. 48], this relation is denoted by Γ_{C}(R), but here we have chosen a different notation to avoid confusion with constraint languages. It is well known, e.g. [25, Satz 1.1.19(i), p. 50], and easy to see that C ∘〈R〉 is completely determined by the ℓ-ary part of C whenever ℓ ≥ |R|: given any enumeration of ∅ ≠ R = {r_{1},…,r_{ℓ}} (for technical reasons we have to exclude the case ℓ = 0 in this presentation because we do not consider clones with nullary operations here) we have C ∘〈R〉 = {f ∘ (r_{1},…,r_{ℓ}) | f ∈ C, f ℓ-ary}, where f ∘ (r_{1},…,r_{ℓ}) denotes the row-wise application of f to a matrix whose columns are formed by the tuples r_{1},…,r_{ℓ}. Relations of the form C ∘〈χ_{n}〉 represent the n-ary part of the clone C as a 2^{n}-ary relation and are called the n-th graphic of C (cf. e.g. [24, p. 133 and Theorem 2.8.1(b)]).
Indeed, the previous characterization of C ∘〈χ_{n}〉 yields C ∘〈χ_{n}〉 = {f ∘ (e_{1} ∘ β,…,e_{n} ∘ β) | f ∈ C, f n-ary} = {f ∘ (e_{1},…,e_{n}) ∘ β | f ∈ C, f n-ary} = {f ∘ β | f ∈ C, f n-ary}. With the help of this description of C ∘〈χ_{n}〉 and standard clone theoretic manipulations, one can easily verify the following result, identifying possible candidates for irredundant singleton weak bases.
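The construction of χ_{n} is easy to carry out explicitly. The following sketch (illustrative only; it fixes β to be the lexicographic enumeration, one admissible choice among many) collects the value tables of the n projection functions as 2^{n}-ary tuples.

```python
from itertools import product

def chi(n):
    """The 2^n-ary relation chi_n: the value tables of the n n-ary
    projection functions, with arguments enumerated lexicographically
    (our concrete choice of the bijection beta)."""
    args = list(product((0, 1), repeat=n))  # lexicographic enumeration of {0,1}^n
    # The i-th member is the value table of the projection e_i onto coordinate i.
    return {tuple(a[i] for a in args) for i in range(n)}
```

For n = 2 and lexicographic β (arguments 00, 01, 10, 11), this yields the two tuples (0,0,1,1) and (0,1,0,1), the value tables of the first and second binary projection.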
Theorem 2 ([28, Theorem 4.11])
Let C be a clone and R = C ∘〈{r_{1},…,r_{n}}〉 with n ≥ 1; then C ∘〈χ_{n}〉 gives a singleton weak base of 〈{R}〉 without duplicate coordinates.
Approximability, Reductions, and Completeness
We assume that the reader has a basic knowledge of approximation algorithms and complexity theory; we recall the notions needed here and refer to the monographs [3, 11] for details.
A combinatorial optimization problem\(\mathcal {P}\) is a quadruple (I,sol,obj,goal), where:

I is the set of admissible instances of \(\mathcal {P}\).

sol(x) denotes the set of feasible solutions for every instance x ∈ I.

obj(x, y) denotes the nonnegative integer measure of y for every instance x ∈ I and every feasible solution y ∈sol(x); obj is also called objective function.

goal ∈{min,max} denotes the optimization goal for \(\mathcal {P}\).
A combinatorial optimization problem is said to be an NP-optimization problem (NPO problem) if

the instances and solutions are recognizable in polynomial time,

the size of the solutions in sol(x) is polynomially bounded in the size of x, and

the objective function obj is computable in polynomial time.
The optimal value of the objective function for the solutions of an instance x is denoted by OPT(x). In our case the optimization goal will always be minimization, i.e., OPT(x) will be the minimum.
Given an instance x ∈ I with a feasible solution y ∈sol(x) and a real number r ≥ 1, we say that y is r-approximate if obj(x, y) ≤ r ⋅OPT(x) in the case of minimization, or obj(x, y) ≥OPT(x)/r in the case of maximization.
Let A be an algorithm that for any instance x of \(\mathcal {P}\) such that sol(x)≠∅ returns a feasible solution A(x) ∈sol(x). Given an arbitrary function \(r\colon \mathbb {N} \to [1,\infty )\), we say that A is an r(n)-approximate algorithm for \(\mathcal {P}\) if for any instance x ∈ I having feasible solutions the algorithm returns an r(|x|)-approximate solution, where |x| is the size of x. If an NPO problem \(\mathcal {P}\) admits an r(n)-approximate polynomial-time algorithm, we say that \(\mathcal {P}\) is approximable within r(n).
An NPO problem \(\mathcal {P}\) is in the class PO if the optimum is computable in polynomial time (i.e. if \(\mathcal {P}\) admits a 1-approximate polynomial-time algorithm). \(\mathcal {P}\) is in the class APX (poly-APX) if it is approximable within a constant (polynomial) function in the size of the instance x. NPO is the class of all NPO problems and NPO-PB is the class of all NPO problems where the objective function is polynomially bounded. The following inclusions hold for these approximation complexity classes: PO ⊆ APX ⊆ poly-APX ⊆ NPO. All inclusions are strict unless P = NP.
For reductions among decision problems we use the polynomial-time many-one reduction denoted by ≤_{m}. Many-one equivalence between decision problems is denoted by ≡_{m}. For reductions among optimization problems we use approximation preserving reductions, also called AP-reductions, denoted by ≤_{AP}. AP-equivalence between optimization problems is denoted by ≡_{AP}.
We say that an optimization problem \(\mathcal {P}\) AP-reduces to another optimization problem \(\mathcal {Q}\), denoted \(\mathcal {P} \le _{\text {AP}} \mathcal {Q}\), if there are two polynomial-time computable functions f and g and a real constant α ≥ 1 such that for all r > 1 and all \(\mathcal {P}\)-instances x the following conditions hold.

f(x) is a \(\mathcal {Q}\)-instance or the generic unsolvable instance ⊥ (which is not part of \(\mathcal {Q}\)).

If x admits feasible solutions, then f(x) is different from ⊥ and also admits feasible solutions.

For any feasible solution y^{′} of f(x), g(x, y^{′}) is a feasible solution of x.

If y^{′} is an r-approximate solution of the \(\mathcal {Q}\)-instance f(x), then g(x, y^{′}) is a (1 + (r − 1)α + o(1))-approximate solution of the \(\mathcal {P}\)-instance x, where the o(1) is with respect to the size of x.
Our definition of AP-reducibility slightly extends the one in [3] by introducing a generic unsolvable instance ⊥. This extension allows us to reduce problems with unsolvable instances to problems without them, as long as the unsolvable instances can be detected in polynomial time, by making f map the unsolvable instances to ⊥. This practice has been implicit in previous work, e.g. [22].
We also need a slightly nonstandard variation of AP-reductions. We say that an optimization problem \(\mathcal {P}\) AP-Turing-reduces to another optimization problem \(\mathcal {Q}\) if there is a polynomial-time oracle algorithm A and a constant α ≥ 1 such that for all r > 1, on any input x for \(\mathcal {P}\),

if all oracle calls with a \(\mathcal {Q}\)-instance x^{′} are answered with a feasible \(\mathcal {Q}\)-solution y for x^{′}, then A outputs a feasible \(\mathcal {P}\)-solution for x, and

if for every call the oracle answers with an r-approximate solution, then A computes a (1 + (r − 1)α + o(1))-approximate solution for the \(\mathcal {P}\)-instance x.
It is straightforward to check that AP-Turing-reductions are transitive. Moreover, if \(\mathcal {P}\) AP-Turing-reduces to \(\mathcal {Q}\) with constant α and \(\mathcal {Q}\) has an r(n)-approximation algorithm, then there is an α⋅r(n)-approximation algorithm for \(\mathcal {P}\).
We will relate our problems to well-known optimization problems, by calling the problem \(\mathcal {P}\) under investigation \(\mathcal {Q}\)-complete if \(\mathcal {P}\equiv _{\text {AP}} \mathcal {Q}\). This notion of completeness is stricter than the one in [22], since the latter relies on A-reductions. For \(\mathcal {Q}\), we will consider the following optimization problems analyzed in [22].
 Problem :

MinOnes(Γ)
 Input: :

A conjunctive formula φ over relations from Γ.
 Solution: :

An assignment m satisfying φ.
 Objective: :

Minimum Hamming weight hw(m).
 Problem :

WeightedMinOnes(Γ)
 Input: :

A conjunctive formula φ over relations from Γ and a weight function \(w\colon V \to \mathbb {N}\) assigning nonnegative integer weights to the variables of φ.
 Solution: :

An assignment m satisfying φ.
 Objective: :

Minimum value \(\sum _{x: m(x)= 1}w(x)\).
We now define some well-studied problems to which we will relate our problems. Note that these problems do not depend on any parameter.
 Problem :

NearestCodeword
 Input: :

A matrix \(A \in \mathbb {Z}_{2}^{k\times l}\) and a vector \(m\in {\mathbb {Z}_{2}^{l}}\).
 Solution: :

A vector \(x\in {\mathbb {Z}_{2}^{k}}\).
 Objective: :

Minimum Hamming distance hd(xA, m).
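For tiny instances the NearestCodeword objective can be evaluated by exhaustive search over all 2^{k} message vectors; the following sketch is illustrative only (the problem is hard to approximate, so no such method scales) and its function name is ours.

```python
from itertools import product

def nearest_codeword(A, m):
    """Brute-force NearestCodeword: A is a k x l matrix over Z_2 (list of
    rows), m a length-l 0/1 tuple.  Returns (x, d) where x in Z_2^k
    minimizes d = hd(xA, m).  Exponential in k -- illustration only."""
    k, l = len(A), len(A[0])

    def encode(x):  # the row vector x times A over Z_2
        return tuple(sum(x[i] * A[i][j] for i in range(k)) % 2 for j in range(l))

    def dist(x):
        return sum(a != b for a, b in zip(encode(x), m))

    best = min(product((0, 1), repeat=k), key=dist)
    return best, dist(best)
```

E.g. for the length-3 repetition code (A with the single row (1,1,1)) and the received word m = (1,1,0), the nearest codeword is 111 at distance 1.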
 Problem :

MinDistance
 Input: :

A matrix \(A\in \mathbb {Z}_{2}^{k\times l}\).
 Solution: :

A nonzero vector \(x \in {\mathbb {Z}_{2}^{l}}\) with Ax = 0.
 Objective: :

Minimum Hamming weight hw(x).
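Again for illustration only (and with a function name of our choosing), the MinDistance objective can be evaluated by enumerating the kernel of A over Z_2; this exponential search is in no way representative of the problem's approximation hardness.

```python
from itertools import product

def min_distance(A):
    """Brute-force MinDistance: minimum Hamming weight of a nonzero
    x in Z_2^l with Ax = 0, where A is a k x l matrix given as rows.
    Returns None if the kernel is trivial.  Exponential in l."""
    k, l = len(A), len(A[0])
    weights = [sum(x) for x in product((0, 1), repeat=l)
               if any(x)
               and all(sum(A[i][j] * x[j] for j in range(l)) % 2 == 0
                       for i in range(k))]
    return min(weights) if weights else None
```

For the parity-check matrix of the length-3 repetition code, with rows (1,1,0) and (0,1,1), the only nonzero kernel element is 111, so the minimum distance is 3.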
 Problem :

MinHornDeletion
 Input: :

A conjunctive formula φ over relations from {x ∨ y ∨¬z, x,¬x}.
 Solution: :

An assignment m to the variables of φ.
 Objective: :

Minimum number of unsatisfied conjuncts of φ.
NearestCodeword, MinDistance, and MinHornDeletion are known to be NP-hard to approximate within a factor \(2^{\Omega (\log ^{1-\varepsilon }(n))}\) for every ε > 0 [1, 16, 22]. Thus if a problem \(\mathcal {P}\) is equivalent to any of these problems, it follows that \(\mathcal {P} \notin \text {APX}\) unless P = NP.
Satisfiability
We also use the classic problem SAT(Γ) asking for the satisfiability of a given conjunctive formula over a constraint language Γ. Schaefer [27] completely classified its complexity. SAT(Γ) is polynomial-time decidable if Γ is 0-valid (Γ ⊆iI_{0}), 1-valid (Γ ⊆iI_{1}), Horn (Γ ⊆iE_{2}), dual Horn (Γ ⊆iV_{2}), bijunctive (Γ ⊆iD_{2}), or affine (Γ ⊆iL_{2}); otherwise it is NP-complete. Moreover, we need the decision problem AnotherSAT(Γ): Given a conjunctive formula over Γ and a satisfying assignment m, is there another satisfying assignment m^{′} different from m? The complexity of this problem was completely classified by Juban [20]. AnotherSAT(Γ) is polynomial-time decidable if Γ is both 0-valid and 1-valid (Γ ⊆iI), complementive (Γ ⊆iN_{2}), Horn (Γ ⊆iE_{2}), dual Horn (Γ ⊆iV_{2}), bijunctive (Γ ⊆iD_{2}), or affine (Γ ⊆iL_{2}); otherwise it is NP-complete.
Linear and Integer Programming
A unimodular matrix is a square integer matrix having determinant + 1 or − 1. A totally unimodular matrix is a matrix for which every square nonsingular submatrix is unimodular. A totally unimodular matrix need not be square itself, and it has only 0, + 1, or − 1 entries. If A is a totally unimodular matrix and b is an integral vector, then for any given linear functional f such that the linear program min{f(x) | Ax ≥b} attains its minimum at a real point x, it also attains it at an integral point x. That is, the feasible region {x | Ax ≥b} is an integral polyhedron. For this reason, linear programming methods can be used to obtain the solutions for integer linear programs in this case. Linear programs can be solved in polynomial time, hence so can integer programs with totally unimodular matrices. For details see the monograph by Schrijver [30].
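The definition of total unimodularity can be checked directly on small matrices by inspecting every square submatrix; the following sketch (illustrative only, exponential in the matrix dimensions, with function names of our choosing) does exactly that.

```python
from itertools import combinations

def det(M):
    """Determinant of a small square integer matrix by cofactor
    expansion along the first row (fine for the tiny sizes used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def totally_unimodular(A):
    """Check total unimodularity: every square submatrix must have
    determinant -1, 0, or +1.  Exponential -- for small examples only."""
    k, l = len(A), len(A[0])
    for s in range(1, min(k, l) + 1):
        for rows in combinations(range(k), s):
            for cols in combinations(range(l), s):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True
```

For instance, the interval matrix with rows (1,1,0) and (0,1,1) is totally unimodular, while the matrix with rows (1,1) and (1,−1) is not, since its determinant is −2.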
Results
This section presents the problems we consider and our results; the proofs follow in subsequent sections. The input to all our problems is a conjunctive formula over a constraint language. The satisfying assignments of the formula, i.e. its models or solutions, form a Boolean relation that can be understood as an associated generalized binary code. As for linear codes, the minimization target is always the Hamming distance between the codewords or models. Our three problems differ in the information additionally available for computing the required Hamming distance.
Given a formula and an arbitrary assignment, the first problem asks for a solution closest to the given assignment.
 Problem :

NearestSolution(Γ), NSol(Γ)
 Input: :

A conjunctive formula φ over relations from Γ and an assignment m to the variables occurring in φ, which is not required to satisfy φ.
 Solution: :

An assignment m^{′} satisfying φ (i.e. a codeword of the code described by φ).
 Objective: :

Minimum Hamming distance hd(m, m^{′}).
Note that the problem generalizes the MinOnes problem from [22]. Indeed, if we take the all-zero assignment m = 0⋯0 as part of the input, we get exactly the MinOnes problem as a special case.
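For very small formulas, NSol can be evaluated by exhaustive search over all assignments; the following sketch (illustrative only; the representation of constraints as pairs of a relation and a tuple of variable indices is our own) also makes the MinOnes specialization visible.

```python
from itertools import product

def nsol(constraints, m):
    """Brute-force NSol: constraints is a list of (R, vs) pairs, where R is
    a set of 0/1 tuples and vs a tuple of variable indices; m is the given
    assignment as a 0/1 tuple.  Returns (m', hd(m, m')) for a nearest model
    m', or None if the formula is unsatisfiable.  Exponential in len(m)."""
    n = len(m)
    models = [a for a in product((0, 1), repeat=n)
              if all(tuple(a[v] for v in vs) in R for R, vs in constraints)]
    if not models:
        return None

    def dist(a):
        return sum(x != y for x, y in zip(a, m))

    best = min(models, key=dist)
    return best, dist(best)
```

Calling nsol with m = 0⋯0 computes exactly MinOnes, i.e. a model of minimal Hamming weight. For example, with the single constraint or(x_0, x_1), encoded as R = {(0,1),(1,0),(1,1)}, and m = (0,0), the optimal distance is 1.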
Theorem 3 (illustrated in Fig. 1)
For a given Boolean constraint language Γ the optimization problem NSol(Γ) is

(i)
in PO if Γ is

(a)
2-affine (Γ ⊆iD_{1}) or

(b)
monotone (Γ ⊆iM_{2});


(ii)
APX-complete if

(a)
Γ generates iD_{2} (〈Γ〉 = iD_{2}), or

(b)
[x ∨ y] ∈〈Γ〉 and Γ is k-IHSB^{+} \(({\text {iS}_{0}^{2}} \subseteq \langle {\Gamma }\rangle \subseteq \text {iS}_{00}^{k})\) for some \(k \in \mathbb {N}\), k ≥ 2, or

(c)
[¬x ∨¬y] ∈〈Γ〉 and Γ is k-IHSB^{−} \(({\text {iS}_{1}^{2}} \subseteq \langle {\Gamma }\rangle \subseteq \text {iS}_{10}^{k})\) for some \(k \in \mathbb {N}\), k ≥ 2;


(iii)
NearestCodeword-complete if Γ is exactly affine (iL ⊆〈Γ〉⊆iL_{2});

(iv)
MinHornDeletion-complete if Γ is

(a)
exactly Horn (iE ⊆〈Γ〉⊆iE_{2}) or

(b)
exactly dual Horn (iV ⊆〈Γ〉⊆iV_{2});


(v)
poly-APX-complete if Γ does not contain an affine relation and it is

(a)
0-valid (iN ⊆〈Γ〉⊆iI_{0}) or

(b)
1-valid (iN ⊆〈Γ〉⊆iI_{1}); and


(vi)
otherwise (iN_{2} ⊆〈Γ〉) it is NP-complete to decide whether a feasible solution for NSol(Γ) exists.
Proof
The proof is split into several propositions presented in Section 5.

(i)
See Propositions 17 and 18.

(ii)
See Propositions 20, 21, and 22.

(iii)
See Corollary 25 and Proposition 26.

(iv)
See Propositions 29 and 30.

(v)
See Proposition 31.

(vi)
See Proposition 19.
Given a constraint and one of its solutions, the second problem asks for another solution closest to the given one.
 Problem :

NearestOtherSolution(Γ), NOSol(Γ)
 Input: :

A conjunctive formula φ over relations from Γ and a satisfying assignment m (to the variables mentioned in φ).
 Solution: :

An assignment m^{′} ≠ m satisfying φ.
 Objective: :

Minimum Hamming distance hd(m, m^{′}).
The difference between the problems NearestSolution and NearestOtherSolution is whether we know that the input assignment satisfies the constraint. Moreover, for NearestSolution we may output the given assignment itself if it satisfies the formula, while for NearestOtherSolution we must output an assignment different from the one given as the input.
Theorem 4 (illustrated in Fig. 2)
For every constraint language Γ the optimization problemNOSol(Γ) is

(i)
in PO if

(a)
Γ is bijunctive (Γ ⊆iD_{2}) or

(b)
Γ is k-IHSB^{+} \(({\Gamma }\subseteq \text {iS}_{00}^{k})\) for some \(k \in \mathbb {N}\), k ≥ 2, or

(c)
Γ is k-IHSB^{−} \(({\Gamma }\subseteq \text {iS}_{10}^{k})\) for some \(k \in \mathbb {N}\), k ≥ 2;


(ii)
MinDistance-complete if Γ is exactly affine (iL ⊆〈Γ〉⊆iL_{2});

(iii)
MinHornDeletion-complete under AP-Turing-reductions if Γ is

(a)
exactly Horn (iE ⊆〈Γ〉⊆iE_{2}) or

(b)
exactly dual Horn (iV ⊆〈Γ〉⊆iV_{2});


(iv)
in poly-APX if Γ is

(a)
exactly both 0-valid and 1-valid (〈Γ〉 = iI) or

(b)
exactly complementive (iN ⊆〈Γ〉⊆iN_{2}),
where NOSol(Γ) is n-approximable but not (n^{1−ε})-approximable for any ε > 0 unless P = NP;


(v)
and otherwise (iI_{0} ⊆〈Γ〉 or iI_{1} ⊆〈Γ〉) it is NP-complete to decide whether a feasible solution for NOSol(Γ) exists.
Proof
The proof is split into several propositions presented in Section 6.

(i)
See Propositions 33 and 34.

(ii)
See Proposition 44.

(iii)
See Corollary 47.

(iv)
See Propositions 35 and 39.

(v)
See Proposition 35.
The third problem does not take any assignments as input, but asks for two solutions which are as close to each other as possible. Once more, the objective is the Hamming distance between the solutions.
 Problem :

MinSolutionDistance(Γ), MSD(Γ)
 Input: :

A conjunctive formula φ over relations from Γ.
 Solution: :

Two satisfying truth assignments m≠m^{′} to the variables occurring in φ.
 Objective: :

Minimum Hamming distance hd(m, m^{′}).
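For small formulas the MSD objective can be evaluated by enumerating all models and comparing pairs; the sketch below is illustrative only (the constraint encoding and the function name are ours, as in the earlier sketches).

```python
from itertools import product, combinations

def msd(constraints, n):
    """Brute-force MSD: minimum Hamming distance between two distinct
    models of a conjunctive formula over n variables, given as a list of
    (R, vs) pairs with R a set of 0/1 tuples and vs variable indices.
    Returns None if there are fewer than two models.  Exponential in n."""
    models = [a for a in product((0, 1), repeat=n)
              if all(tuple(a[v] for v in vs) in R for R, vs in constraints)]
    if len(models) < 2:
        return None
    return min(sum(x != y for x, y in zip(a, b))
               for a, b in combinations(models, 2))
```

E.g. for the single constraint x_0 ⊕ x_1, whose models are 01 and 10, the minimum solution distance is 2; the empty conjunction over one variable has models 0 and 1 at distance 1.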
The MinSolutionDistance problem generalizes the notion of the minimum distance of an error-correcting code. The following theorem is a more fine-grained analysis of the result published by Vardy in [31], extended to an optimization problem.
Theorem 5 (illustrated in Fig. 3)
For any constraint language Γ the optimization problemMSD(Γ) is

(i)
in PO if Γ is

(a)
bijunctive (Γ ⊆iD_{2}) or

(b)
Horn (Γ ⊆iE_{2}) or

(c)
dual Horn (Γ ⊆iV_{2});


(ii)
MinDistance-complete if Γ is exactly affine (iL ⊆〈Γ〉⊆iL_{2});

(iii)
in poly-APX if dup^{3} ∈〈Γ〉 and Γ is both 0-valid and 1-valid (iN ⊆〈Γ〉⊆iI), where MSD(Γ) is n-approximable but not (n^{1−ε})-approximable for any ε > 0 unless P = NP; and

(iv)
otherwise (iN_{2} ⊆〈Γ〉 or iI_{0} ⊆〈Γ〉 or iI_{1} ⊆〈Γ〉) it is NP-complete to decide whether a feasible solution for MSD(Γ) exists.
Proof
The proof is split into several propositions presented in Section 7.

(i)
See Propositions 48 and 49.

(ii)
See Proposition 55.

(iii)
For Γ ⊆iI, every formula φ over Γ has at least two solutions, since it is both 0-valid and 1-valid. Thus TwoSolutionSAT(Γ) is in P, and Proposition 54 yields that MSD(Γ) is n-approximable. By Proposition 56 this approximation is indeed tight.

(iv)
According to [20], AnotherSAT(Γ) is NP-hard for iI_{0} ⊆〈Γ〉 or iI_{1} ⊆〈Γ〉. By Lemma 51 it follows that TwoSolutionSAT(Γ) is NP-hard, too. For iN_{2} ⊆〈Γ〉 we can reduce the NP-hard problem SAT(Γ) to TwoSolutionSAT(Γ). Hence it is NP-complete to decide whether a feasible solution for MSD(Γ) exists in all three cases.
The three optimization problems can be transformed into decision problems in the usual way: we add an integer bound k to the input and ask whether the Hamming distance satisfies the inequality hd(m, m^{′}) ≤ k. This way we obtain the corresponding decision problems NOSol^{d}, NSol^{d}, and MSD^{d}, respectively. Their complexity follows immediately from the theorems above: all cases in PO become polynomial-time decidable, whereas the other cases, which are APX-hard, become NP-complete. We thus obtain the following dichotomies, classifying each decision problem as polynomial-time decidable or NP-complete for all sets Γ of relations.
Corollary 6
For each constraint language Γ

NSol^{d}(Γ) is in P if Γ is 2-affine or monotone, and it is NP-complete otherwise.

NOSol^{d}(Γ) is in P if Γ is bijunctive, kIHSB^{+}, or kIHSB^{−}, and it is NP-complete otherwise.

MSD^{d}(Γ) is in P if Γ is bijunctive, Horn, or dual Horn, and it is NP-complete otherwise.
Applicability of Clone Theory and Duality
We show that clone theory is applicable to the problem NSol, and we present a way to exploit inner symmetries between coclones; this shortens several proofs in the following sections.
Nearest Solution
There are two natural versions of NSol(Γ). In one version the formula φ is quantifier-free, while in the other we allow existential quantification. We call the former version NSol(Γ) and the latter NSol_{pp}(Γ), and show that both versions are equivalent.
Let NSol^{d}(Γ) and \(\textsf {NSol}^{\mathrm {d}}_{\text {pp}}({\Gamma })\) be the decision problems corresponding to NSol(Γ) and NSol_{pp}(Γ), asking whether there is a satisfying assignment within a given bound.
Proposition 7
For any constraint language Γ, we have \(\text {\textsf {NSol}}^{\mathrm {d}}({\Gamma })\equiv _{\mathrm {m}}\text {\textsf {NSol}}^{\mathrm {d}}_{\text {pp}}({\Gamma })\) and NSol(Γ) ≡_{AP} NSol_{pp}(Γ).
Proof
The reduction from left to right is trivial in both cases. For the other direction, consider first an instance of \(\textsf {NSol}^{\mathrm {d}}_{\text {pp}}({\Gamma })\) with formula φ, assignment m, and bound k. Let x_{1},…,x_{n} be the free variables of φ and let y_{1},…,y_{ℓ} be the existentially quantified ones, which we can assume to be disjoint. By discarding superfluous variables y_{i}, which does not change [φ], we can assume that each variable y_{i} occurs in at least one atom of φ. We construct a quantifier-free formula φ^{′}, in which the free variables of φ are duplicated by a factor λ := (n + ℓ + 1)^{2}, so that the effect of the quantified variables becomes negligible. For each variable z we define the set B(z) as follows:
\[B(x_{i}) := \{{x_{i}^{1}},\dotsc ,x_{i}^{\lambda }\}\ \text {for}\ 1\leq i\leq n, \qquad B(y_{i}) := \{y_{i}\}\ \text {for}\ 1\leq i\leq \ell .\]
For every atom R(z_{1},…,z_{s}) in φ, the quantifier-free formula φ^{′} over the variables \(\bigcup _{i = 1}^{n} B(x_{i}) \cup \bigcup _{i = 1}^{\ell } B(y_{i})\) contains the atom \(R(z_{1}^{\prime }, \ldots , z_{s}^{\prime })\) for every \((z_{1}^{\prime }, \ldots , z_{s}^{\prime })\) from B(z_{1}) ×⋯ × B(z_{s}). Moreover, we construct an assignment B(m) of φ^{′} by assigning to every variable \({x_{i}^{j}}\) the value m(x_{i}) and to y_{i} the value 0. Note that because there is an upper bound on the arities of relations from Γ, this is a polynomial-time construction.
We claim that φ has a solution m^{′} with hd(m, m^{′}) ≤ k if and only if φ^{′} has a solution m^{″} with hd(B(m),m^{″}) ≤ kλ + ℓ. First, observe that if m^{′} with the desired properties exists, then there is an extension \(m^{\prime }_{\mathrm {e}}\) of m^{′} to the y_{i} that satisfies all atoms. Define m^{″} by setting \(m^{\prime \prime }({x_{i}^{j}}):= m^{\prime }(x_{i})\) and \(m^{\prime \prime }(y_{i}):= m^{\prime }_{\mathrm {e}}(y_{i})\) for all i and j. Then m^{″} is clearly a satisfying assignment of φ^{′}. Moreover, m^{″} and B(m) differ in at most kλ variables among the \({x_{i}^{j}}\). Since there are only ℓ other variables y_{i}, we get hd(m^{″}, B(m)) ≤ kλ + ℓ as desired.
Now suppose m^{″} satisfies φ^{′} with hd(B(m),m^{″}) ≤ kλ + ℓ. We may assume for each i that \(m^{\prime \prime }({x_{i}^{1}}) = \cdots = m^{\prime \prime }(x_{i}^{\lambda })\). Indeed, if this is not the case, then setting all \({x_{i}^{j}}\) to \(B(m)({x_{i}^{j}})=m(x_{i})\) will result in a satisfying assignment closer to B(m). After at most n iterations we get some m^{″} as desired. Now define an assignment m^{′} for φ by setting \(m^{\prime }(x_{i}):=m^{\prime \prime }({x_{i}^{1}})\). Then m^{′} satisfies φ, because the variables y_{i} can be assigned values as in m^{″}. Moreover, whenever m(x_{i}) differs from m^{′}(x_{i}), the inequality \(B(m)({x_{i}^{j}}) \neq m^{\prime \prime }({x_{i}^{j}})\) holds for every j. Thus we obtain λ hd(m, m^{′}) ≤hd(B(m),m^{″}) ≤ kλ + ℓ. Therefore, we have the inequality hd(m, m^{′}) ≤ k + ℓ/λ and hence hd(m, m^{′}) ≤ k, since ℓ/λ < 1. This completes the manyone reduction.
To see that the construction above is also an AP-reduction, let m^{″} be an r-approximation for φ^{′} and B(m), i.e., hd(B(m),m^{″}) ≤ r ⋅OPT(φ^{′}, B(m)). Construct m^{′} as before, so λ hd(m, m^{′}) ≤hd(B(m),m^{″}) ≤ r ⋅OPT(φ^{′}, B(m)). Since OPT(φ^{′}, B(m)) ≤ λ OPT(φ, m) + ℓ as above, we get λ hd(m, m^{′}) ≤ r(λ OPT(φ, m) + ℓ). This implies hd(m, m^{′}) ≤ r ⋅OPT(φ, m) + r ⋅ ℓ/λ = (r + o(1)) ⋅OPT(φ, m) and shows that the construction is an AP-reduction with α = 1.
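The duplication step at the heart of this reduction can be sketched as follows; the representation of atoms as (relation, argument-tuple) pairs and the copy-naming scheme are illustrative assumptions:

```python
from itertools import product

def duplicate_free_vars(atoms, free_vars, bound_vars):
    """Sketch of the variable-duplication step of Proposition 7.

    `atoms` is a list of (relation_name, argument_tuple) pairs; the
    data layout and the copy names "x^j" are illustrative.  Each free
    variable gets lam = (n + l + 1)**2 copies, and every atom is
    replicated for all combinations of copies of its arguments.
    """
    n, l = len(free_vars), len(bound_vars)
    lam = (n + l + 1) ** 2
    def copies(z):
        if z in free_vars:
            return [f"{z}^{j}" for j in range(1, lam + 1)]
        return [z]                     # quantified variables stay single
    new_atoms = [(rel, combo)
                 for rel, args in atoms
                 for combo in product(*(copies(z) for z in args))]
    return new_atoms, lam
```

For a single atom R(x, y) with x free and y quantified, n = ℓ = 1 gives λ = 9 and thus nine copies of the atom.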
Remark 8
Note that in the reduction from \(\textsf {NSol}^{\mathrm {d}}_{\text {pp}}({\Gamma })\) to NSol^{d}(Γ) we construct the assignment B(m) as an extension of m by setting all new variables to 0. In particular, if m is the constant 0assignment, then so is B(m). We use this observation as we continue.
The following four technical results are the theoretical backbone of [8], where they had to be omitted due to page limitations. The first of these lemmas allows us to consider constraints with disjoint variables independently.
Lemma 9
Let φ(x, y) = ψ(x) ∧ χ(y) be a Γ-formula over a constraint language Γ and m an assignment to the disjoint variable blocks x and y. Let (φ, m) be an instance of NSol(Γ). Then \(\text {OPT}(\varphi , m) = \text {OPT}(\psi , m\!\!\upharpoonright _{\boldsymbol {x}}) + \text {OPT}(\chi , m\!\!\upharpoonright _{\boldsymbol {y}})\).
Proof
If s ∈ [φ], then \(s\!\!\upharpoonright _{\boldsymbol {x}} \in [\psi ]\) and \(s\!\!\upharpoonright _{\boldsymbol {y}}\in [\chi ]\). Conversely, if s_{ψ} ∈ [ψ] and s_{χ} ∈ [χ], then s := s_{ψ} ∪ s_{χ} is a model of φ. If s ∈ [φ] is optimal, i.e. hd(s, m) = OPT(φ, m), then
\[\text {OPT}(\psi , m\!\!\upharpoonright _{\boldsymbol {x}}) + \text {OPT}(\chi , m\!\!\upharpoonright _{\boldsymbol {y}}) \leq \text {hd}(s\!\!\upharpoonright _{\boldsymbol {x}}, m\!\!\upharpoonright _{\boldsymbol {x}}) + \text {hd}(s\!\!\upharpoonright _{\boldsymbol {y}}, m\!\!\upharpoonright _{\boldsymbol {y}}) = \text {hd}(s, m) = \text {OPT}(\varphi , m).\]
Conversely, if s_{ψ} ∈ [ψ] and s_{χ} ∈ [χ] are optimal solutions for their respective problems, then s := s_{ψ} ∪ s_{χ} satisfies
\[\text {hd}(s, m) = \text {hd}(s_{\psi }, m\!\!\upharpoonright _{\boldsymbol {x}}) + \text {hd}(s_{\chi }, m\!\!\upharpoonright _{\boldsymbol {y}}) = \text {OPT}(\psi , m\!\!\upharpoonright _{\boldsymbol {x}}) + \text {OPT}(\chi , m\!\!\upharpoonright _{\boldsymbol {y}}) \geq \text {OPT}(\varphi , m).\]
□
We can also show that introducing explicit equality constraints does not change the complexity of our problem. We need two introductory lemmas. The first one deals with equalities that do not interfere with the other atoms of the given formula.
Lemma 10
For constraint languages Γ, NSol(Γ ∪{≈}) and NSol^{d}(Γ ∪{≈}) reduce to the particular cases of the respective problems where, for each constraint x ≈ y in the given formula φ, at least one of x, y also occurs in some Γ-atom of φ.
Proof
Let (φ, m) be an instance of NSol(Γ ∪{≈}). Without loss of generality we assume φ to be of the form ψ ∧ ε, where ψ is a Γ-formula and ε is a {≈}-formula. Let (V_{i})_{i∈I} be the unique finest partition of the variables in ε such that x and y are in the same partition class whenever x ≈ y occurs in ε.
For each index i ∈ I we designate a specific variable x_{i} ∈ V_{i}. Let ψ^{′} be the formula obtained from ψ by substituting all occurrences of variables y ∈ V_{i} by x_{i}. Moreover, let I^{′} be the set of indices i ∈ I such that x_{i} actually occurs in ψ^{′}, and let \(I^{\prime \prime }:=I\smallsetminus I^{\prime }\) be the set of indices without this property. We set \(\varepsilon ^{\prime }:=\bigwedge _{i\in I^{\prime }}\varepsilon _{i}\) and \(\varepsilon ^{\prime \prime }:=\bigwedge _{i\in I^{\prime \prime }}\varepsilon _{i}\), where the formula \(\varepsilon _{i}:=\bigwedge _{y\in V_{i}}(x_{i} \approx y)\) expresses the equivalence of the variables in V_{i}. Note that the formulas ψ ∧ ε and χ := ψ^{′}∧ ε^{′}∧ ε^{″} contain the same variables and have identical sets of models.
Now consider the formula φ^{′} := ψ^{′}∧ ε^{′} and the assignment \(m^{\prime } := m\!\!\upharpoonright _{V^{\prime }}\), where V^{′} is the set of variables occurring in φ^{′}. The pair (φ^{′}, m^{′}) is an NSol(Γ ∪{≈})instance with the additional properties stated in the lemma. By construction we have χ = φ^{′}∧ ε^{″}, where the set V^{′} of variables in φ^{′} and the set V^{″} of variables in ε^{″} are disjoint. By Lemma 9 we obtain \(\text {OPT}(\varphi ,m) = \text {OPT}(\chi ,m) =\text {OPT}(\varphi ^{\prime },m^{\prime }) + \text {OPT}(\varepsilon ^{\prime \prime },m\!\!\upharpoonright _{V^{\prime \prime }})\).
An optimal solution \(s_{\varepsilon ^{\prime \prime }}\) of ε^{″} and the optimal value \(d:=\text {OPT}(\varepsilon ^{\prime \prime },m\!\!\upharpoonright _{V^{\prime \prime }})\) can obviously be computed in polynomial time. Therefore the instance (φ, m, k) of NSol^{d}(Γ ∪{≈}) corresponds to the instance (φ^{′}, m^{′}, k − d) of the restricted decision problem under a polynomial-time many-one reduction.
Moreover, if s^{′} is an r-approximate solution of (φ^{′}, m^{′}) for some r ≥ 1, then \(s:=s^{\prime }\cup s_{\varepsilon ^{\prime \prime }}\) is a solution of φ, and we have
\[\text {hd}(s,m) = \text {hd}(s^{\prime },m^{\prime }) + d \leq r\cdot \text {OPT}(\varphi ^{\prime },m^{\prime }) + d \leq r\cdot \bigl (\text {OPT}(\varphi ^{\prime },m^{\prime }) + d\bigr ) = r\cdot \text {OPT}(\varphi ,m),\]
so the constructed solution s of φ is also r-approximate. This concludes the proof of the AP-reduction with factor α = 1.
When dealing with NSol(Γ ∪{≈}), the previous lemma enables us to concentrate on instances where the formula φ has the form
\[\varphi = \psi \land \bigwedge _{i = 1}^{t}\bigwedge _{y\in V_{i}} (x_{i}\approx y),\]
where V_{1},…,V_{t} are disjoint sets of variables, also disjoint from the variables of the Γ-formula ψ. For each 1 ≤ i ≤ t the given assignment m can have equal distance to the zero vector and the all-ones vector on the variables in V_{i} ∪{x_{i}}, or it can be closer to one of the constant vectors. It is convenient to group the equality constraints according to these three cases. The following lemma discusses how to remove those equality constraints on whose variables m is not equidistant from 0 and 1.
Lemma 11
Let Γ be a constraint language and
\[\psi (z_{1},\dotsc ,z_{n},x_{1},\dotsc ,x_{\alpha },v_{1},\dotsc ,v_{\beta },w_{1},\dotsc ,w_{\gamma })\]
be any Γ-formula containing precisely the distinct variables z_{1},…,z_{n}, x_{1},…,x_{α}, v_{1},…,v_{β} and w_{1},…,w_{γ}. Consider a formula
\[\varphi := \psi \land \bigwedge _{a = 1}^{\alpha }\bigwedge _{x\in I_{a}^{\prime }} (x_{a}\approx x) \land \bigwedge _{b = 1}^{\beta }\bigwedge _{v\in J_{b}^{\prime }} (v_{b}\approx v) \land \bigwedge _{c = 1}^{\gamma }\bigwedge _{w\in K_{c}^{\prime }} (w_{c}\approx w),\]
where \(I_{1}^{\prime },\dotsc ,I_{\alpha }^{\prime }\), \(J_{1}^{\prime },\dotsc ,J_{\beta }^{\prime }\) and \(K_{1}^{\prime },\dotsc ,K_{\gamma }^{\prime }\) are nonempty sets of variables that are pairwise disjoint and disjoint from the variables in ψ. For 1 ≤ a ≤ α, 1 ≤ b ≤ β and 1 ≤ c ≤ γ we put \(I_{a}:= I_{a}^{\prime } \cup \{x_{a}\}\), \(J_{b}:= J_{b}^{\prime } \cup \{v_{b}\}\) and \(K_{c} := K_{c}^{\prime }\cup \{w_{c}\}\). Moreover, let m be an assignment for φ such that for 1 ≤ a ≤ α, 1 ≤ b ≤ β and 1 ≤ c ≤ γ
\[\text {hd}(m\!\!\upharpoonright _{I_{a}},\boldsymbol {0}) = \text {hd}(m\!\!\upharpoonright _{I_{a}},\boldsymbol {1}),\qquad d_{0,J_{b}} := \text {hd}(m\!\!\upharpoonright _{J_{b}},\boldsymbol {0}) < \text {hd}(m\!\!\upharpoonright _{J_{b}},\boldsymbol {1}),\qquad d_{1,K_{c}} := \text {hd}(m\!\!\upharpoonright _{K_{c}},\boldsymbol {1}) < \text {hd}(m\!\!\upharpoonright _{K_{c}},\boldsymbol {0}).\]
It is possible to construct a formula ψ^{′}, whose size is polynomial in the size of φ, and an assignment M for \(\varphi ^{\prime }:= \psi ^{\prime }\land \bigwedge _{a = 1}^{\alpha }\bigwedge _{x\in I_{a}^{\prime }} (x_{a}\approx x)\) such that the following holds

ψ, φ, φ^{′} and ψ^{′} are equisatisfiable;

if ψ is satisfiable, then OPT(φ, m) = OPT(φ^{′}, M) + d where \(d = \sum _{b = 1}^{\beta } d_{0,J_{b}} +\sum _{c = 1}^{\gamma } d_{1,K_{c}}\);

for every r ∈ [1,∞), one can produce an r-approximate solution of (φ, m) from any r-approximate solution of (φ^{′}, M) in polynomial time.
Proof
First, we describe how to construct the formula ψ^{′}. In the following we use the abbreviations Z := {z_{1},…,z_{n},x_{1},…,x_{α}}, \(Z^{\prime }:=Z\cup \bigcup _{a = 1}^{\alpha } I_{a}\), V := {v_{1},…,v_{β}} and W := {w_{1},…,w_{γ}}. For every variable u ∈ Z ∪ V ∪ W define a set B(u) of variables as follows:
\[B(u) := \{u\}\ \text {for}\ u\in Z,\qquad B(v_{b}) := \{{v_{b}^{1}},\dotsc ,v_{b}^{e_{b}}\},\qquad B(w_{c}) := \{{w_{c}^{1}},\dotsc ,w_{c}^{f_{c}}\},\]
with multiplicities \(e_{b} := \text {hd}(m\!\!\upharpoonright _{J_{b}},\boldsymbol {1}) - d_{0,J_{b}} > 0\) and \(f_{c} := \text {hd}(m\!\!\upharpoonright _{K_{c}},\boldsymbol {0}) - d_{1,K_{c}} > 0\) for 1 ≤ b ≤ β and 1 ≤ c ≤ γ.
For each atom R(u_{1},…,u_{q}) of ψ define a set of atoms \(\{R(u_{1}^{\prime },\dotsc ,u_{q}^{\prime })\mid (u_{1}^{\prime },\dotsc ,u_{q}^{\prime })\in \prod _{i = 1}^{q} B(u_{i})\}\), take the union over all these sets and define ψ^{′} as the conjunction of all its members, giving a formula over Z ∪ V^{′}∪ W^{′} where \(V^{\prime } =\bigcup _{u\in V} B(u)\) and \(W^{\prime } = \bigcup _{u\in W} B(u)\). Adding again the equality constraints on whose variables m has equal distance from 0 and 1, we get \(\varphi ^{\prime } = \psi ^{\prime }\land \bigwedge _{a = 1}^{\alpha }\bigwedge _{x\in I_{a}^{\prime }} (x_{a}\approx x)\) over Z^{′}∪ V^{′}∪ W^{′}. This is a polynomial-time construction since the arities of relations in Γ are bounded.
Moreover, we define an assignment M to the variables u of φ^{′} as follows:
\[M(u) := m(u)\ \text {for}\ u\in Z^{\prime },\qquad M(u) := 0\ \text {for}\ u\in V^{\prime },\qquad M(u) := 1\ \text {for}\ u\in W^{\prime }.\]
Let S^{′} be a solution of (φ^{′}, M). If S^{′} is constant on B(u), for each u ∈ V ∪ W, then put S^{″} := S^{′}. Otherwise, by letting S^{″}(u) := S^{′}(u) for u ∈ Z^{′} and for u ∈ B(u^{′}) where u^{′}∈ V ∪ W is such that S^{′} is constant on B(u^{′}), and by defining S^{″}(u) := M(u) = 0 for the remaining variables u ∈ V^{′} and S^{″}(u) := M(u) = 1 for the remaining variables u ∈ W^{′}, we obtain a model S^{″} of φ^{′} satisfying hd(S^{″}, M) ≤hd(S^{′}, M) and being constant on B(u) for each u ∈ V ∪ W. From S^{″} we construct an assignment S of φ by defining S(u) := S^{″}(u) for u ∈ Z^{′}, \(S(u):=S^{\prime \prime }({v_{b}^{1}})\) for u ∈ J_{b} and 1 ≤ b ≤ β, and \(S(u):=S^{\prime \prime }({w_{c}^{1}})\) for u ∈ K_{c} and 1 ≤ c ≤ γ. It satisfies φ as e_{b},f_{c} > 0 for 1 ≤ b ≤ β and 1 ≤ c ≤ γ. From these definitions, it follows
because S^{″} is constant on B(u) for u ∈ V ∪ W and |B(v_{b})| = e_{b}, |B(w_{c})| = f_{c} for 1 ≤ b ≤ β and 1 ≤ c ≤ γ; and
Consequently, hd(S, m) = hd(S^{″}, M) + d, where \(d=\sum _{b = 1}^{\beta } d_{0,J_{b}} + \sum _{c = 1}^{\gamma } d_{1,K_{c}}\).
Using this, we shall prove below that OPT(φ^{′}, M) + d = OPT(φ, m). Thus, if S^{′} now takes the role of an r-approximate solution of (φ^{′}, M) for some r ≥ 1, then it follows that
\[\text {hd}(S,m) = \text {hd}(S^{\prime \prime },M) + d \leq \text {hd}(S^{\prime },M) + d \leq r\cdot \text {OPT}(\varphi ^{\prime },M) + d \leq r\cdot \bigl (\text {OPT}(\varphi ^{\prime },M) + d\bigr ) = r\cdot \text {OPT}(\varphi ,m).\]
Let subsequently S^{′} be such that OPT(φ^{′}, M) = hd(S^{′}, M), and let s be a model of φ. Construct a model s^{′} of φ^{′} by putting s^{′}(u) := s(u) for u ∈ Z^{′} and s^{′}(u) := s(u^{′}) for u ∈ B(u^{′}) and u^{′}∈ V ∪ W. As above we get hd(s, m) = hd(s^{′}, M) + d because the definitions imply
By minimality of S^{′}, we obtain hd(S^{″}, M) ≤hd(S^{′}, M) ≤hd(s^{′}, M). If we additionally require that s be an optimal solution of (φ, m), then hd(s^{′}, M) = hd(s, m) − d ≤hd(S, m) − d = hd(S^{″}, M). Thus, the distances hd(S^{″}, M), hd(S^{′}, M) and hd(s^{′}, M) coincide, which implies the desired equality OPT(φ, m) = hd(s, m) = hd(s^{′}, M) + d = hd(S^{′}, M) + d = OPT(φ^{′}, M) + d.
The previous lemma, in fact, describes an AP-reduction from the specialized version of the problem NSol(Γ ∪{≈}) discussed in Lemma 10 to an even more specialized variant (the analogous statement is true for the decision version: instances (φ, m, k) can be decided by considering (φ^{′}, M, k − d) instead): namely, all equality constraints touch variables in Γ-atoms and the given assignment has equal distance from the constant tuples on each variable block connected by equalities. In the next result we show how to remove also these equality constraints.
Proposition 12
For constraint languages Γ, we haveNSol^{d}(Γ) ≡_{m}NSol^{d}(Γ ∪{≈}) andNSol(Γ) ≡_{AP}NSol(Γ ∪{≈}).
Proof
The reduction from left to right is trivial. For the other direction, consider first an instance of NSol^{d}(Γ ∪{≈}) with formula φ, assignment m, and bound k. Applying the reductions indicated in Lemmas 10 and 11, we can assume (also for NSol(Γ ∪{≈})) that φ is of the form \(\psi \land \bigwedge _{a = 1}^{\alpha } \bigwedge _{x\in I^{\prime }_{a}} (x_{a} \approx x)\) with a Γ-formula ψ containing the distinct variables z_{1},…,z_{n},x_{1},…,x_{α} (n ≥ 0, α ≥ 1) and nonempty disjoint (from each other and from ψ) variable sets \(I^{\prime }_{a}\) for 1 ≤ a ≤ α. Moreover, we can suppose that \(\text {hd}(m\!\upharpoonright _{I_{a}},\boldsymbol {0}) = \text {hd}(m \upharpoonright _{I_{a}},\boldsymbol {1}) =:c_{a}\) for all 1 ≤ a ≤ α, where I_{a} denotes the set \(I^{\prime }_{a}\cup \{x_{a}\}\).
We define \(c := \sum _{a = 1}^{\alpha } c_{a}\), and we choose some ℓ-element index set I such that α/ℓ < 1, that is, ℓ ≥ α + 1 (we shall place another condition on ℓ at the end). We construct a formula φ^{′} as follows: For each atom R(u_{1},…,u_{q}) of ψ we introduce the set \(\{R(u_{1}^{i_{1}},\dotsc ,u_{q}^{i_{q}})\mid (i_{1},\dotsc ,i_{q})\in I^{q}\}\) of atoms, where for 1 ≤ ν ≤ q and i ∈ I we let \(u_{\nu }^{i} := z_{j,i}\) if u_{ν} = z_{j} for some 1 ≤ j ≤ n, and \(u_{\nu }^{i} := u_{\nu }\) otherwise, i.e., if u_{ν} ∈{x_{1},…,x_{α}}. Take the union over all these sets and let φ^{′} be the conjunction of all atoms in this union. This construction can be carried out in polynomial time since there is a bound on the arities of relations in Γ. Define an assignment M by M(x_{a}) := m(x_{a}) for 1 ≤ a ≤ α and M(z_{j, i}) := m(z_{j}) for 1 ≤ j ≤ n and i ∈ I. We claim that existence of solutions for (φ, m, k) can be decided by checking for solutions of (φ^{′}, M, ℓ(k − c) + α). The argument is similar to that of Proposition 7: ψ is (un)satisfiable if and only if φ and φ^{′} are, so we have a correct answer in the unsatisfiable case. Otherwise, consider a solution s to (φ, m, k). Letting Z := {z_{1},…,z_{n}}, we have
\[\text {hd}(s,m) = \text {hd}(s\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z}) + c \leq k,\]
i.e., \(\text {hd}(s\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z})\leq k - c\). Putting s^{′}(x_{a}) := s(x_{a}) for 1 ≤ a ≤ α and s^{′}(z_{j, i}) := s(z_{j}) for 1 ≤ j ≤ n and i ∈ I we get a model of φ^{′}, and it follows that \(\text {hd}(s^{\prime }\!\!\upharpoonright _{Z^{\prime }}, M\!\!\upharpoonright _{Z^{\prime }}) =\ell \cdot \text {hd}(s\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z})\leq \ell \cdot (k - c)\), where Z^{′} := {z_{j, i}∣1 ≤ j ≤ n, i ∈ I}. Therefore, abbreviating X := {x_{1},…,x_{α}}, we obtain \(\text {hd}(s^{\prime },M) = \text {hd}(s^{\prime }\!\!\upharpoonright _{Z^{\prime }},M\!\!\upharpoonright _{Z^{\prime }}) + \text {hd}(s^{\prime }\!\!\upharpoonright _{X},M\!\!\upharpoonright _{X}) \leq \ell \cdot (k - c)+\alpha \).
Conversely, let S^{′} be a solution of (φ^{′}, M, ℓ(k − c) + α). As in Proposition 7 we can construct a solution S^{″} being constant on {z_{j, i}∣i ∈ I} for each 1 ≤ j ≤ n. Letting S(x) := S^{″}(x_{a}) for x ∈ I_{a} and 1 ≤ a ≤ α and S(z_{j}) := S^{″}(z_{j, i}) for some fixed index i ∈ I and all 1 ≤ j ≤ n, one obtains a model of φ. If S(z_{j})≠m(z_{j}) for some 1 ≤ j ≤ n, then we have S^{″}(z_{j, i}) = S(z_{j})≠m(z_{j}) = M(z_{j, i}) for all i ∈ I. Hence, we have \( \ell \cdot \text {hd}(S\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z}) \leq \text {hd}(S^{\prime \prime }\!\!\upharpoonright _{Z^{\prime }},M\!\!\upharpoonright _{Z^{\prime }}) \leq \text {hd}(S^{\prime \prime },M) \leq \text {hd}(S^{\prime },M) \). Division by ℓ implies \(\text {hd}(S\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z})\leq \text {hd}(S^{\prime },M)/\ell \leq k - c + \alpha /\ell < k - c + 1\), i.e., \(\text {hd}(S\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z})\leq k - c\). From this we finally infer that \(\text {hd}(S,m) = \text {hd}(S\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z}) + c \leq k\).
Suppose now that S^{′} is an r-approximate solution for (φ^{′}, M) for some r ≥ 1, i.e., we have hd(S^{′}, M) ≤ r OPT(φ^{′}, M). Constructing a model S of φ as before, we obtain \(\ell \text { hd}(S\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z})\leq \text {hd}(S^{\prime },M)\leq r\text { OPT}(\varphi ^{\prime },M)\). Furthermore, from an optimal solution of φ, we get a model s^{′} of φ^{′} satisfying
\[\text {OPT}(\varphi ^{\prime },M) \leq \text {hd}(s^{\prime },M) \leq \ell \cdot \bigl (\text {OPT}(\varphi ,m) - c\bigr ) + \alpha .\]
Multiplying this inequality by r, combining it with the previous inequalities and dividing by ℓ we thus have \(\text {hd}(S\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z})\leq r\text { OPT}(\varphi ,m) - rc + r\alpha /\ell \). Note that OPT(φ, m) > 0, because if OPT(φ, m) = 0, then we would have a unique optimal model of φ, namely m. Then \(m\!\!\upharpoonright _{I_{1}}\) would have to be constant, implying \(\text {hd}(m\!\!\upharpoonright _{I_{1}},\boldsymbol {0}) \neq \text {hd}(m\!\!\upharpoonright _{I_{1}},\boldsymbol {1})\), as one distance would be zero and the other one |I_{1}| > 0. Therefore, for ℓ ∈Ω (|φ|^{2}) we have \(\text {hd}(S,m) = \text {hd}(S\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z}) + c \leq \text {hd}(S\!\!\upharpoonright _{Z},m\!\!\upharpoonright _{Z}) + rc \leq r\text { OPT}(\varphi ,m)+r\alpha /\ell \leq \text {OPT}(\varphi ,m)(r + r\alpha /\ell ) = \text {OPT}(\varphi ,m)(r + \mathrm {o}(1))\). This demonstrates an AP-reduction with factor 1.
Propositions 7 and 12 allow us to switch freely between formulas with quantifiers and equality and those without. Hence we may derive upper bounds in the setting without quantifiers and equality, while using the latter in hardness reductions. In particular, we can use pp-definability when implementing a constraint language Γ by another constraint language Γ^{′}. Hence it suffices to consider Post’s lattice of coclones to characterize the complexity of NSol(Γ) for every finite constraint language Γ.
Corollary 13
For constraint languages Γ and Γ^{′}, for which the inclusion Γ^{′}⊆〈Γ〉 holds, we have the reductions NSol^{d}(Γ^{′}) ≤_{m} NSol^{d}(Γ) and NSol(Γ^{′}) ≤_{AP} NSol(Γ). Thus, if 〈Γ^{′}〉 = 〈Γ〉 is satisfied, then the equivalences NSol^{d}(Γ) ≡_{m} NSol^{d}(Γ^{′}) and NSol(Γ) ≡_{AP} NSol(Γ^{′}) hold.
Next we prove that, in certain cases, unit clauses in the formula do not change the complexity of NSol.
Proposition 14
Let Γ be a constraint language such that feasible solutions of NSol(Γ) can be found in polynomial time. Then we have NSol(Γ) ≡_{AP} NSol(Γ ∪{[x],[¬x]}).
Proof
The direction from left to right is obvious. For the other direction, we give an AP-reduction from NSol(Γ ∪{[x],[¬x]}) to NSol(Γ ∪{≈}). The latter is AP-equivalent to NSol(Γ) by Proposition 12.
The idea of the construction is to introduce two sets of variables \(y_{1}, \ldots , y_{n^{2}}\) and \(z_{1}, \ldots , z_{n^{2}}\) such that in any feasible solution all y_{i} and all z_{i} take the same value. By setting m(y_{i}) = 1 and m(z_{i}) = 0 for each i, any feasible solution m^{′} of small Hamming distance to m will have \(m^{\prime }(y_{i})= 1\) and m^{′}(z_{i}) = 0 for all i as well, because deviating from this would be prohibitively expensive. Finally, we simulate the unary relations x and ¬x by x ≈ y_{1} and x ≈ z_{1}, respectively. We now describe the reduction formally.
Consider a formula φ over Γ ∪{[x],[¬x]} with the variables x_{1},…,x_{n} and an assignment m. If (φ, m) fails to have feasible solutions, i.e., if φ is unsatisfiable, we can detect this in polynomial time by the assumption of the proposition and return the generic unsatisfiable instance ⊥. Otherwise, we construct a (Γ ∪{≈})-formula φ^{′} over the variables x_{1},…,x_{n}, \(y_{1}, \ldots , y_{n^{2}}\), \(z_{1}, \ldots , z_{n^{2}}\) and an assignment m^{′}. We obtain φ^{′} from φ by replacing every occurrence of a constraint [x] by x ≈ y_{1} and every occurrence of [¬x] by x ≈ z_{1}. Finally, we add the atoms y_{i} ≈ y_{1} and z_{i} ≈ z_{1} for all i ∈{2,…,n^{2}}. Let m^{′} be the assignment of the variables of φ^{′} given by \(m^{\prime }(x_{i}) = m(x_{i})\) for each i ∈{1,…,n}, and \(m^{\prime }(y_{i})= 1\) and m^{′}(z_{i}) = 0 for all i ∈{1,…,n^{2}}. To any feasible solution m^{″} of φ^{′} we assign \(g(\varphi , m, m^{\prime \prime })\) as follows.

1.
If φ is satisfied by m, we define g(φ, m, m^{″}) to be equal to m.

2.
Else if m^{″}(y_{i}) = 0 holds for all i ∈{1,…,n^{2}} or m^{″}(z_{i}) = 1 for all i ∈{1,…,n^{2}}, we define g(φ, m, m^{″}) to be any satisfying assignment of φ.

3.
Otherwise, we have m^{″}(y_{i}) = 1 and m^{″}(z_{i}) = 0 for all i ∈{1,…,n^{2}}. In this case we define g(φ, m, m^{″}) to be the restriction of m^{″} onto x_{1},…,x_{n}.
Observe that all variables y_{i} and all z_{i} are forced to take the same value in any feasible solution, respectively, so g(φ, m, m^{″}) is always well-defined. The construction is an AP-reduction: assume that m^{″} is an r-approximate solution. We will show that g(φ, m, m^{″}) is also an r-approximate solution.
Case 1: g(φ, m, m^{″}) is the optimal solution, so there is nothing to show.
Case 2: Observe first that φ has a solution because otherwise it would have been mapped to ⊥ and m^{″} would not exist. Thus, g(φ, m, m^{″}) is well-defined and feasible by construction. Observe that m^{′} and m^{″} disagree on all y_{i} or on all z_{i}, so we have hd(m^{′}, m^{″}) ≥ n^{2}. Moreover, since φ has a feasible solution, it follows that OPT(φ^{′}, m^{′}) ≤ n. Since m^{″} is an r-approximate solution, we have that
\[n^{2} \leq \text {hd}(m^{\prime },m^{\prime \prime }) \leq r\cdot \text {OPT}(\varphi ^{\prime },m^{\prime }) \leq r\cdot n.\]
If OPT(φ^{′}, m^{′}) = 0, then m^{′} would have to be a model of φ^{′}, and so would be its restriction to the x_{i}, i.e. m, a model of φ. This is handled in the first case, which is disjoint from the current one; hence, we infer n ≤ r. Consequently, the distance hd(m, g(φ, m, m^{″})) is bounded above by n ≤ r ≤ r ⋅OPT(φ, m), where the last inequality holds because φ is not satisfied by m and thus the distance of any optimal solution from m is at least 1.
Case 3: The variables x_{i}, for which [x_{i}] is a constraint, all satisfy g(φ, m, m^{″})(x_{i}) = 1 by construction. Moreover, we have g(φ, m, m^{″})(x_{i}) = 0 for all x_{i} for which [¬x_{i}] is a constraint of φ. Consequently, g(φ, m, m^{″}) is feasible. Again, OPT(φ^{′}, m^{′}) ≤ n, so any optimal solution to (φ^{′}, m^{′}) must set all variables y_{i} to 1 and all z_{i} to 0. It follows that OPT(φ, m) = OPT(φ^{′}, m^{′}). Thus we get
\[\text {hd}(m, g(\varphi ,m,m^{\prime \prime })) \leq \text {hd}(m^{\prime },m^{\prime \prime }) \leq r\cdot \text {OPT}(\varphi ^{\prime },m^{\prime }) = r\cdot \text {OPT}(\varphi ,m),\]
which completes the proof.
Inapplicability of Clone Closure
Corollary 13 shows that the complexity of NSol is not affected by existential quantification by giving an explicit reduction from NSol_{pp} to NSol. It does not seem possible to prove the same for NOSol and MSD. However, similar results hold for the conjunctive closure; thus we resort to minimal or irredundant weak bases of coclones instead of usual bases.
Proposition 15
Let Γ and Γ^{′} be constraint languages. If Γ^{′}⊆〈Γ〉_{∧} holds, then we have the reductions NOSol^{d}(Γ^{′}) ≤_{m} NOSol^{d}(Γ) and NOSol(Γ^{′}) ≤_{AP} NOSol(Γ), as well as MSD^{d}(Γ^{′}) ≤_{m} MSD^{d}(Γ) and MSD(Γ^{′}) ≤_{AP} MSD(Γ).
Proof
We prove only the part that Γ^{′}⊆〈Γ〉_{∧} implies NOSol(Γ^{′}) ≤_{AP}NOSol(Γ). The other results will be clear from that reduction since the proof is generic and therefore holds for both NOSol and MSD, as well as for their decision variants.
Let a Γ^{′}formula φ be an instance of NOSol(Γ^{′}). Since Γ^{′}⊆〈Γ〉_{∧}, every constraint R(x_{1},…,x_{k}) of φ can be written as a conjunction of constraints over relations from Γ. Substitute the latter into φ, obtaining φ^{′}. Now φ^{′} is an instance of NOSol(Γ), where φ^{′} is only polynomially larger than φ. As φ and φ^{′} have the same variables and hence the same models, also the closest distinct models of φ and φ^{′} are the same.
Duality
Given a relation R ⊆{0,1}^{n}, its dual relation is \(\text {dual}(R) = \{\overline {m}\mid m \in R\}\), i.e., the relation containing the complements of tuples from R. Duality naturally extends to sets of relations and coclones. We define dual(Γ) = {dual(R)∣R ∈Γ} as the set of relations dual to those in Γ. Since taking complements is involutive, duality is a symmetric relation: if a relation R^{′} (a set of relations Γ^{′}) is dual to R (to Γ), then R (Γ) is also dual to R^{′} (to Γ^{′}). By a simple inspection of the bases of coclones in Table 2, we can easily see that many coclones are dual to each other; for instance, iE_{2} is dual to iV_{2}. The following proposition shows that it is sufficient to consider only one half of Post’s lattice of coclones.
Proposition 16
For any constraint language Γ we have
\[\textsf {NSol}({\Gamma }) \equiv _{\text {AP}} \textsf {NSol}(\text {dual}({\Gamma })),\qquad \textsf {NOSol}({\Gamma }) \equiv _{\text {AP}} \textsf {NOSol}(\text {dual}({\Gamma })),\qquad \textsf {MSD}({\Gamma }) \equiv _{\text {AP}} \textsf {MSD}(\text {dual}({\Gamma })),\]
as well as
\[\textsf {NSol}^{\mathrm {d}}({\Gamma }) \equiv _{\mathrm {m}} \textsf {NSol}^{\mathrm {d}}(\text {dual}({\Gamma })),\qquad \textsf {NOSol}^{\mathrm {d}}({\Gamma }) \equiv _{\mathrm {m}} \textsf {NOSol}^{\mathrm {d}}(\text {dual}({\Gamma })),\qquad \textsf {MSD}^{\mathrm {d}}({\Gamma }) \equiv _{\mathrm {m}} \textsf {MSD}^{\mathrm {d}}(\text {dual}({\Gamma })).\]
Proof
Let φ be a Γ-formula and m an assignment to φ. We construct a dual(Γ)-formula φ^{′} by substituting every atom R(x) with dual(R)(x). The assignment m satisfies φ if and only if \(\overline {m}\) satisfies φ^{′}, where \(\overline {m}\) is the pointwise complement of m. Moreover, \(\text {hd}(m, m^{\prime }) = \text {hd}(\overline {m}, \overline {m}^{\prime })\).
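Viewing relations as sets of 0/1-tuples, the dualization used in this proof is a one-line operation; the following sketch also checks on a small example that duality is involutive:

```python
def dual(R):
    """Dual of a Boolean relation given as a set of 0/1-tuples:
    complement every tuple componentwise."""
    return {tuple(1 - b for b in t) for t in R}

IMPL = {(0, 0), (0, 1), (1, 1)}                 # the relation [x -> y]
assert dual(IMPL) == {(1, 1), (1, 0), (0, 0)}   # the relation [y -> x]
assert dual(dual(IMPL)) == IMPL                 # duality is involutive
```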
Finding the Nearest Solution
This section contains the proof of Theorem 3. We first consider the polynomialtime cases followed by the cases of higher complexity.
Polynomial-Time Cases
Proposition 17
If a constraint language Γ is both bijunctive and affine (Γ ⊆iD_{1}), then NSol(Γ) can be solved in polynomial time.
Proof
Since Γ ⊆iD_{1} = 〈Γ^{′}〉 with Γ^{′} := {[x ⊕ y],[x]}, we have NSol(Γ) ≤_{AP} NSol(Γ^{′}) by Corollary 13. Every Γ^{′}-formula φ is equivalent to a system of linear equations over the Boolean ring \(\mathbb {Z}_{2}\) consisting of equations of the types x ⊕ y = 1 and x = 1. Substitute the fixed values x = 1 into the equations of the type x ⊕ y = 1 and propagate. If a contradiction is found thereby, reject the input. After an exhaustive application of this rule only equations of the form x ⊕ y = 1 remain. For each of them put an edge {x, y} into E, defining an undirected graph G = (V, E), whose vertices V are the unassigned variables. If G is not bipartite, then φ has no solutions, so we can reject the input. Otherwise, compute a bipartition \(V = L \dot \cup R\). We assume that G is connected; if not, perform the following algorithm for each connected component (cf. Lemma 9). Assign the value 0 to each variable in L and the value 1 to each variable in R, giving the satisfying assignment m_{1}. Swapping the roles of 0 and 1 with respect to L and R we get a model m_{2}. Return min{hd(m, m_{1}),hd(m, m_{2})}.
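The steps of this algorithm can be sketched as follows, assuming the input is already split into unit constraints x = 1 and equations x ⊕ y = 1; the function and variable names are illustrative:

```python
from collections import defaultdict, deque

def nsol_xor(units, xors, m):
    """Sketch of the algorithm of Proposition 17 for formulas over
    {[x XOR y], [x]}.  `units` lists variables forced to 1, `xors`
    lists pairs (x, y) with x + y = 1, and `m` is the given assignment
    as a dict; variables occurring in no constraint are ignored.
    Returns the minimum Hamming distance of a model to m, or None if
    the formula is unsatisfiable.
    """
    val = {x: 1 for x in units}
    changed = True                       # propagate forced values
    while changed:
        changed = False
        for x, y in xors:
            if x in val and y not in val:
                val[y] = 1 - val[x]; changed = True
            elif y in val and x not in val:
                val[x] = 1 - val[y]; changed = True
            elif x in val and y in val and val[x] == val[y]:
                return None              # contradiction with x + y = 1
    dist = sum(m[x] != v for x, v in val.items())
    adj = defaultdict(list)              # graph on unassigned variables
    for x, y in xors:
        if x not in val:                 # then y is unassigned as well
            adj[x].append(y); adj[y].append(x)
    seen = set()
    for start in list(adj):
        if start in seen:
            continue
        colour = {start: 0}; seen.add(start)
        queue = deque([start])
        while queue:                     # 2-colour the component by BFS
            u = queue.popleft()
            for w in adj[u]:
                if w not in colour:
                    colour[w] = 1 - colour[u]; seen.add(w); queue.append(w)
                elif colour[w] == colour[u]:
                    return None          # odd cycle: G is not bipartite
        d0 = sum(m[x] != c for x, c in colour.items())
        d1 = sum(m[x] != 1 - c for x, c in colour.items())
        dist += min(d0, d1)              # best of the two colourings
    return dist
```

For instance, with the single equation a ⊕ b = 1 and m the all-zeroes assignment, both models 01 and 10 are at distance 1.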
Proposition 18
If a constraint language Γ is monotone (Γ ⊆iM_{2}), then the problem NSol(Γ) can be solved in polynomial time.
Proof
We have iM_{2} = 〈Γ^{′}〉 where Γ^{′} := {[x → y],[¬x],[x]}. Thus Corollary 13 and Γ ⊆〈Γ^{′}〉 imply NSol(Γ) ≤_{AP}NSol(Γ^{′}). The relations [¬x] and [x] determine a unique value for the respective variable, therefore we can eliminate unit clauses and propagate the values. If a contradiction occurs, we reject the input. It thus remains to consider formulas φ containing only binary implicative clauses of type x → y.
Let V be the set of variables in φ, and for i ∈{0,1} let V_{i} = {x ∈ V ∣m(x) = i} be the set of variables mapped to value i by the assignment m. We transform the formula φ into a linear programming problem as follows. For each clause x → y we add the inequality y ≥ x, and for each variable x ∈ V we add the constraints x ≥ 0 and x ≤ 1. As linear objective function we use \(f(\boldsymbol {x}) = \sum _{x \in V_{0}} x + \sum _{x \in V_{1}} (1 - x)\). For an arbitrary solution m^{′}, it counts the number of variables whose values differ between m and m^{′}, i.e., f(m^{′}) = hd(m, m^{′}). This way we obtain the (integer) linear programming problem (f, Ax ≥ b), where A is a totally unimodular matrix and b is an integral column vector.
The rows of A consist of the lefthand sides of inequalities y − x ≥ 0, x ≥ 0, and − x ≥− 1, which constitute the system Ax ≥b. Every entry in A is 0, + 1, or − 1. Every row of A has at most two nonzero entries. For the rows with two entries, one entry is + 1, the other is − 1. According to Condition (iv) in Theorem 19.3 in [30], this is a sufficient condition for A being totally unimodular. As A is totally unimodular and b is an integral vector, f has integral minimum points, and one of them can be computed in polynomial time (see e.g. [30, Chapter 19]).
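The construction of A, together with a brute-force sanity check of total unimodularity on a tiny instance, can be sketched as follows (checking all square submatrices is exponential and only meant as an illustration; an actual implementation would rely on the cited structural condition and an LP solver):

```python
from itertools import combinations

def constraint_matrix(implications, variables):
    """Rows of A in the system Ax >= b of Proposition 18:
    y - x >= 0 for each clause x -> y, plus x >= 0 and -x >= -1
    for each variable (data layout and names are illustrative)."""
    idx = {v: i for i, v in enumerate(variables)}
    rows = []
    for x, y in implications:            # y - x >= 0
        r = [0] * len(variables)
        r[idx[y]], r[idx[x]] = 1, -1
        rows.append(r)
    for v in variables:
        r = [0] * len(variables); r[idx[v]] = 1; rows.append(r)    # x >= 0
        r = [0] * len(variables); r[idx[v]] = -1; rows.append(r)   # -x >= -1
    return rows

def det(mat):
    """Integer determinant by cofactor expansion (tiny matrices only)."""
    if len(mat) == 1:
        return mat[0][0]
    return sum((-1) ** j * mat[0][j]
               * det([row[:j] + row[j + 1:] for row in mat[1:]])
               for j in range(len(mat)))

def totally_unimodular(rows):
    """Check that every square submatrix has determinant in {-1, 0, 1}.
    Exponential in the matrix size; a sanity check, not an algorithm."""
    m, n = len(rows), len(rows[0])
    for k in range(1, min(m, n) + 1):
        for ri in combinations(range(m), k):
            for ci in combinations(range(n), k):
                sub = [[rows[r][c] for c in ci] for r in ri]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# The matrix for the chain x -> y -> z passes the check, as Theorem 19.3
# of [30] guarantees.
A = constraint_matrix([("x", "y"), ("y", "z")], ["x", "y", "z"])
assert totally_unimodular(A)
```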
Hard Cases
We start off with an easy corollary of Schaefer’s dichotomy.
Proposition 19
Let Γ be a finite set of Boolean relations. If iN_{2} ⊆〈Γ〉, then it is NP-complete to decide whether a feasible solution exists for NSol(Γ); otherwise, NSol(Γ) ∈ polyAPX.
Proof
If iN_{2} ⊆〈Γ〉 holds, checking the existence of feasible solutions for NSol(Γ)-instances is NP-hard by Schaefer’s theorem [27].
Let (φ, m) be an instance of NSol(Γ). We give an n-approximate algorithm for the other cases, where n denotes the number of variables in φ. If m satisfies φ, return m. Otherwise compute an arbitrary solution m^{′} of φ, which can be done in polynomial time by Schaefer’s theorem. This algorithm is n-approximate: If m satisfies φ, the algorithm returns the optimal solution; otherwise we have OPT(φ, m) ≥ 1 and hd(m, m^{′}) ≤ n, hence the answer m^{′} of the algorithm is n-approximate.
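The n-approximate algorithm is only a few lines; here `satisfies` and `find_model` stand in for the polynomial-time procedures guaranteed by Schaefer's theorem for the fixed language Γ and are assumptions of this sketch:

```python
def nsol_napprox(satisfies, find_model, m):
    """The n-approximate algorithm of Proposition 19, sketched.

    `satisfies` decides whether an assignment is a model; `find_model`
    returns an arbitrary model.  If m is a model it is optimal;
    otherwise OPT >= 1 while any model is within distance n, so the
    returned model is n-approximate.
    """
    if satisfies(m):
        return m
    return find_model()
```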
APX-Complete Cases
We start with reductions from the optimization version of vertex cover. Since the relation [x ∨ y] is a straightforward Boolean encoding of vertex cover, we immediately get the following result.
Proposition 20
NSol(Γ) is APX-hard for every constraint language Γ satisfying \({\text {iS}_{0}^{2}} \subseteq \langle {\Gamma }\rangle \) or \({\text {iS}_{1}^{2}} \subseteq \langle {\Gamma }\rangle \).
Proof
We have \({\text {iS}_{0}^{2}} = \langle {\{[x\lor y]\}}\rangle \) and \({\text {iS}_{1}^{2}} = \langle {\{[\neg x \lor \neg y]\}}\rangle \). We discuss the former case, the latter one being symmetric and provable from the first one by Proposition 16.
We encode VertexCover into NSol({[x ∨ y]}). For each edge {x, y} ∈ E of a graph G = (V, E) we add the clause (x ∨ y) to the formula φ_{G}. Every model m^{′} of φ_{G} yields a vertex cover {v ∈ V ∣ m^{′}(v) = 1}, and conversely, the characteristic function of any vertex cover satisfies φ_{G}. Moreover, we choose m = 0. Then hd(0, m^{′}) is minimal if and only if the number of 1s in m^{′} is minimal, i.e., if m^{′} is a minimal model of φ_{G}, i.e., if m^{′} represents a minimum vertex cover of G. Since VertexCover is APX-complete (see e.g. [3]) and NSol({[x ∨ y]}) ≤_{AP} NSol(Γ) (see Corollary 13), the result follows.
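The encoding can be checked on a toy graph; brute force replaces the general NSol machinery, and the helper name is ours.

```python
from itertools import product

# Every edge {u, v} becomes a clause (u or v); the satisfying assignment
# nearest to m = 0 is the characteristic vector of a minimum vertex cover.
def nearest_solution_to_zero(edges, n):
    models = [c for c in product((0, 1), repeat=n)
              if all(c[u] or c[v] for u, v in edges)]
    return min(models, key=sum)      # hd(0, m') = number of 1s in m'

# Path graph 0 - 1 - 2: the minimum vertex cover is {1}.
cover = nearest_solution_to_zero([(0, 1), (1, 2)], 3)
# cover == (0, 1, 0)
```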
Proposition 21
We have NSol(Γ) ∈ APX for constraint languages Γ ⊆ iD_{2}.
Proof
Γ^{′} := {[x ⊕ y],[x → y]} is a base of iD_{2}. By Corollary 13 it suffices to show that NSol(Γ^{′}) is in APX. Let (φ, m) be an instance of this problem. Feasibility for φ can be encoded as an integer program as follows: Every constraint x ⊕ y induces an equation x + y = 1, and every constraint x → y an inequality x ≤ y. If we restrict all variables to {0,1} by the appropriate inequalities, an assignment m^{′} satisfies φ if and only if it satisfies the linear system. As objective function we use \(f(\boldsymbol {x}):=\sum _{x\in V_{0}} x + \sum _{x\in V_{1}} (1-x)\), where V_{i} is the set of variables mapped to i by m. Clearly, for every solution m^{′} we have f(m^{′}) = hd(m, m^{′}). The 2-approximation algorithm from [17] for integer linear programs in which every inequality contains at most two variables completes the proof.
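A quick brute-force check, on a small example, that the Boolean models of φ and the 0/1 solutions of the linear system coincide; the 2-approximation algorithm of [17] itself is not reproduced here.

```python
from itertools import product

# phi = (x0 xor x1) and (x1 -> x2), encoded as x0 + x1 = 1 and x1 <= x2
def sat_phi(c):
    return bool(c[0] ^ c[1]) and (not c[1] or bool(c[2]))

def sat_linear(c):
    return c[0] + c[1] == 1 and c[1] <= c[2]

# The two feasibility notions agree on every 0/1 assignment.
assert all(sat_phi(c) == sat_linear(c) for c in product((0, 1), repeat=3))
```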
Proposition 22
We have NSol(Γ) ∈ APX for constraint languages \({\Gamma }\subseteq \text {iS}_{00}^{\ell }\) with ℓ ≥ 2.
Proof
Γ^{′} := {[x_{1} ∨ ⋯ ∨ x_{ℓ}],[x → y],[¬x],[x]} is a base of \(\text {iS}_{00}^{\ell }\). By Corollary 13 it suffices to show that NSol(Γ^{′}) is in APX. Let (φ, m) be an instance of this problem. We use an approach similar to the one for the corresponding case in [22], again writing φ as an integer program. We write constraints x_{1} ∨ ⋯ ∨ x_{ℓ} as inequalities x_{1} + ⋯ + x_{ℓ} ≥ 1, constraints x → y as x ≤ y, ¬x as x = 0, and x as x = 1. Moreover, we add x ≥ 0 and x ≤ 1 for each variable x. It is easy to check that the feasible Boolean solutions of φ and of the linear system coincide. As objective function we use \(f(\boldsymbol {x}):=\sum _{x\in V_{0}} x + \sum _{x\in V_{1}} (1-x)\), where V_{i} is the set of variables mapped to i by m. Clearly, for every solution m^{′} we have f(m^{′}) = hd(m, m^{′}). Therefore it suffices to approximate the optimal solution of the integer linear program.
To this end, let m^{″} be a (generally noninteger) solution to the relaxation of the linear program, which can be computed in polynomial time. We construct m^{′} by setting m^{′}(x) = 0 if m^{″}(x) < 1/ℓ and m^{′}(x) = 1 if m^{″}(x) ≥ 1/ℓ. As ℓ ≥ 2, we get hd(m, m^{′}) = f(m^{′}) ≤ ℓf(m^{″}) ≤ ℓ ⋅OPT(φ, m). It is easy to check that m^{′} is a feasible solution, which completes the proof.
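The threshold rounding can be sketched as follows; the fractional point is supplied by hand instead of an LP solver, and the function names are ours.

```python
# f counts disagreements with the given assignment m (here f also accepts
# fractional points, as needed for the relaxation).
def f(m, cand):
    return sum(c if mv == 0 else 1 - c for mv, c in zip(m, cand))

# Threshold rounding from the proof: values below 1/ell become 0.
def round_at_threshold(frac, ell):
    return [0 if v < 1.0 / ell else 1 for v in frac]

# Fractional optimum of the relaxation for (x1 or x2 or x3) with m = 0.
ell, m, frac = 3, [0, 0, 0], [1 / 3, 1 / 3, 1 / 3]
rounded = round_at_threshold(frac, ell)      # [1, 1, 1], still feasible
assert sum(rounded) >= 1                     # the clause stays satisfied
assert f(m, rounded) <= ell * f(m, frac)     # cost grows at most by factor ell
```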
NearestCodeword-Complete Cases
This section essentially uses the facts that MinOnes is NearestCodeword-complete for the coclone iL_{2} and that it is a special case of NSol. The following result was stated by Khanna et al. for completeness via A-reductions [22, Theorem 2.14]. A closer look at the proof reveals that it also holds for the stricter notion of completeness via AP-reductions that we use. In this respect the proofs of Propositions 23 and 27 spell out the missing details from [8, Propositions 15 and 18].
Proposition 23
MinOnes(Γ) is NearestCodeword-complete via AP-reductions for constraint languages Γ satisfying 〈Γ〉 = iL_{2}.
Proof
According to [22, Lemma 8.13], MinOnes(Γ) is NearestCodeword-hard for iL ⊆ 〈Γ〉. This proof uses AP-reductions, i.e., it shows NearestCodeword ≤_{AP} MinOnes(Γ).
Regarding the other direction, MinOnes(Γ) ≤_{AP} NearestCodeword, we first observe that \(\text {odd}^{3}=\{(a_{1},a_{2},a_{3})\in \{0,1\}^{3}\mid \sum _i a_i~\text {odd}\}\) and \(\text {even}^{3}=\{(a_{1},a_{2},a_{3})\in \{0,1\}^{3}\mid \sum _i a_i~\text {even}\}\) perfectly implement every constraint in iL_{2}, i.e., 〈{odd^{3},even^{3}}〉 = iL_{2}, as shown in [22, Lemma 7.6]. Therefore, for Γ ⊆ iL_{2}, the problem WeightedMinOnes(Γ) AP-reduces to WeightedMinOnes({odd^{3},even^{3}}) [22, Lemma 3.9]. The latter problem AP-reduces to WeightedMinCSP({odd^{3},even^{3},[¬x]}) [22, Lemma 8.1], which further AP-reduces to WeightedMinCSP({odd^{3},even^{3}}) because of [¬x] = [even^{3}(x, x, x)]. In total, WeightedMinOnes(Γ) AP-reduces to WeightedMinCSP({odd^{3},even^{3}}). We conclude by observing that MinOnes is a particular case of WeightedMinOnes and that NearestCodeword is the same as WeightedMinCSP({odd^{3},even^{3}}), yielding MinOnes(Γ) ≤_{AP} NearestCodeword.
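As a sanity check for the identity used in the last step, identifying all three coordinates of even^3 yields exactly the relation [¬x]:

```python
from itertools import product

# even^3: triples with an even coordinate sum
even3 = {t for t in product((0, 1), repeat=3) if sum(t) % 2 == 0}

# even^3(x, x, x) forces 3x to be even, i.e., x = 0, which is [not x]
assert {x for x in (0, 1) if (x, x, x) in even3} == {0}
```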
Lemma 24
We haveMinOnes(Γ) ≤_{AP}NSol(Γ) for any constraint language Γ.
Proof
MinOnes(Γ) is the special case of NSol(Γ) where m is the 0-assignment.
Corollary 25
NSol(Γ) is NearestCodeword-hard for constraint languages Γ satisfying iL ⊆ 〈Γ〉.
Proof
Γ^{′} := {even^{4},[x],[¬x]} is a base of iL_{2}. By Proposition 23, MinOnes(Γ^{′}) is NearestCodeword-complete. By Lemma 24, MinOnes(Γ^{′}) reduces to NSol(Γ^{′}). By Proposition 14, NSol(Γ^{′}) is AP-equivalent to NSol({even^{4}}). Finally, because of even^{4} ∈ iL ⊆ 〈Γ〉 and Corollary 13, NSol({even^{4}}) reduces to NSol(Γ).
Proposition 26
We have NSol(Γ) ≤_{AP} MinOnes({even^{4},[¬x],[x]}) for constraint languages Γ ⊆ iL_{2}.
Proof
Γ^{′} := {even^{4},[¬x],[x]} is a base of iL_{2}. By Corollary 13 it suffices to show NSol(Γ^{′}) ≤_{AP} MinOnes(Γ^{′}).
We proceed by reducing NSol(Γ^{′}) to a subproblem of NSol_{pp}(Γ^{′}) in which only instances (φ, 0) are considered. Then, using Proposition 7 and Remark 8, this reduces to a subproblem of NSol(Γ^{′}) with the same restriction on the assignments, which is exactly MinOnes(Γ^{′}). Note that [x ⊕ y] = [∃z∃z^{′}(even^{4}(x, y, z, z^{′}) ∧ ¬z ∧ z^{′})], so we can freely use [x ⊕ y] in any Γ^{′}-formula. Let the formula φ and the assignment m be an instance of NSol(Γ^{′}). We copy all clauses of φ to φ^{′}. For each variable x of φ with m(x) = 1, we take a new variable x^{′} and add the constraint x ⊕ x^{′} to φ^{′}. Moreover, we existentially quantify x. Clearly, there is a bijection I between the satisfying assignments of φ and those of φ^{′}: For every solution s of φ we get a solution I(s) of φ^{′} by setting, for each x^{′} introduced in the construction of φ^{′}, the value I(s)(x^{′}) to the complement of s(x). Moreover, we have hd(m, s) = hd(0, I(s)). This yields a trivial AP-reduction with α = 1.
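The essence of the bijection I can be checked directly: complementing every coordinate on which m is 1 maps m to 0 and preserves Hamming distances. The even^4-based implementation of the xor constraints and the quantification of the original coordinates are omitted in this sketch.

```python
from itertools import product

# I(s): complement each coordinate on which m takes the value 1.
def image(s, m):
    return tuple(1 - v if mv else v for v, mv in zip(s, m))

def hd(a, b):
    return sum(x != y for x, y in zip(a, b))

# hd(m, s) = hd(0, I(s)) for all assignments s
m = (1, 0, 1)
assert all(hd(m, s) == hd((0, 0, 0), image(s, m))
           for s in product((0, 1), repeat=3))
```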
MinHornDeletion-Complete Cases
Proposition 27 (Khanna et al. [22])
The problems MinOnes({x ∨ y ∨ ¬z, x, ¬x}) and WeightedMinOnes({x ∨ y ∨ ¬z, x ∨ y}) are MinHornDeletion-complete via AP-reductions.
Proof
These results are stated in [22, Theorem 2.14] for completeness via A-reductions. The actual proof in [22, Lemma 8.7 and Lemma 8.14], however, uses AP-reductions, hence the results also hold for our stricter notion of completeness.
Lemma 28
NSol({x ∨ y ∨ ¬z}) ≤_{AP} WeightedMinOnes({x ∨ y ∨ ¬z, x ∨ y}).
Proof
Let the formula φ and the assignment m over the variables \(x_{1}, \ldots ,x_{n}\) be an instance of NSol({x ∨ y ∨ ¬z}). Let V_{1} be the set of variables x_{i} with m(x_{i}) = 1. We construct a {x ∨ y ∨ ¬z, x ∨ y}-formula φ^{′} by adding to φ, for each x_{i} ∈ V_{1}, the constraint \(x_{i}\lor x_{i}^{\prime }\), where \(x_{i}^{\prime }\) is a new variable. We set the weights of the variables of φ^{′} as follows: Every x_{i} ∈ V_{1} gets weight w(x_{i}) = 0, and all other variables get weight 1. To each satisfying assignment m^{′} of φ^{′} we associate the assignment m^{″}, the restriction of m^{′} to the variables of φ. This construction is an AP-reduction.
Note that m^{″} is feasible if m^{′} is. Let m^{′} be an r-approximation of OPT(φ^{′}). Note that whenever m^{′}(x_{i}) = 0 for some x_{i} ∈ V_{1}, then \(m^{\prime }(x_{i}^{\prime })= 1\). Conversely, we may assume that whenever m^{′}(x_{i}) = 1 for x_{i} ∈ V_{1}, then \(m^{\prime }(x_{i}^{\prime }) = 0\); if this is not the case, we can change m^{′} accordingly, decreasing its weight. It follows that w(m^{′}) = n_{0} + n_{1}, where \(n_{0} = |\{x_{i}\in V_{1}\mid m^{\prime }(x_{i})= 0\}|\) and \(n_{1} = |\{x_{i}\notin V_{1}\mid m^{\prime }(x_{i})= 1\}|\),
which means that w(m^{′}) equals hd(m, m^{″}). Analogously, any model s ∈ [φ] can be extended to a model m^{′}∈ [φ^{′}] by putting \(m^{\prime }(x_{i}^{\prime }) = 1\) if x_{i} ∈ V_{1} and s(x_{i}) = 0, and \(m^{\prime }(x_{i}^{\prime }) = 0\) for the remaining x_{i} ∈ V_{1}; thereby w(m^{′}) = hd(m, s). Consequently, the optima in both problems correspond, that is, we get OPT(φ^{′}) = OPT(φ, m). Hence we deduce hd(m, m^{″}) = w(m^{′}) ≤ r OPT(φ^{′}) = r OPT(φ, m).
Proposition 29
For every dual Horn constraint language Γ ⊆ iV_{2} we have the reduction NSol(Γ) ≤_{AP} WeightedMinOnes({x ∨ y ∨ ¬z, x ∨ y}).
Proof
Since {x ∨ y ∨ ¬z, x, ¬x} is a base of iV_{2}, by Corollary 13 it suffices to prove the reduction NSol({x ∨ y ∨ ¬z, x, ¬x}) ≤_{AP} WeightedMinOnes({x ∨ y ∨ ¬z, x ∨ y}). To this end, first reduce NSol({x ∨ y ∨ ¬z, x, ¬x}) to NSol({x ∨ y ∨ ¬z}) by Proposition 14 and then apply Lemma 28.
Proposition 30
NSol(Γ) is MinHornDeletion-hard for finite constraint languages Γ with iV_{2} ⊆ 〈Γ〉.
Proof
For Γ^{′} := {x ∨ y ∨ ¬z, x, ¬x} we have MinHornDeletion ≡_{AP} MinOnes(Γ^{′}) by Proposition 27. Now MinOnes(Γ^{′}) ≤_{AP} NSol(Γ^{′}) ≤_{AP} NSol(Γ) follows from Lemma 24 and Corollary 13, using the assumption Γ^{′} ⊆ iV_{2} ⊆ 〈Γ〉.
Poly-APX-Hardness
Proposition 31
The problem NSol(Γ) is poly-APX-hard for constraint languages Γ satisfying iN ⊆ 〈Γ〉 ⊆ iI_{0} or iN ⊆ 〈Γ〉 ⊆ iI_{1}.
Proof
The constraint language Γ_{1} := {even^{4},[x → y],[x]} is a base of iI_{1}. MinOnes(Γ_{1}) is poly-APX-hard by Theorem 2.14 of [22] and reduces to NSol(Γ_{1}) by Lemma 24. Since [x → y] = [dup^{3}(x, y,1)] = [∃z(dup^{3}(x, y, z) ∧ z)], as well as 〈{even^{4}}〉 = iL, 〈{dup^{3}}〉 = iN, and iL ⊆ iN, we have the reductions NSol(Γ_{1}) ≤_{AP} NSol({even^{4},dup^{3},[x]}) ≤_{AP} NSol({dup^{3},[x]})
by Corollary 13. The problem of finding feasible solutions of NSol(Γ), where iN ⊆ 〈Γ〉 ⊆ iI_{0} or iN ⊆ 〈Γ〉 ⊆ iI_{1}, is polynomial-time solvable. Indeed, such a Γ is 0-valid or 1-valid, therefore the all-zero or the all-one tuple is always guaranteed to be a feasible solution. Therefore Proposition 14 implies NSol({dup^{3},[x]}) ≡_{AP} NSol({dup^{3}}); the latter problem reduces to NSol(Γ) by dup^{3} ∈ iN ⊆ 〈Γ〉 and Corollary 13.
Finding Another Solution Closest to the Given One
In this section we study the optimization problem NearestOtherSolution. We first consider the polynomial-time cases and then the cases of higher complexity.
Polynomial-Time Cases
Since we cannot take advantage of clone closure, we must proceed differently. We use the following result based on a theorem by Baker and Pixley [5].
Proposition 32 (Jeavons et al. [19])
Every bijunctive constraint R(x_{1},…,x_{n}) is equivalent to the conjunction \(\bigwedge _{1 \leq i \leq j \leq n} R_{ij}(x_{i},x_{j})\), where R_{ij} is the projection of R to the coordinates i and j.
Proposition 33
If Γ is bijunctive (Γ ⊆ iD_{2}), then NOSol(Γ) is in PO.
Proof
According to Proposition 32 we may assume that the formula φ is a conjunction of binary atoms R(x, y) and unary constraints R(x, x) of the form [x] or [¬x].
Unary constraints fix the value of the constrained variable and can be eliminated by propagating the value to the other clauses. For each of the remaining variables, x, we attempt to construct a model m_{x} of φ with m_{x}(x)≠m(x) such that hd(m_{x},m) is minimal among all models with this property. This can be done in polynomial time as described below. If the construction of m_{x} fails for every variable x, then m is the sole model of φ and the problem is not solvable. Otherwise choose one of the variables x for which hd(m_{x},m) is minimal and return m_{x} as second solution m^{′}.
It remains to describe the computation of m_{x}. Initially we set m_{x}(x) := 1 − m(x) and m_{x}(y) := m(y) for all variables y ≠ x, and mark x as flipped. If m_{x} satisfies all atoms, we are done. Otherwise let R(u, v) be an atom falsified by m_{x}; since m satisfies R(u, v), at least one of u and v is marked as flipped. If both u and v are marked as flipped, the construction fails: a model m_{x} with the property m_{x}(x) ≠ m(x) does not exist. Otherwise R(u, v) contains a unique variable not marked as flipped, say v. Set m_{x}(v) := 1 − m(v), mark v as flipped, and repeat this step. This process terminates after flipping every variable at most once.
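A minimal sketch of this flipping procedure, with atoms given as triples (u, v, set of allowed pairs); the representation and the name `flip_model` are ours, and correctness is only illustrated on toy instances.

```python
# Compute a model m_x with m_x[x] != m[x] by flipping x and propagating
# forced flips; each variable is flipped at most once.  Returns None if
# no such model exists.
def flip_model(atoms, m, x):
    mx, flipped = list(m), {x}
    mx[x] = 1 - mx[x]
    while True:
        bad = next(((u, v, R) for u, v, R in atoms
                    if (mx[u], mx[v]) not in R), None)
        if bad is None:
            return tuple(mx)
        u, v, R = bad
        if u in flipped and v in flipped:
            return None                  # no model with x flipped exists
        w = v if u in flipped else u     # the unique unflipped variable
        mx[w] = 1 - mx[w]
        flipped.add(w)

impl = [(0, 1, {(0, 0), (0, 1), (1, 1)})]    # the single atom x0 -> x1
# Flipping x0 in m = (0, 0) forces x1 as well; flipping x1 forces nothing.
assert flip_model(impl, (0, 0), 0) == (1, 1)
assert flip_model(impl, (0, 0), 1) == (0, 1)
```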
Proposition 34
If \({\Gamma } \subseteq \text {iS}_{00}^{k}\) or \({\Gamma } \subseteq \text {iS}_{10}^{k}\) for some k ≥ 2, then NOSol(Γ) is in PO.
Proof
We perform the proof only for \(\text {iS}_{00}^{k}\). Proposition 16 implies the same result for \(\text {iS}_{10}^{k}\).
The coclone \(\text {iS}_{00}^{k}\) is generated by Γ^{′} := {or^{k},[x → y],[x],[¬x]}. In fact, Γ^{′} is even a plain base of \(\text {iS}_{00}^{k}\) [12], meaning that every relation in Γ can be expressed as a conjunctive formula over relations in Γ^{′}, without existential quantification or explicit equalities. Hence we may assume that φ is given as a conjunction of Γ^{′}atoms.
Note that x ∨ y is a polymorphism of Γ^{′}, i.e., for any two solutions m_{1}, m_{2} of φ their disjunction m_{1} ∨ m_{2} – defined by (m_{1} ∨ m_{2})(x) = m_{1}(x) ∨ m_{2}(x) for all x – is also a solution of φ. Therefore we get the optimal solution m^{′} of an instance (φ, m) by flipping in m either some ones to zeros or some zeros to ones, but not both. To see this, assume the optimal solution m^{′} flips both ones and zeros. Then m^{′}∨ m is a solution of φ that is closer to m than m^{′}, which contradicts the optimality of m^{′}.
Unary constraints fix the value of the constrained variable and can be eliminated by propagating the value to the other clauses (including removal of disjunctions containing implied positive literals and shortening disjunctions containing implied negative literals). This propagation does not lead to contradictions since m is a model of φ. For each of the remaining variables, x, we attempt to construct a model m_{x} of φ with m_{x}(x)≠m(x) such that hd(m_{x},m) is minimal among all models with this property. This can be done in polynomial time as described below. If the construction of m_{x} fails for every variable x, then m is the sole model of φ and the problem is not solvable. Otherwise choose one of the variables x for which hd(m_{x},m) is minimal and return m_{x} as second solution m^{′}.
It remains to describe the computation of m_{x}. If m(x) = 0, we flip x to 1 and propagate this change iteratively along the implications, i.e., if x → y is a constraint of φ and m(y) = 0, we flip y to 1 and iterate. This kind of flip never invalidates any disjunction; it could only contradict conditions imposed by negative unit clauses, and since their values were propagated before, such a contradiction would be immediate. For m(x) = 1 we proceed dually, flipping x to 0, removing x from disjunctions if applicable, and propagating this change backward along implications y → x where m(y) = 1. This can lead to immediate inconsistencies with already inferred unit clauses, or it can produce contradictions through empty disjunctions, or it can create the necessity for further flips from 0 to 1 in order to obtain a solution (because in some disjunctive atom all variables with value 1 have been flipped, and thus removed). In all three cases the resulting assignment does not satisfy φ, and there is no model that differs from m in x and can be obtained by flipping in one direction only. Otherwise the resulting assignment satisfies φ, and this is the desired m_{x}. Our process terminates after flipping every variable at most once, since we flip only in one direction (from zeros to ones or from ones to zeros). Thus m_{x} is computable in polynomial time.
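The m(x) = 0 case can be sketched as forward propagation along implications; the helper name `flip_up` is ours, and the sketch assumes unit clauses have already been eliminated.

```python
# Flip x from 0 to 1 and propagate forward along implications (u -> w);
# disjunctions stay satisfied since we only create additional ones.
def flip_up(implications, m, x):
    mx = list(m)
    queue = [x]
    while queue:
        v = queue.pop()
        if mx[v] == 0:
            mx[v] = 1
            queue.extend(w for u, w in implications if u == v)
    return tuple(mx)

# x0 -> x1 and x1 -> x2 with m = (0, 0, 0): flipping x0 forces x1 and x2.
assert flip_up([(0, 1), (1, 2)], (0, 0, 0), 0) == (1, 1, 1)
```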
Hard Cases
Proposition 35
Let Γ be a constraint language. If iI_{1} ⊆ 〈Γ〉 or iI_{0} ⊆ 〈Γ〉 holds, then it is NP-complete to decide whether a feasible solution for NOSol(Γ) exists. Otherwise, NOSol(Γ) ∈ poly-APX.
Proof
Finding a feasible solution to NOSol(Γ) corresponds exactly to the decision problem AnotherSAT(Γ), which is NP-hard if and only if iI_{1} ⊆ 〈Γ〉 or iI_{0} ⊆ 〈Γ〉, according to Juban [20]. If AnotherSAT(Γ) is polynomial-time decidable, we can always find a feasible solution for NOSol(Γ) if one exists. Obviously, every feasible solution is an n-approximation of the optimal solution, where n is the number of variables in the input.
Tightness Results
It will be convenient to consider the following decision problem asking for another solution that is not the complement, i.e., that does not have maximal distance from the given one.
Problem: AnotherSAT_{nc}(Γ)
Input: A conjunctive formula φ over relations from Γ and an assignment m satisfying φ.
Question: Is there another satisfying assignment m^{′} of φ, different from m, such that hd(m, m^{′}) < n, where n is the number of variables in φ?
Remark 36
AnotherSAT_{nc}(Γ) is NP-complete for iI_{0} ⊆ 〈Γ〉 and iI_{1} ⊆ 〈Γ〉, since already AnotherSAT(Γ) is NP-complete in these cases, as shown in [20]. Moreover, AnotherSAT_{nc}(Γ) is polynomial-time decidable if Γ is Horn (Γ ⊆ iE_{2}), dual Horn (Γ ⊆ iV_{2}), bijunctive (Γ ⊆ iD_{2}), or affine (Γ ⊆ iL_{2}), for the same reason as AnotherSAT(Γ): For each variable x_{i} we flip the value m(x_{i}), substitute \(\overline {m}(x_{i})\) for x_{i}, and construct another satisfying assignment if one exists. Consider now the solutions obtained for the individual variables x_{i}. Either there is no solution for any variable, in which case AnotherSAT_{nc}(Γ) has no solution; or the only solutions obtained are the complement of m, in which case AnotherSAT_{nc}(Γ) has no solution either; or else we get a solution m^{′} with hd(m, m^{′}) < n, which is also a solution for AnotherSAT_{nc}(Γ). Hence, taking into account Proposition 38 below, we obtain a dichotomy result also for AnotherSAT_{nc}(Γ).
Note that AnotherSAT_{nc}(Γ) is not compatible with existential quantification. Let φ(y, x_{1},…,x_{n}) with model m be an instance of AnotherSAT_{nc}(Γ) and let m^{′} be a solution satisfying hd(m, m^{′}) < n + 1. Now consider the formula φ_{1}(x_{1},…,x_{n}) = ∃y φ(y, x_{1},…,x_{n}), obtained by existentially quantifying the variable y, and the tuples m_{1} and \(m^{\prime }_{1}\) obtained from m and m^{′} by omitting the first component. Both m_{1} and \(m^{\prime }_{1}\) are still solutions of φ_{1}, but we cannot guarantee \(\text {hd}(m_{1}, m^{\prime }_{1}) < n\). Hence we need the equivalent of Proposition 15 for this problem; its proof is analogous.
Proposition 37
The reduction AnotherSAT_{nc}(Γ^{′}) ≤_{m} AnotherSAT_{nc}(Γ) holds for all constraint languages Γ and Γ^{′} satisfying Γ^{′} ⊆ 〈Γ〉_{∧}.
Proposition 38
If a constraint language Γ satisfies 〈Γ〉 = iI or iN ⊆ 〈Γ〉 ⊆ iN_{2}, then AnotherSAT_{nc}(Γ) is NP-complete.
Proof
Containment in NP is clear; it remains to show hardness. Since the problem AnotherSAT_{nc} is not compatible with existential quantification, we cannot use clone theory, but have to consider the three coclones iN_{2}, iN, and iI separately and make use of minimal weak bases.
Case 〈Γ〉 = iN. Putting R := {000,101,110}, we present a reduction from the problem AnotherSAT({R}), which is NP-hard [20] as 〈{R}〉 = iI_{0}. The problem remains NP-complete if we restrict it to instances (φ, 0), since R is 0-valid and any given model m other than the constant 0-assignment admits the trivial solution m^{′} = 0. Thus we can perform a reduction from this restricted problem.
Consider the relation R_{iN} = {0000,1010,1100,1111,0101,0011}. Given a formula φ over R, we construct a formula ψ over R_{iN} by replacing every constraint R(x, y, z) with a new constraint R_{iN}(x, y, z, w), where w is a new global variable. Moreover, we set m to the constant 0-assignment. This construction is a many-one reduction from the restricted version of AnotherSAT({R}) to AnotherSAT_{nc}({R_{iN}}).
To see this, observe that the tuples in R_{iN} that have a 0 in the last coordinate are exactly those in R × {0}. Thus any solution of φ can be extended to a solution of ψ by assigning 0 to w. Conversely, we observe that any solution m^{′} of the AnotherSAT_{nc}({R_{iN}})-instance (ψ, 0) is different from 0 and 1. As R_{iN} is complementive, we may assume m^{′}(w) = 0. Then m^{′} restricted to the variables of φ solves the AnotherSAT({R})-instance (φ, 0).
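Both observations used here are finite checks and can be verified directly:

```python
# R and its minimal-weak-base companion R_iN from the proof
R = {(0, 0, 0), (1, 0, 1), (1, 1, 0)}
R_iN = {(0, 0, 0, 0), (1, 0, 1, 0), (1, 1, 0, 0),
        (1, 1, 1, 1), (0, 1, 0, 1), (0, 0, 1, 1)}

# The tuples of R_iN with last coordinate 0 are exactly R x {0} ...
assert {t for t in R_iN if t[3] == 0} == {r + (0,) for r in R}
# ... and R_iN is complementive (closed under complementation).
assert {tuple(1 - v for v in t) for t in R_iN} == R_iN
```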
Finally, observe that R_{iN} is a minimal weak base and Γ is a base of the coclone iN; therefore we have R_{iN} ∈ 〈Γ〉_{∧} by Theorem 1. Now the NP-hardness of AnotherSAT_{nc}(Γ) follows from that of AnotherSAT_{nc}({R_{iN}}) by Proposition 37.
Case 〈Γ〉 = iN_{2}. We give a reduction from AnotherSAT_{nc}({R_{iN}}), which is NP-hard by the previous case. By Theorem 1, 〈Γ〉_{∧} contains the relation \(R_{\text {iN}_{2}} = \{ m\overline {m}\mid {m \in R_{\text {iN}}}\}\). For an R_{iN}-formula φ(x_{1},…,x_{n}), we construct a corresponding \(R_{\text {iN}_{2}}\)-formula \(\psi (x_{1}, \ldots , x_{n}, x_{1}^{\prime }, \ldots , x_{n}^{\prime })\) by replacing every constraint R_{iN}(x, y, z, w) with a new constraint \(R_{\text {iN}_{2}}(x, y, z, w, x^{\prime }, y^{\prime }, z^{\prime }, w^{\prime })\). Assignments m for φ extend to assignments M for ψ by setting \(M(x^{\prime }):= \overline {m}(x)\). Conversely, assignments for ψ yield assignments for φ by restricting them to the variables in φ. Because every variable x_{1},…,x_{n} assigned by models of φ actually occurs in some R_{iN}-atom of φ and hence in some \(R_{\text {iN}_{2}}\)-atom of ψ, and because of the structure of \(R_{\text {iN}_{2}}\), any model of ψ distinct from M and \(\overline {M}\) restricts to a model of φ other than m or \(\overline {m}\). Consequently, this construction is again a reduction from \(\textsf {AnotherSAT}_{\textsf {nc}}(\{R_{\text {iN}}\})\) to \(\textsf {AnotherSAT}_{\textsf {nc}}(\{R_{\text {iN}_{2}}\})\), which in turn reduces to AnotherSAT_{nc}(Γ) by Proposition 37.
Case 〈Γ〉 = iI. We proceed as in the case 〈Γ〉 = iN, but use R_{iI} = {0000,0011,0101,1111} instead of R_{iN}, and {000,011,101} for R. Note that the R_{iI}-tuples with first coordinate 0 are exactly those in {0} × R. The relation R_{iI} is not complementive, but (as every variable assigned by any model of ψ occurs in some atomic R_{iI}-constraint) the only solution m^{′} with m^{′}(w) = 1 is the constant 1-assignment, which is ruled out by the requirement hd(m, m^{′}) < n. Hence we may again assume m^{′}(w) = 0.
Proposition 39
For a constraint language Γ satisfying 〈Γ〉 = iI or iN ⊆ 〈Γ〉 ⊆ iN_{2} and any ε > 0, there is no polynomial-time n^{1−ε}-approximation algorithm for NOSol(Γ), unless P = NP.
Proof
Assume there is a constant ε > 0 and a polynomial-time n^{1−ε}-approximation algorithm for NOSol(Γ). We show how to use this algorithm to solve AnotherSAT_{nc}(Γ) in polynomial time. Proposition 38 then completes the proof.
Let (φ, m) be an instance of AnotherSAT_{nc}(Γ) with n variables. If n = 1, then we reject the instance. Otherwise, we construct a new formula φ^{′} and a new assignment m^{′} as follows. Let k be the smallest integer greater than 1/ε. Choose a variable x of φ and introduce n^{k} − n new variables x^{i} for i = 1,…,n^{k} − n. For every i ∈{1,…,n^{k} − n} and every constraint R(y_{1},…,y_{ℓ}) in φ, such that x ∈{y_{1},…,y_{ℓ}}, construct a new constraint \(R({z_{1}^{i}}, \ldots , z_{\ell }^{i})\) by \({z_{j}^{i}} = x^{i}\) if y_{j} = x and \({z_{j}^{i}} = y_{j}\) otherwise; add all the newly constructed constraints to φ in order to get φ^{′}. Moreover, we extend m to a model of φ^{′} by setting m^{′}(x^{i}) = m(x). Now run the n^{1−ε}approximation algorithm for NOSol(Γ) on (φ^{′}, m^{′}). If the answer is \(\overline {m^{\prime }}\) then reject, otherwise accept.
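The padding construction above can be sketched as follows; constraints are abstracted to tuples of variable indices (the relation itself plays no role in the padding), and the helper name `pad` is ours.

```python
# Replicate the chosen variable x as n^k - n fresh copies, duplicating
# every constraint that contains x once per copy.
def pad(constraints, n, x, k):
    copies = range(n, n ** k)            # indices of the new copies of x
    extra = [tuple(c if v == x else v for v in con)
             for con in constraints if x in con for c in copies]
    return constraints + extra

# Two constraints over 3 variables; x = 0 occurs in one of them,
# so we add 3^2 - 3 = 6 duplicated constraints.
padded = pad([(0, 1), (1, 2)], 3, 0, 2)
assert len(padded) == 2 + 6
```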
We claim that the algorithm described above is a correct polynomialtime algorithm for the decision problem AnotherSAT_{nc}(Γ) when Γ is complementive. Polynomial runtime is clear. It remains to show its correctness. If the only solutions to φ are m and \(\overline {m}\), then, as n > 1, the only models of φ^{′} are m^{′} and \(\overline {m^{\prime }}\). Hence the approximation algorithm must answer \(\overline {m^{\prime }}\) and the output is correct. Now assume that there is a satisfying assignment m_{s} different from m and \(\overline {m}\). The relation [φ] is complementive, hence we may assume that m_{s}(x) = m(x). It follows that φ^{′} has a satisfying assignment \(m_{s}^{\prime }\) for which \(0<\text {hd}(m_{s}^{\prime }, m^{\prime })<n\) holds. But then the approximation algorithm must find a satisfying assignment m^{″} for φ^{′} with hd(m^{′}, m^{″}) < n ⋅ (n^{k})^{1−ε} = n^{k(1−ε)+ 1}. Since the inequality k > 1/ε holds, it follows that hd(m^{′}, m^{″}) < n^{k}. Consequently, m^{″} is not the complement of m^{′} and the output of our algorithm is again correct.
When Γ is not complementive but both 0-valid and 1-valid (〈Γ〉 = iI), we perform the expansion algorithm described above for each variable of the formula φ and reject if the result is the complement in every run. The runtime remains polynomial. If \([\varphi ] = \{m,\overline {m}\}\), then indeed every run results in the corresponding \(\overline {m^{\prime }}\), and we correctly reject. Otherwise, there is a model \(m_{s}\in [\varphi ]\smallsetminus \{m,\overline {m}\}\), so there is a variable x of φ with \(m_{s}(x)\neq \overline {m}(x)\), i.e., m_{s}(x) = m(x). For this instance (φ^{′}, m^{′}) the approximation algorithm does not return \(\overline {m^{\prime }}\), so we correctly accept.
MinDistance-Equivalent Cases
In this section we show that the affine coclones lead to problems equivalent to MinDistance. We thereby add the missing details to the rather superficial treatment of this matter in [6].
Lemma 40
For affine constraint languages Γ (Γ ⊆iL_{2}) we haveNOSol(Γ) ≤_{AP}MinDistance.
Proof
Let the formula φ and the satisfying assignment m be an instance of NOSol(Γ) over the variables x_{1},…,x_{n}. The input φ can be written as Ax = b, with m being a solution of this affine system. A tuple m^{′} is a solution of Ax = b if and only if it can be written as m^{′} = m + m_{0}, where m_{0} is a solution of Ax = 0. The Hamming distance is invariant under affine translations: we have hd(m^{′}, m) = hd(m^{′} + m^{″}, m + m^{″}) for any tuple m^{″}; in particular, for m^{″} = −m we obtain hd(m^{′}, m) = hd(m^{′}− m, 0). Therefore m^{′} ≠ m is a solution of Ax = b with minimal Hamming distance to m if and only if m_{0} = m^{′}− m is a nonzero solution of the homogeneous system Ax = 0 with minimum Hamming weight. Hence the problem NOSol(Γ) for affine languages Γ is equivalent to computing a minimum-weight nonzero solution of a homogeneous system, which is exactly the MinDistance problem.
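The correspondence can be illustrated on a toy system over GF(2); brute force replaces both optimization problems here, and the system is an assumption of this sketch.

```python
from itertools import product

def hd(a, b):
    return sum(x != y for x, y in zip(a, b))

# A toy affine system Ax = b over GF(2) with the given solution m.
A, b, m = [(1, 1, 0), (0, 1, 1)], (1, 1), (1, 0, 1)

def solves(x, rhs):
    return all(sum(a * v for a, v in zip(row, x)) % 2 == r
               for row, r in zip(A, rhs))

# Nearest other solution of Ax = b ...
nosol = min((x for x in product((0, 1), repeat=3) if x != m and solves(x, b)),
            key=lambda x: hd(m, x))
# ... has the same distance as the minimum weight of a nonzero kernel vector.
mindist = min((x for x in product((0, 1), repeat=3)
               if any(x) and solves(x, (0, 0))), key=sum)
assert hd(m, nosol) == sum(mindist)
```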
We need to express an affine sum of an even number of variables by means of the minimal weak base of each affine coclone. In the following lemma the existentially quantified variables are uniquely determined; hence the existential quantifiers serve only to hide superfluous variables and do not pose any of the problems mentioned before.
Lemma 41
For every \(n\in \mathbb {N}\), n ≥ 1, the constraint x_{1} ⊕ x_{2} ⊕ ⋯ ⊕ x_{2n} = 0 can be equivalently expressed by each of the following formulas:

1.
\(\exists y_{0},\ldots ,y_{n} \left (\begin {array}{@{}l@{}} y_{0} = 0\land y_{n} = 0\land {}\\ R_{\text {iL}}(y_{0},x_{1},x_{2},y_{1}) \land {}\\ R_{\text {iL}}(y_{1},x_{3},x_{4},y_{2}) \land \cdots \land R_{\text {iL}}(y_{n-1},x_{2n-1},x_{2n},y_{n}) \end {array} \right )\),

2.
\(\exists y_{0},\ldots ,y_{2n} \left (\begin {array}{l@{}l@{}} R_{\text {iL}_{0}}(y_{0},x_{1},y_{1},y_{0}) \land {}\\ R_{\text {iL}_{0}}(y_{1},x_{2},y_{2},y_{0}) \land \cdots \land R_{\text {iL}_{0}}(y_{2n-1},x_{2n},y_{2n},y_{2n}) \end {array} \right )\),

3.
\(\exists y_{0},\ldots ,y_{2n} \left (\begin {array}{l@{}l@{}} R_{\text {iL}_{1}}(y_{0},x_{1},y_{1},y_{0}) \land {}\\ R_{\text {iL}_{1}}(y_{1},x_{2},y_{2},y_{0}) \land \cdots \land R_{\text {iL}_{1}}(y_{2n-1},x_{2n},y_{2n},y_{2n}) \end {array} \right )\),

4.
∃y_{0},…,y_{n},z_{0},…,z_{n},w_{1},…,w_{2n}\(\left (\begin {array}{l@{}l@{}} y_{0} = 0 \land y_{n} = 0 \land {}\\ R_{\text {iL}_{3}}(y_{0},x_{1},x_{2},y_{1},z_{0},w_{1},w_{2},z_{1}) \land {}\\ R_{\text {iL}_{3}}(y_{1},x_{3},x_{4},y_{2},z_{1},w_{3},w_{4},z_{2}) \land \cdots \land {}\\ R_{\text {iL}_{3}}(y_{n-1},x_{2n-1},x_{2n},y_{n},z_{n-1},w_{2n-1},w_{2n},z_{n}) \end {array} \right )\),

5.
∃y_{0},…,y_{2n},z_{0},…,z_{2n},w_{1},…,w_{2n}\(\left (\begin {array}{l@{}l@{}} R_{\text {iL}_{2}}(y_{0},x_{1},y_{1},z_{0},w_{1},z_{1},y_{0},z_{0}) \land {}\\ R_{\text {iL}_{2}}(y_{1},x_{2},y_{2},z_{1},w_{2},z_{2},y_{0},z_{0}) \land \cdots \land {}\\ R_{\text {iL}_{2}}(y_{2n-1},x_{2n},y_{2n},z_{2n-1},w_{2n},z_{2n}, y_{2n},z_{2n}) \end {array} \right )\),
where the number of existentially quantified variables is linearly bounded in the length of the constraint. Note moreover that in each case any model of x_{1} ⊕ x_{2} ⊕ ⋯ ⊕ x_{2n} = 0 uniquely determines the values of the existentially quantified variables.
Proof
Write out the constraint relations following the existential quantifiers as (conjunctions of) equalities. From this, the uniqueness of the valuations of the existentially quantified variables is easy to see, and likewise that any model of \(\bigoplus _{i = 1}^{2n} x_{i} = 0\) also satisfies each of the formulas 1 to 5. Adding up the equalities behind the existential quantifiers shows the converse direction.
The following lemma shows that, for each coclone in the affine case, MinDistance is AP-equivalent to a restricted version of NOSol containing only constraints generating the minimal weak base.
Lemma 42
For each coclone \(\mathcal {B}\in \{\text {iL},\text {iL}_{0},\text {iL}_{1},\text {iL}_{2},\text {iL}_{3}\}\) we have \(\text {\textsf {MinDistance}}\le _{\text {AP}} \text {\textsf {NOSol}}(\{R_{\mathcal {B}},[\neg x]\})\).
Proof
Consider a coclone \(\mathcal {B}\in \{\text {iL},\text {iL}_{0},\text {iL}_{1},\text {iL}_{2},\text {iL}_{3}\}\) and a MinDistance-instance represented by a matrix \(A\in \mathbb {Z}_{2}^{k\times l}\). If one of the columns of A, say the ith, is zero, then the ith unit vector is an optimal solution to this instance with optimal value 1. Hence we assume from now on that no column is the zero vector.
Every row of A expresses the fact that a sum of n ≤ l variables equals zero. If n is odd, we extend this sum to one with n + 1 summands by introducing a new variable v, which we existentially quantify and confine to zero using a unary [¬x]-constraint. Then we replace the expanded sum by the existential formula from Lemma 41 corresponding to the coclone \(\mathcal {B}\) under consideration. This way we introduce, for every row, only linearly many new variables in l, and for any feasible solution of the MinDistance-problem the values of the existential variables needed to encode it are uniquely determined. Thus, taking the conjunction over all these formulas, we only have a linear growth in the size of the instance.
Next, we show how to deal with the existential quantifiers: First we transform the expression into prenex normal form, obtaining a formula ψ of the form \(\exists y_{1}{\cdots }\exists y_{p}\,\varphi (x_{1},\ldots ,x_{l},y_{1},\ldots ,y_{p})\) with quantifier-free φ,
which holds if and only if Ax = 0 for x = (x_{1},…,x_{l}). We use the same blow-up construction regarding x_{1},…,x_{l} as in Proposition 7 and Lemma 11 to make the influence of y_{1},…,y_{p} on the Hamming distance negligible. For this we put J := {1,…,t} and introduce new variables \({x_{i}^{j}}\) where 1 ≤ i ≤ l and j ∈ J. If u equals some x_{i}, we define its blow-up set to be \(B(u) = \{{x_{i}^{j}}\mid j\in J\}\); otherwise, for u ∈ {y_{1},…,y_{p}}, we set B(u) = {u}. Now for each atom R(u_{1},…,u_{q}) of φ we form the set of atoms \(\{R(u_{1}^{\prime },\dotsc ,u_{q}^{\prime }) \mid (u_{1}^{\prime },\dotsc ,u_{q}^{\prime })\in \prod _{i = 1}^{q} B(u_{i})\}\), and define the quantifier-free formula φ^{′} to be the conjunction of all atoms in the union of these sets. Note that this construction takes time polynomial in the size of ψ, and hence in the size of the input MinDistance-instance, whenever t is polynomial in the input size, because the atomic relations in ψ are at most octonary.
If s is an assignment of values to x making Ax = 0 true, we define \(s^{\prime }({x_{i}^{j}}):= s(x_{i})\) and extend this to a model of φ^{′} by assigning the uniquely determined values to y_{1},…,y_{p}. Let m^{′} be the model arising in this way from the zero assignment m. If s^{′} is any model of φ^{′}, then for every 1 ≤ i ≤ l, all j ∈ J and each atom R(u_{1},…,u_{q}) of φ, s^{′} satisfies, in particular, the conjunction \(R(u_{1}^{\prime },\dotsc ,u_{q}^{\prime })\land R(u_{1}^{\prime \prime },\dotsc ,u_{q}^{\prime \prime })\) where for u ∈{u_{1},…,u_{q}} we have u^{′} = u^{″} = u if u ∈{y_{1},…,y_{p}}, \(u^{\prime }={x_{i}^{1}}\), \(u^{\prime \prime } = {x_{i}^{j}}\) if u = x_{i}, and \(u^{\prime }=u^{\prime \prime }={x_{k}^{1}}\) if u = x_{k} for some \(k\in \{1,\dotsc ,l\}\smallsetminus \{i\}\). Hence, the vectors \((s^{\prime }({x_{1}^{1}}),\dotsc ,s^{\prime }({x_{l}^{1}}))\) and \((s^{\prime }({x_{1}^{1}}),\dotsc ,s^{\prime }(x_{i-1}^{1}),s^{\prime }({x_{i}^{j}}),s^{\prime }(x_{i + 1}^{1}),\dotsc , s^{\prime }({x_{l}^{1}}))\) both belong to the kernel of A, and so does their difference, which is \(s^{\prime }({x_{i}^{j}}) - s^{\prime }({x_{i}^{1}})\) times the i-th unit vector. As the i-th column of A is nonzero, we must have \(s^{\prime }({x_{i}^{j}}) = s^{\prime }({x_{i}^{1}})\). This also implies that if s^{′} is zero on \({x_{1}^{1}},\dotsc ,{x_{l}^{1}}\), then it must be zero on all \({x_{i}^{j}}\) (1 ≤ i ≤ l, j ∈ J) and thus coincide with m^{′}. Therefore, every feasible solution to the NOSol-instance (φ^{′}, m^{′}) yields a nonzero vector \((s^{\prime }({x_{1}^{1}}),\dotsc ,s^{\prime }({x_{l}^{1}}))\) in the kernel of A.
Further, if s^{′} is an r-approximation to an optimal solution, i.e., if hd(s^{′}, m^{′}) ≤ r ⋅OPT(φ^{′}, m^{′}), then, as \(s^{\prime }({x_{i}^{1}})=s^{\prime }({x_{i}^{j}})\) holds for all j ∈ J and all 1 ≤ i ≤ l, we obtain a solution to the MinDistance problem with Hamming weight w such that t ⋅ w ≤ hd(s^{′}, m^{′}). Also, any optimal solution to the MinDistance-instance can be extended to a not necessarily optimal solution s^{″} of (φ^{′}, m^{′}), for which one can bound the distance to m^{′} as follows: OPT(φ^{′}, m^{′}) ≤ hd(s^{″}, m^{′}) ≤ t ⋅OPT(A) + p. Combining these inequalities, we infer t ⋅ w ≤ r ⋅ t ⋅OPT(A) + r ⋅ p, i.e., w ≤ OPT(A) ⋅ (r + r/OPT(A) ⋅ p/t). We noted above that p is linearly bounded in the size of the input; thus choosing t quadratic in the size of the input bounds w by OPT(A)(r + o(1)), whence we have an AP-reduction with α = 1.
Lemma 43
For constraint languages Γ for which one can decide the existence of, and also find, a feasible solution of NOSol(Γ) in polynomial time, we have the reduction \(\text {\textsf {NOSol}}({\Gamma }) \le _{\text {AP}} \text {\textsf {NOSol}}(({\Gamma }\smallsetminus \{[x],[\neg x]\})\cup \{\approx \})\).
Proof
If an instance (φ, m) does not have feasible solutions, then it does not have nearest other solutions either. So we map it to the generic unsolvable instance ⊥. Consider now formulas φ over variables x_{1},…,x_{n} with models m where some feasible solution s_{0}≠m exists (and has been computed).
We can assume φ to be of the form \(\psi (x_{1},\dotsc ,x_{n}) \land \bigwedge _{i\in I_{1}} [x_{i}] \land \bigwedge _{i\in I_{0}} [\neg x_{i}]\), where ψ is a \(({\Gamma }\smallsetminus \{[x],[\neg x]\})\)-formula and I_{1},I_{0} ⊆{1,…,n}. We transform φ into \(\varphi ^{\prime }:= \psi (x_{1},\dotsc ,x_{n}) \land \bigwedge _{i\in I_{1}} x_{i} \approx y_{1} \land \bigwedge _{i\in I_{0}} x_{i} \approx z_{1} \land \bigwedge _{i = 1}^{1+n^{2}} (y_{i} \approx y_{1} \land z_{i} \approx z_{1})\) and extend models of φ to models of φ^{′} in the natural way. Conversely, if s^{′} is a model of φ^{′} with s^{′}(y_{i}) = 1 and s^{′}(z_{i}) = 0 for all 1 ≤ i ≤ 1 + n^{2}, then we can restrict it to a model of φ. Other models of φ^{′} are not optimal and are mapped to s_{0}. It is not hard to see that this provides an AP-reduction with α = 1.
Proposition 44
For every constraint language Γ satisfying iL ⊆〈Γ〉⊆iL_{2} we have MinDistance ≡_{AP} NOSol(Γ).
Proof
Since we lack compatibility with existential quantification, we deal with each co-clone \(\mathcal {B} = \langle {\Gamma }\rangle \) in the interval {iL,iL_{0},iL_{1},iL_{2},iL_{3}} separately. First we perform the reduction from Lemma 42 to \(\textsf {NOSol}(\{R_{\mathcal {B}}, [\neg x]\})\). We then need to find a reduction to \(\textsf {NOSol}(\{R_{\mathcal {B}}\})\), as this reduces to NOSol(Γ) by Proposition 15 and Theorem 1.
This is simple in the cases of iL_{0} and iL_{2}, since \([\neg x] = \{x\mid R_{\text {iL}_{0}}(x,x,x,x)\}\in \langle \{R_{\text {iL}_{0}}\}\rangle _{\land }\) (see Proposition 15) and \([\neg x] = \{x\mid \exists y(R_{\text {iL}_{2}}(x,x,x,y,y,y,x,y))\}\), where the existential quantifier can be handled by an AP-reduction with α = 1 which drops the quantifier and extends every model by assigning 1 to all previously existentially quantified variables. Thereby (optimal) distances between models do not change at all.
In the remaining cases, we reduce \(\textsf {NOSol}(\{R_{\mathcal {B}},[\neg x]\})\le _{\text {AP}} \textsf {NOSol}(\{R_{\mathcal {B}},[x],[\neg x]\})\) and the latter to \(\textsf {NOSol}(\{R_{\mathcal {B}}, \approx \})\) by Lemma 43, which then has to be reduced to \(\textsf {NOSol}(\{R_{\mathcal {B}}\})\). This is obvious for \(\mathcal {B} = \text {iL}\), where equality constraints x ≈ y can be expressed as R_{iL}(x, x, x, y) ∈〈{R_{iL}}〉_{∧} (cf. Proposition 15). For iL_{1} the same can be done using the formula \(\exists z(R_{\text {iL}_{1}}(x,y,z,z))\), where the existential quantifier can be removed by the same sort of simple AP-reduction with α = 1 as employed for iL_{2}. Finally, for iL_{3} we want to express equality as \(\exists u\exists v(R_{\text {iL}_{3}}(x,x,x,y,u,u,u,v))\). Here, in an AP-reduction, the quantifiers cannot simply be disregarded, as the values of the existentially quantified variables are not constant across all models. They are, however, uniquely determined by the values of x and y in each particular model, which allows us to perform a blow-up construction similar to the one in the proof of Lemma 42.
In more detail, given an \(\{R_{\text {iL}_{3}}, \approx \}\)-formula ψ containing variables x_{1},…,x_{l}, first note that each atomic \(R_{\text {iL}_{3}}\)-constraint \(R_{\text {iL}_{3}}(x_{1},\dotsc ,x_{8})\) can be represented as a linear system of equations, namely \(\oplus _{i = 1}^{4} x_{i} = 0\) and x_{i} ⊕ x_{i+ 4} = 1 for 1 ≤ i ≤ 4. Since equalities x_{i} ≈ x_{j} can be written as x_{i} ⊕ x_{j} = 0, the formula ψ is equivalent to an expression of the form Ax = b where x = (x_{1},…,x_{l}). Replacing each equality constraint by the existential formula above and bringing the result into prenex normal form, we get a formula ∃y_{1},…,y_{p}(φ(y_{1},…,y_{p},x_{1},…,x_{l})), which is equivalent to ψ and where φ is a conjunctive \(\{R_{\text {iL}_{3}}\}\)-formula. By construction any two models of φ that agree on x_{1},…,x_{l} must coincide. Thus, introducing variables \({x_{i}^{j}}\) for 1 ≤ i ≤ l and j ∈ J := {1,…,t} and defining φ^{′} in literally the same way as in the proof of Lemma 42, any model s of ψ yields a model s^{′} of φ^{′} by putting \(s^{\prime }({x_{i}^{j}}):=s(x_{i})\) for 1 ≤ i ≤ l and j ∈ J and extending this with the unique values for y_{1},…,y_{p} satisfying φ(y_{1},…,y_{p},x_{1},…,x_{l}). In this way we obtain a model m^{′} of φ^{′} from a given solution m of ψ. Besides, if s^{′} is any model of φ^{′}, then as in Lemma 42, the vectors \((s^{\prime }({x_{1}^{1}}),\dotsc ,s^{\prime }({x_{l}^{1}}))\) and \((s^{\prime }({x_{1}^{1}}),\dotsc ,s^{\prime }(x_{i-1}^{1}),s^{\prime }({x_{i}^{j}}),s^{\prime }(x_{i + 1}^{1}),\dotsc , s^{\prime }({x_{l}^{1}}))\) both satisfy ψ, and thus their difference is in the kernel of A. Since the variable x_{i} occurs in at least one of the atoms of ψ, the i-th column of A is nonzero, implying that \(s^{\prime }({x_{i}^{j}}) = s^{\prime }({x_{i}^{1}})\) for all j ∈ J and all 1 ≤ i ≤ l. Thus, any model s^{′}≠m^{′} of φ^{′} gives a model s≠m of ψ by defining \(s(x_{i}):= s^{\prime }({x_{i}^{1}})\) for all 1 ≤ i ≤ l.
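The linear-system view of an \(R_{\text{iL}_{3}}\)-atom described above can be checked directly; the following sketch (with a function name of our choosing) tests membership of an 8-tuple in \(R_{\text{iL}_{3}}\) via the stated equations \(x_{1}\oplus x_{2}\oplus x_{3}\oplus x_{4}=0\) and \(x_{i}\oplus x_{i+4}=1\):

```python
# Sketch: membership test for an R_{iL_3}-atom via the linear system stated
# in the text; the function name is ours.

def in_R_iL3(t):
    """t is an 8-tuple over {0,1}; R_{iL_3} is defined by
    x1+x2+x3+x4 = 0 (mod 2) and x_i + x_{i+4} = 1 (mod 2) for 1 <= i <= 4."""
    if sum(t[:4]) % 2 != 0:
        return False
    return all((t[i] + t[i + 4]) % 2 == 1 for i in range(4))

print(in_R_iL3((0, 0, 0, 0, 1, 1, 1, 1)))  # -> True
print(in_R_iL3((1, 0, 0, 0, 0, 1, 1, 1)))  # -> False (odd first block)
```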
The presented construction is an AP-reduction with α = 1, which can be proven completely analogously to the last paragraph of the proof of Lemma 42, choosing t quadratic in the size of ψ.
MinHornDeletion-Equivalent Cases
As in Proposition 38, the need to use conjunctive closure instead of 〈⋅〉 causes a case distinction in the proof of the following result, which is the dual variant of [6, Lemma 16]. Correspondingly, Lemma 46 then replaces [6, Lemma 17].
Lemma 45
If Γ is exactly dual Horn (iV ⊆〈Γ〉⊆iV_{2}), then one of the following relations is in 〈Γ〉_{∧}: [x → y], [x → y] ×{0}, [x → y] ×{1}, or [x → y] ×{01}.
Proof
The coclone 〈Γ〉 is equal to iV, iV_{0}, iV_{1}, or iV_{2}. In the case 〈Γ〉 = iV the relation R_{iV} belongs to 〈Γ〉_{∧} by Theorem 1; because of R_{iV}(y, y, y, x) = [x → y] we have [x → y] ∈〈R_{iV}〉_{∧}⊆〈Γ〉_{∧}. The case 〈Γ〉 = iV_{1} leads to [x → y] ×{1}∈〈Γ〉_{∧} in an analogous manner. The cases 〈Γ〉 = iV_{0} and 〈Γ〉 = iV_{2} lead to [x → y] ×{0}∈〈Γ〉_{∧} and [x → y] ×{01}∈〈Γ〉_{∧}, respectively, by observing that [S_{1}(y, y, x)] = [S_{0}(¬y,¬y,¬x,¬y)] = [(¬y ∧¬y) ≈ (¬y ∧¬x)] = [x → y].
Lemma 46
If Γ is exactly dual Horn (iV ⊆〈Γ〉⊆iV_{2}), then the problem NOSol(Γ) is MinHornDeletion-hard.
Proof
There are four cases to consider, namely 〈Γ〉∈{iV,iV_{0},iV_{1},iV_{2}}. For simplicity we only present the situation where 〈Γ〉 = iV_{1}; the case 〈Γ〉 = iV_{2} is very similar, and the other possibilities are even less complicated. At the end we give a few hints on how to adapt the proof to these cases.
The basic structure of the proof is as follows: we choose a suitable weak base of iV_{1} consisting of an irredundant relation R_{1}, and identify a relation H_{1} ∈〈{R_{1}}〉_{∧} which allows us to encode a sufficiently complicated variant of the MinOnes-problem into NOSol({H_{1}}). Thus by Theorem 1 and Lemma 45 we have H_{1} ∈〈{R_{1}}〉_{∧}⊆〈Γ〉_{∧} and [x → y] ×{1}∈〈Γ〉_{∧}, wherefore Proposition 15 implies NOSol(Γ^{′}) ≤_{AP} NOSol(Γ) where Γ^{′} = {H_{1},[x → y] ×{1}}. According to [22, Theorem 2.14(4)], MinHornDeletion is equivalent to MinOnes(Δ) for constraint languages Δ that are dual Horn, not 0-valid, and not implicative hitting set bounded+ with any finite bound, that is, if 〈Δ〉∈{iV_{1},iV_{2}}. The key point of the construction is to choose R_{1} and H_{1} in such a way that we can find a relation G_{1} satisfying iV_{1} ⊆〈{G_{1}}〉⊆iV_{2} and ((G_{1} ×{1}) ∪{0}) ×{1} = H_{1}. The latter property will allow us to prove an AP-reduction MinHornDeletion ≡_{AP} MinOnes({G_{1}}) ≤_{AP} NOSol(Γ^{′}), completing the chain.
We first check that R_{1} = V_{1} ∘〈χ_{4}〉 satisfies 〈{R_{1}}〉 = iV_{1}: namely, by construction, this relation is preserved by the disjunction and by the constant operation with value 1, i.e., 〈R_{1}〉⊆iV_{1}. This inclusion cannot be proper, since 0∉R_{1} (〈R_{1}〉⫅̸iI_{0}) and x ∨ (y ∧ z)∉R_{1} while x = (e_{1} ∘ β) ∨ (e_{4} ∘ β), y = (e_{1} ∘ β) ∨ (e_{2} ∘ β) and z = (e_{1} ∘ β) ∨ (e_{3} ∘ β) belong to V_{1} ∘〈χ_{4}〉 (cf. before Theorem 2 for the notation), i.e., the generating function (x, y, z)↦x ∨ (y ∧ z) of the clone S_{00} [13, Figure 2, p. 8] fails to be a polymorphism of R_{1}. For later we note that when β is chosen such that the coordinates of χ_{4} are ordered lexicographically (and we assume this from now on), then this failure can already be observed within the first seven coordinates of R_{1}. Now according to Theorem 2, the sedenary relation R_{1} = V_{1} ∘〈χ_{4}〉 is a weak base relation for iV_{1} without duplicate coordinates, and a moment's inspection shows that none of its coordinates is fictitious either. Therefore, R_{1} is an irredundant weak base relation for iV_{1}. We define H_{1} to be {(x_{0},…,x_{8})∣(x_{0},…,x_{7},x_{8},…,x_{8}) ∈ R_{1}}; then clearly H_{1} ∈〈{R_{1}}〉_{∧}. Now we put \(G_{1} := G_{1}^{\prime }\smallsetminus \{\boldsymbol {0}\}\) where \(G_{1}^{\prime } := \{(x_{0},\dotsc ,x_{6})\mid {(x_{0},\dotsc ,x_{8})\in H_{1}}\}\), and one quickly verifies that ((G_{1} ×{1}) ∪{0}) ×{1} = H_{1}. Since \(G_{1}^{\prime }\in \langle {H_{1}}\rangle \subseteq \langle {R_{1}}\rangle = \text {iV}_{1}\) and removing the bottom element 0 of a non-trivial join-semilattice with top element still yields a join-semilattice with top element, we have G_{1} ∈iV_{1}. With the analogous counterexample as for the relation R_{1} above, we can show that (x, y, z)↦x ∨ (y ∧ z) is not a polymorphism of G_{1} (because the non-membership is witnessed among the first seven coordinates).
Thus, 〈{G_{1}}〉 = iV_{1}; in particular G_{1}, and any relation conjunctively definable from it, is not 0-valid.
For the reduction let now φ(x) = G_{1}(x_{1}) ∧⋯ ∧ G_{1}(x_{k}) be an instance of MinOnes({G_{1}}). We construct a corresponding Γ^{′}-formula φ^{′} as follows.
where ℓ = |x| is the number of variables of φ, y and z are new global variables, and where we write \((u\xrightarrow {w = 1}v)\) to denote ([x → y] ×{1})(u, v, w). Let m_{0} be the assignment to the ℓ^{2} + 2 variables of φ^{′} given by m_{0}(z) = 1 and m_{0}(v) = 0 for every other variable v. It is clear that (φ^{′}, m_{0}) is an instance of NOSol(Γ^{′}), since m_{0} satisfies φ^{′}. The formula φ^{″′} only multiplies each variable x from φ ℓ times and forces x ≈ x^{(2)} ≈⋯ ≈ x^{(ℓ)}, which is just a technicality for establishing an AP-reduction. The main idea of this proof is the correspondence between the solutions of φ and φ^{″}.
For each solution s of φ(x) there exists a solution s^{′} of φ^{″}(x, y) with s^{′}(y) = 1 (and s^{′}(z) = 1). Every solution s^{′} of φ^{″} always has s^{′}(z) = 1 and either s^{′}(y) = 0 or s^{′}(y) = 1. Because every variable from x is part of one of the x_{i}, the assignment m_{0} restricted to (x, y, z) is the only solution s^{′} of φ^{″} satisfying s^{′}(y) = 0. If otherwise s^{′}(y) equals 1, then s^{′} restricted to the variables x satisfies φ(x), by the correspondence between the relations G_{1} and H_{1}.
For r ∈ [1,∞) let s^{′} be an r-approximate solution of the NOSol(Γ^{′})-instance (φ^{′}, m_{0}). Let \(s := s^{\prime }\!\upharpoonright _{\boldsymbol {x}}\) be the restriction of s^{′} to the variables of φ. Since s^{′}≠m_{0}, by what we showed before, s^{′}(y) = 1 and s is a solution of φ(x). We have \(\text {OPT}(\varphi ^{\prime }, m_{0}) \geq 2\) and OPT(φ) ≥ 1, since solutions of the NOSol(Γ^{′})-instance \((\varphi ^{\prime },m_{0})\) must be different from m_{0}, whereby y is forced to have value 1, and \([\varphi ]\in \langle {\{G_{1}\}}\rangle _{\wedge }\) is not 0-valid. Moreover, \(\text {hw}(s) = \text {hd}(\boldsymbol {0}, s)\), hd(s^{′}, m_{0}) = ℓ ⋅hw(s) + 1, \(\text {OPT}(\varphi ^{\prime },m_{0}) = \ell \cdot \text {OPT}(\varphi ) + 1\), and \(\text {hd}(s^{\prime },m_{0})\leq r\cdot \text {OPT}(\varphi ^{\prime },m_{0})\). From this and OPT(φ) ≥ 1 it follows that \(\text {hw}(s) = \frac {\text {hd}(s^{\prime },m_{0})-1}{\ell } \leq \frac {r\cdot \text {OPT}(\varphi ^{\prime },m_{0})-1}{\ell } = r\cdot \text {OPT}(\varphi ) + \frac {r-1}{\ell } \leq (1 + 2(r-1) + o(1))\cdot \text {OPT}(\varphi ).\)
Hence s is a (1 + α(r − 1) + o(1))-approximate solution of the instance φ of the problem MinOnes({G_{1}}) with α = 2.
In the case when 〈Γ〉 = iV_{2}, the proof goes through with minor changes: \(R_{2} = \mathrm {V}_{2}\circ \langle {\chi _{4}}\rangle = R_{1}\smallsetminus \{\boldsymbol {1}\}\), so we define H_{2} and G_{2} like H_{1} and G_{1} just using R_{2} and H_{2} in place of R_{1} and H_{1}. Then we have \(H_{2} = H_{1}\smallsetminus \{\boldsymbol {1}\}\), \(G_{2} = G_{1}\smallsetminus \{\boldsymbol {1}\}\) and 〈{G_{2}}〉 = iV_{2}. Moreover, for the reduction we shall need an additional global variable w for φ^{″′} (and φ^{′}) since the encoding of the implication from Lemma 45 requires it (and forces it to zero in every model).
For 〈Γ〉 = iV_{0} we can use R_{0} = V_{0} ∘〈χ_{4}〉 = R_{2} ∪{0}; then, letting H_{0} = {(x_{0},…,x_{7}) ∣ (x_{0},…,x_{7},x_{7},…,x_{7}) ∈ R_{0}}∈〈{R_{0}}〉_{∧}, we have H_{0} = (G_{2} ×{1}) ∪{0}. On a side note, we observe that H_{0} = V_{0} ∘〈χ_{3}〉, which we can use alternatively without detouring via R_{0}. Given the relationship between G_{2} and H_{0}, we do not need the global variable z in the definition of φ^{″}, but we need to have it in the definition of φ^{″′}, where the relation given by Lemma 45 necessitates atoms of the form \((u\xrightarrow {z = 0}v)\) forcing z to zero in every model.
The case where 〈Γ〉 = iV is similar to the previous: we can use the irredundant weak base relation H = V ∘〈χ_{3}〉 = H_{0} ∪{1} = (G_{1} ×{1}) ∪{0}. Except for y in the definition of φ^{″} no additional global variables are needed in the definition of φ^{′}, because [u → v] atoms are directly available for φ^{″′}.
Corollary 47
If Γ is exactly Horn (iE ⊆〈Γ〉⊆iE_{2}) or exactly dual Horn (iV ⊆〈Γ〉⊆iV_{2}), then NOSol(Γ) is MinHornDeletion-complete under AP-Turing-reductions.
Proof
Hardness follows from Lemma 46 and duality. Moreover, NOSol(Γ) can be AP-Turing-reduced to NSol(Γ ∪{[x],[¬x]}) as follows: Given a Γ-formula φ and a model m, we construct for every variable x of φ a formula \(\varphi _{x}= \varphi \land (x\approx \overline {m}(x))\). Then for every x with [φ_{x}]≠∅ we run an oracle algorithm for NSol(Γ ∪{[x],[¬x]}) on (φ_{x},m) and output a result of these oracle calls that is closest to m.
We claim that this algorithm indeed provides an AP-Turing reduction. To see this, observe first that the instance (φ, m) has feasible solutions if and only if this holds for (φ_{x},m) and at least one variable x. Moreover, we have \(\text {OPT}(\varphi ,m) = \min _{x,[\varphi _{x}]\neq \emptyset }(\text {OPT}(\varphi _{x}, m))\). Let A(φ, m) be the answer of the algorithm on (φ, m) and let B(φ_{x},m) be the answers to the oracle calls. Consider a variable x^{∗} such that \(\text {OPT}(\varphi ,m) = \min _{x,[\varphi _{x}]\neq \emptyset }(\text {OPT}(\varphi _{x},m)) = \text {OPT}(\varphi _{x^{*}},m)\), and assume that \(B(\varphi _{x^{*}}, m)\) is an r-approximate solution of \((\varphi _{x^{*}},m)\). Then we get \(\text {hd}(A(\varphi ,m),m) \leq \text {hd}(B(\varphi _{x^{*}},m),m) \leq r\cdot \text {OPT}(\varphi _{x^{*}},m) = r\cdot \text {OPT}(\varphi ,m).\)
Thus the algorithm is indeed an AP-Turing-reduction from NOSol(Γ) to NSol(Γ ∪{[x],[¬x]}). Note that for Γ ⊆iV_{2} the problem NSol(Γ ∪{[x],[¬x]}) reduces to MinHornDeletion according to Propositions 29 and 27. Duality completes the proof.
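The oracle algorithm from this proof can be sketched with the formula machinery abstracted into callables; `force_flip`, `feasible`, and `nsol_oracle` are hypothetical stand-ins for constructing φ_x, testing [φ_x] ≠ ∅, and the NSol oracle, and the toy instantiation below represents a formula simply by its explicit set of models:

```python
def hd(s, m):
    """Hamming distance between assignments given as equal-length tuples."""
    return sum(1 for a, b in zip(s, m) if a != b)

def nosol_via_nsol(phi, m, num_vars, force_flip, feasible, nsol_oracle):
    """For each variable x, force x to differ from m and query the NSol
    oracle; return the closest answer, or None if no phi_x is feasible."""
    best = None
    for x in range(num_vars):
        phi_x = force_flip(phi, x, 1 - m[x])  # phi with x fixed to ¬m(x)
        if not feasible(phi_x):
            continue
        s = nsol_oracle(phi_x, m)
        if best is None or hd(s, m) < hd(best, m):
            best = s
    return best

# Toy instantiation: a "formula" is its explicit set of models.
phi = {(0, 0), (1, 0), (1, 1)}
m = (0, 0)
force_flip = lambda models, x, b: {s for s in models if s[x] == b}
feasible = bool                  # nonempty set of models
nsol_oracle = lambda models, m: min(models, key=lambda s: hd(s, m))
print(nosol_via_nsol(phi, m, 2, force_flip, feasible, nsol_oracle))  # -> (1, 0)
```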
Finding the Minimal Distance Between Solutions
In this section we study the optimization problem MinSolutionDistance. We first consider the polynomialtime cases and then the cases of higher complexity.
PolynomialTime Cases
We show that for bijunctive constraints the problem MinSolutionDistance can be solved in polynomial time. After stating the result we present an algorithm and analyze its complexity and correctness.
Proposition 48
If Γ is a bijunctive constraint language (Γ ⊆iD_{2}), then the problem MSD(Γ) is in PO.
By Proposition 32, an algorithm for bijunctive constraint languages Γ can be restricted to at most binary clauses. Alternatively, one can use the plain base
of iD_{2} exhibited in [12] to see that every relation in Γ can be written as a conjunction of disjunctions of two not necessarily distinct literals. We treat these disjunctions as one- or two-element sets of literals when extending the algorithm of Aspvall, Plass, and Tarjan [2] to compute the minimum distance between distinct models of a bijunctive constraint formula.
Complexity
The size of \(\mathcal {L}\) is linear in the number of variables; the reflexive closure can be computed in time linear in \(|{\mathcal {L}}|\), the transitive closure in time cubic in \(|{\mathcal {L}}|\), see [32]. The equivalence relation ∼ is the intersection of ≤ restricted to \(\mathcal {L}^{\prime }\) and its inverse (quadratic in \(|{\mathcal {L}^{\prime }}|\)); from it we can obtain the partition \(\mathcal {L}^{\prime }/{\sim }\) in time linear in \(|{\mathcal {L}^{\prime }}|\leq |{\mathcal {L}}|\), including the cardinalities of the equivalence classes and the selection of a minimum one. Similarly, the remaining sets from the proof (\(\mathcal {V}_{0}\), \(\mathcal {V}_{1}\), their intersection and union, and thus also \(\mathcal {L}^{\prime }\)) can be computed in polynomial time.
Correctness
The pairs in R arise from interpreting the atomic constraints in φ as implications. By transitivity of implication, the inequality u ≤ v for literals u, v means that every model m of φ satisfies the implication u → v or, equivalently, m(u) ≤ m(v). In particular, x ≤¬x implies m(x) = 0 and ¬x ≤ x implies m(x) = 1. Therefore \(\mathcal {V}_{0}\) can be seen to be the set of variables that have to be false in every model of φ, and \(\mathcal {V}_{1}\) the set of variables true in every model.
If \(\mathcal {V}_{0} \cap \mathcal {V}_{1} \neq \emptyset \) holds then the formula φ is inconsistent and has no solution. If \(\mathcal {V}_{0} \cup \mathcal {V}_{1} = \mathcal {V}\) holds, then every variable has a unique fixed value, hence φ has only one solution. Otherwise the formula is consistent and not all variables are fixed, hence there are at least two models.
To determine the minimal number of variables whose values must differ between two distinct models of φ, it suffices to consider the literals without fixed value, \(\mathcal {L}^{\prime }\). If we have u ≤ v and v ≤ u, the literals are equivalent, u ∼ v, and must have the same value in every model. This means that any two distinct models have to differ on all literals of at least one equivalence class in \(\mathcal {L}^{\prime }/{\sim }\). Therefore, the return value of the algorithm is a lower bound for the minimal distance.
To prove that the return value can indeed be attained, we exhibit two models m_{0}≠m_{1} of φ having the least cardinality of any equivalence class in \(\mathcal {L}^{\prime }/{\sim }\) as their Hamming distance. Let \(L \in \mathcal {L}^{\prime }/{\sim }\) be a class of minimum cardinality. Define m_{0}(u) := 0 and m_{1}(u) := 1 for all literals u ∈ L. We extend this by setting m_{0}(w) := m_{1}(w) := 0 for all \(w\in \mathcal {L}\) such that w ≤ u for some u ∈ L, and by m_{0}(w) := m_{1}(w) := 1 for all \(w\in \mathcal {L}\) such that u ≤ w for some u ∈ L. For variables \(v\in \mathcal {V}\) satisfying v ≤¬v or ¬v ≤ v we have \(v\in \mathcal {V}_{0}\cup \mathcal {V}_{1}\), and thus \(v\notin \mathcal {L}^{\prime }\); in other words, for \([v]_{\sim } \in \mathcal {L}^{\prime }/{\sim }\) the classes [v]_{∼} and [¬v]_{∼} are incomparable. Thus, so far, we have not defined m_{0} and m_{1} on a variable \(v\in \mathcal {V}\) and on its negation ¬v at the same time. Of course, fixing a value for a negative literal ¬v implicitly means that we bind the assignment for \(v\in \mathcal {V}\) to the opposite value.
It remains to fix the value of literals in \(\mathcal {L}^{\prime }\) that are neither related to the literals in L nor have fixed values in all models. Suppose \((\bar u, v) \in R\) is a constraint such that the value of at least one literal has not yet been defined. There are three cases: either both literals have not yet received a value, or \(\bar u\) is undefined and v has been assigned the value 1 (either as a fixed value in all models, or because it is greater than a literal in L, or because it is less than a complement of a literal in L), or v is undefined and \(\bar u\) has been assigned the value 0 (either as a fixed value in all models, or because it is smaller than a literal in L, or because it is greater than a complement of a literal in L). All three cases can be handled by defining both models, m_{0} and m_{1}, identically on the remaining variables: starting with a minimal literal u on which m_{0} and m_{1} are not yet defined, we assign \(m_{0}(u) := m_{1}(\overline {u}) := 0\) and \(m_{1}(u):= m_{0}(\overline {u}):= 1\).
This way none of the constraints is violated, and m_{0} and m_{1} are distinct only on variables corresponding to literals in L. Iterate this procedure until all variables (and their complements) have been assigned values. If \(\mathcal {L}^{\prime \prime }\subseteq \mathcal {L}^{\prime }\) denotes the literals remaining after propagating the values of m_{0} and m_{1} on L, then the presented method can be implemented by partitioning \(\mathcal {L}^{\prime \prime }\) into two classes L_{0} and L_{1} such that \(L_{0}\cap \{u,\overline {u}\}\) is a singleton for every \(u\in \mathcal {L}^{\prime \prime }\) and each weakly connected component of the quasiordered set \((\mathcal {L}^{\prime \prime },\leq )\) is either a subset of L_{0} or L_{1}. Then set m_{0} and m_{1} to k on the literals belonging to L_{k} for k ∈{0,1}.
By construction, m_{0} differs from m_{1} only in the variables corresponding to the literals in L, so their Hamming distance is |L|, as desired. Moreover, both assignments respect the order constraints in \((\mathcal {L},\leq )\). As these faithfully reflect all original atomic constraints, m_{0} and m_{1} are indeed models of φ.
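The whole procedure for the bijunctive case can be condensed into the following sketch; the encoding (clauses as 1- or 2-tuples of literal indices, 2v for x_v and 2v + 1 for ¬x_v) is ours, and a plain Floyd–Warshall transitive closure is used in place of the asymptotically better method of [32]. It returns the minimum class size, or None when φ has fewer than two models:

```python
def msd_bijunctive(n, clauses):
    """n variables; clauses are 1- or 2-tuples of literal indices, where
    literal 2*v stands for x_v and 2*v+1 for ¬x_v.  Returns the minimum
    Hamming distance between two distinct models, or None if the formula
    has fewer than two models."""
    N = 2 * n
    neg = lambda u: u ^ 1
    # reflexive implication relation on literals
    reach = [[i == j for j in range(N)] for i in range(N)]
    for c in clauses:
        u, v = (c[0], c[0]) if len(c) == 1 else c
        reach[neg(u)][v] = True  # a clause (u or v) yields ¬u -> v
        reach[neg(v)][u] = True  # ... and ¬v -> u
    for k in range(N):  # transitive closure (Floyd-Warshall)
        for i in range(N):
            if reach[i][k]:
                for j in range(N):
                    if reach[k][j]:
                        reach[i][j] = True
    V0 = {v for v in range(n) if reach[2 * v][2 * v + 1]}  # x <= ¬x: x = 0
    V1 = {v for v in range(n) if reach[2 * v + 1][2 * v]}  # ¬x <= x: x = 1
    if V0 & V1:
        return None  # inconsistent: no model at all
    free = [v for v in range(n) if v not in V0 and v not in V1]
    if not free:
        return None  # all variables fixed: unique model
    free_lits = [lit for v in free for lit in (2 * v, 2 * v + 1)]
    # equivalence classes of free literals under mutual implication
    seen, best = set(), None
    for u in free_lits:
        if u in seen:
            continue
        cls = {w for w in free_lits if reach[u][w] and reach[w][u]}
        seen |= cls
        best = len(cls) if best is None else min(best, len(cls))
    return best
```

For instance, the clauses (x_0 ∨ ¬x_1) and (¬x_0 ∨ x_1) make x_0 and x_1 equivalent while leaving a third variable x_2 free, so the minimum distance is 1 (flip x_2 alone).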
Proposition 49
If Γ is a Horn (Γ ⊆iE_{2}) or a dual Horn (Γ ⊆iV_{2}) constraint language, then MSD(Γ) is in PO.
We only discuss the Horn case (Γ ⊆iE_{2}), dual Horn (Γ ⊆iV_{2}) being symmetric. The following algorithm improves the description given in [6] by correctly treating two marginal cases where the output is evident.
Complexity
The runtime of the algorithm is polynomial in the number of clauses in φ: Unit resolution/subsumption can be applied at most once for each variable, and hyperresolution has to be applied at most once for each variable x and each clause of the form ¬y_{1} ∨⋯ ∨¬y_{k} ∨ z or ¬y_{1} ∨⋯ ∨¬y_{k}.
Correctness
Adding resolvents and removing subsumed clauses maintains logical equivalence, therefore \(\mathcal {D}\cup \mathcal {U}\) is logically equivalent to φ, i.e., both clause sets have the same models. We note that the sets of variables of \(\mathcal {U}\) and of \(\mathcal {D}\) are disjoint. The unit clauses in \(\mathcal {U}\) are always (uniquely) satisfiable, thus \(\mathcal {D}\) and φ are equisatisfiable. Therefore, if \(\mathcal {D}\) contains the empty clause, φ is also unsatisfiable; otherwise \(\mathcal {D}\) is satisfiable, e.g., by assigning 0 to every \(x\in \mathcal {V}\). In this case, if \(\mathcal {U}\) contains a literal for every variable of φ, the unit clauses in \(\mathcal {U}\) define a unique model of φ.
Otherwise φ has at least two models m_{1}≠m_{2}. In the simplest case some variable x in φ has been left unconstrained by \(\mathcal {D}\) and \(\mathcal {U}\); in this case we can pick any model of \(\mathcal {D}\) and \(\mathcal {U}\) and extend it to two different models of φ with Hamming distance 1 by setting m_{1}(x) = 0 and m_{2}(x) = 1 and setting m_{1}(y) = m_{2}(y) = 0 for any other variable y outside \(\mathcal {D}\) and \(\mathcal {U}\). For the remaining situations it is sufficient to consider the models of \(\mathcal {D}\) only, as each model m of \(\mathcal {D}\) uniquely extends to a model of φ by defining m(x) = 1 for \((x)\in \mathcal {U}\) and m(x) = 0 for \((\neg x)\in \mathcal {U}\); hence the minimal Hamming distances of the models of φ and \(\mathcal {D}\) will be the same.
We are thus looking for models m_{1},m_{2} of \(\mathcal {D}\) such that the size of the difference set Δ(m_{1},m_{2}) = {x∣m_{1}(x)≠m_{2}(x)} is minimal. In fact, since the models of Horn formulas are closed under minimum, we may assume m_{1} < m_{2}, i.e., we have m_{1}(x) = 0 and m_{2}(x) = 1 for all variables x ∈Δ(m_{1},m_{2}). Indeed, given two models m_{2} and \(m_{2}^{\prime }\) of \(\mathcal {D}\) where neither \(m_{2}\leq m_{2}^{\prime }\) nor \(m_{2}^{\prime }\leq m_{2}\), \(m_{1}= m_{2} \land m_{2}^{\prime }\) is also a model, and it is distinct from m_{2}. Since \(\text {hd}(m_{1},m_{2}) \leq \text {hd}(m_{2},m_{2}^{\prime })\), the minimal Hamming distance will occur between models m_{1} and m_{2} satisfying m_{1} < m_{2}.
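The closure of Horn models under componentwise minimum used above can be illustrated on a toy encoding (ours): a Horn clause as a pair of its negative-variable list and its optional positive head.

```python
# Toy encoding (ours): a Horn clause is a pair (negs, head) standing for
# ¬y_1 ∨ … ∨ ¬y_k ∨ z with negs = [y_1, …, y_k] and head = z, or for a
# purely negative clause when head is None.

def satisfies(model, horn_clauses):
    for negs, head in horn_clauses:
        body_true = all(model[v] for v in negs)
        head_true = head is not None and model[head]
        if body_true and not head_true:
            return False
    return True

def meet(m1, m2):
    """Componentwise minimum (logical AND) of two assignments."""
    return tuple(a & b for a, b in zip(m1, m2))

clauses = [([0], 1), ([1, 2], None)]  # ¬x0 ∨ x1  and  ¬x1 ∨ ¬x2
m1, m2 = (1, 1, 0), (0, 0, 1)
assert satisfies(m1, clauses) and satisfies(m2, clauses)
print(satisfies(meet(m1, m2), clauses))  # -> True: (0, 0, 0) is again a model
```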
Note the following facts regarding the equivalence relation ∼ and dependent variables.

If x ∼ y then the two variables must have the same value in every model of \(\mathcal {D}\) in order to satisfy the implications ¬x ∨ y and ¬y ∨ x. This means that for all models m of \(\mathcal {D}\) and all \(X\in \mathcal {V}/\sim \), we have either m(x) = 0 for all x ∈ X or m(x) = 1 for all x ∈ X.

The dependence of variables is acyclic: If, for some l ≥ 2, for every 1 ≤ i < l we have that z_{i} depends on variables including one, say y_{i}, which is equivalent to z_{i+ 1}, and z_{l} = z_{1}, then there is a cycle of binary implications between the variables and thus z_{i} ∼ y_{i} ∼ z_{j} for all i, j, contradicting the definition of dependence.

If a variable z depending on y_{1}, …, y_{k} belongs to a difference set Δ(m_{1},m_{2}), then at least one of the y_{i}s also has to belong to Δ(m_{1},m_{2}): m_{2}(z) = 1 implies m_{2}(y_{j}) = 1 for all j = 1,…,k (because of the clauses ¬z ∨ y_{i}), and m_{1}(z) = 0 implies m_{1}(y_{i}) = 0 for at least one i (because of the clause ¬y_{1} ∨⋯ ∨¬y_{k} ∨ z). Therefore Δ(m_{1},m_{2}) is the union of at least two sets in \(\mathcal {V}/\sim \), namely the equivalence class of z and the one of y_{i}.

If some z_{1} ∈Δ(m_{1},m_{2}) is equivalent to a variable \(z_{1}^{\prime }\) that depends on some other variables, then we have a variable z_{2} among them, which also belongs to Δ(m_{1},m_{2}). If the equivalence class of z_{2} still contains a variable \(z_{2}^{\prime }\) depending on other variables, we can iterate this procedure. In this way we obtain a sequence \(z_{1}\sim z_{1}^{\prime }, z_{2}\sim z_{2}^{\prime }, z_{3}\sim z_{3}^{\prime }, \dotsc \) where \(z_{i}^{\prime }\) depends on variables including z_{i+ 1}, which is equivalent to \(z_{i + 1}^{\prime }\). Because there are only finitely many variables and because of acyclicity, after a linear number of steps we must reach a variable z_{n} ∈Δ(m_{1},m_{2}) such that its equivalence class (being a subset of the difference set) does not contain any dependent variables.
Hence the difference between any two models cannot be smaller than the cardinality of the smallest set in \(\mathcal {V}/\sim \) without dependent variables. It remains to show that we can indeed find two such models.
Let X be a set in \(\mathcal {V}/\sim \) which has minimal cardinality among the sets without dependent variables, and let m_{0},m_{1} be interpretations defined as follows: (1) m_{0}(y) = 0 and m_{1}(y) = 1 if y ∈ X; (2) m_{0}(y) = 1 and m_{1}(y) = 1 if y∉X and \((\neg x\lor y)\in \mathcal {D}\) for some x ∈ X; (3) m_{0}(y) = 0 and m_{1}(y) = 0 otherwise. We have to show that m_{0} and m_{1} satisfy all clauses in \(\mathcal {D}\). Let m be any of these models. \(\mathcal {D}\) contains two types of clauses.

Type 1:
Horn clauses with a positive literal ¬y_{1} ∨⋯ ∨¬y_{k} ∨ z. If m(y_{i}) = 0 for any i, we are done. So suppose m(y_{i}) = 1 for all i = 1,…,k; we have to show m(z) = 1. The condition m(y_{i}) = 1 means that either y_{i} ∈ X (for m = m_{1}) or that there is a clause \((\neg x_{i} \lor y_{i})\in \mathcal {D}\) for some x_{i} ∈ X. We distinguish the two cases z ∈ X and z∉X.
Let z ∈ X. If z ∼ y_{i} for some i, we are done, for we have m(z) = m(y_{i}) = 1. So suppose z≁y_{i} for all i. As the elements in X, in particular z and the x_{i}, are equivalent and the binary clauses are closed under resolution, \(\mathcal {D}\) contains the clause ¬z ∨ y_{i} for all i. But this would mean that z is a variable depending on the y_{i}, contradicting the assumption z ∈ X.
Let z∉X, and let x ∈ X. As the elements in X are equivalent and the binary clauses are closed under resolution, \(\mathcal {D}\) contains ¬x ∨ y_{i} for all i. Closure under hyperresolution with the clause ¬y_{1} ∨⋯ ∨¬y_{k} ∨ z means that \(\mathcal {D}\) also contains ¬x ∨ z, whence m(z) = 1.

Type 2:
Horn clauses with only negative literals ¬y_{1} ∨⋯ ∨¬y_{k}. If m(y_{i}) = 0 for any i, we are done. It remains to show that the assumption m(y_{i}) = 1 for all i = 1,…,k leads to a contradiction. The condition m(y_{i}) = 1 means that either y_{i} ∈ X (for m = m_{1}) or that there is a clause \((\neg x_{i}\lor y_{i})\in \mathcal {D}\) for some x_{i} ∈ X. Let x be some particular element of X. Since the elements in X are equivalent and the binary clauses are closed under resolution, \(\mathcal {D}\) contains the clause ¬x ∨ y_{i} for all i. But then a hyperresolution step with the clause ¬y_{1} ∨⋯ ∨¬y_{k} would yield the unit clause ¬x, which by construction does not occur in \(\mathcal {D}\). Therefore at least one y_{i} is neither in X nor part of a clause ¬x ∨ y_{i} with x ∈ X, i.e., m(y_{i}) = 0.
Hard Cases
Two Solution Satisfiability
In this section we study the feasibility problem of MSD(Γ) which is, given a Γ-formula φ, to decide if φ has two distinct solutions.
Problem: TwoSolutionSAT(Γ)
Input: A conjunctive formula φ over the relations from the constraint language Γ.
Question: Are there two satisfying assignments m≠m^{′} of φ?
A priori it is not clear that the tractability of TwoSolutionSAT is fully characterized by co-clones. The problem is that the implementation of relations of some language Γ by another language Γ^{′} might not be parsimonious; that is, one solution to a constraint might be blown up into several solutions in the implementation. Fortunately, we can still determine the tractability frontier for TwoSolutionSAT by combining the corresponding results for SAT and AnotherSAT.
Lemma 50
Let Γ be a constraint language for which SAT(Γ) is NP-hard. Then the problem TwoSolutionSAT(Γ) is NP-hard.
Proof
Since SAT(Γ) is NP-hard, there must be a relation R in Γ having more than one tuple, because every relation containing only one tuple is at the same time Horn, dual Horn, bijunctive, and affine. Given an instance φ of SAT(Γ), construct φ^{′} as φ ∧ R(y_{1},…,y_{ℓ}), where ℓ is the arity of R and y_{1},…,y_{ℓ} are new variables not appearing in φ. Since R(y_{1},…,y_{ℓ}) has at least two satisfying assignments on the fresh variables, φ has a solution if and only if φ^{′} has at least two solutions. Hence we have proved SAT(Γ) ≤_{m} TwoSolutionSAT(Γ).
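The reduction can be sketched as follows; brute-force model enumeration stands in for the (NP-hard) satisfiability tests, and the binary relation R below is a hypothetical member of Γ playing the role of the relation with more than one tuple.

```python
from itertools import product

# Sketch of the Lemma 50 reduction.  A formula is a list of
# (relation, variable-tuple) constraints; R is any relation of the
# (hypothetical) language with more than one tuple -- here binary OR.
R = {(0, 1), (1, 0), (1, 1)}

def models(formula, variables):
    """Brute-force enumeration, standing in for a SAT oracle."""
    for bits in product((0, 1), repeat=len(variables)):
        m = dict(zip(variables, bits))
        if all(tuple(m[v] for v in vs) in rel for rel, vs in formula):
            yield m

phi = [(R, ("u", "v"))]                  # a sample formula over R
phi2 = phi + [(R, ("y1", "y2"))]         # phi' = phi and R(y1, y2), fresh y1, y2

n_phi = sum(1 for _ in models(phi, ("u", "v")))
n_phi2 = sum(1 for _ in models(phi2, ("u", "v", "y1", "y2")))
# phi has a solution iff phi' has at least two: each model of phi combines
# with each of the >= 2 tuples of R on the fresh variables.
print(n_phi, n_phi2)  # 3 9
```

Here every model of φ pairs with the three tuples of R on the fresh variables, so φ^{′} has two solutions exactly when φ has one.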
Lemma 51
Let Γ be a constraint language for which AnotherSAT(Γ) is NP-hard. Then the problem TwoSolutionSAT(Γ) is NP-hard.
Proof
Let a Γ-formula φ and a satisfying assignment m be an instance of the problem AnotherSAT(Γ). Then φ has a solution other than m if and only if it has two distinct solutions. Hence (φ, m) ↦ φ is a many-one reduction from AnotherSAT(Γ) to TwoSolutionSAT(Γ).
Lemma 52
Let Γ be a constraint language for which both problems SAT(Γ) and AnotherSAT(Γ) are in P. Then TwoSolutionSAT(Γ) is also in P.
Proof
Let φ be an instance of TwoSolutionSAT(Γ). All polynomial-time decidable cases of SAT(Γ) are constructive, i.e., whenever that problem is polynomial-time decidable, there exists a polynomial-time algorithm computing a satisfying assignment, provided one exists. If φ is not satisfiable, we reject the instance. Otherwise, we can compute in polynomial time a satisfying assignment m of φ. Now use the algorithm for AnotherSAT(Γ) on the instance (φ, m) to decide if there is a second solution to φ.
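The decision procedure of this proof can be sketched as follows. The two brute-force subroutines are stand-ins for the polynomial-time algorithms assumed for SAT(Γ) and AnotherSAT(Γ), and the relation IMPL is merely illustrative.

```python
from itertools import product

# Sketch of the Lemma 52 procedure.  find_model and another_model use brute
# force here as stand-ins for the assumed polynomial-time algorithms for
# SAT and AnotherSAT; formulas are lists of (relation, variables).
def find_model(formula, variables):
    for bits in product((0, 1), repeat=len(variables)):
        m = dict(zip(variables, bits))
        if all(tuple(m[v] for v in vs) in rel for rel, vs in formula):
            return m
    return None

def another_model(formula, variables, m):
    for bits in product((0, 1), repeat=len(variables)):
        m2 = dict(zip(variables, bits))
        if m2 != m and all(tuple(m2[v] for v in vs) in rel for rel, vs in formula):
            return m2
    return None

def two_solution_sat(formula, variables):
    m = find_model(formula, variables)                 # SAT step
    if m is None:
        return False                                   # unsatisfiable: reject
    return another_model(formula, variables, m) is not None  # AnotherSAT step

IMPL = {(0, 0), (0, 1), (1, 1)}                        # hypothetical relation
print(two_solution_sat([(IMPL, ("x", "y"))], ("x", "y")))      # True
print(two_solution_sat([({(1, 1)}, ("x", "y"))], ("x", "y")))  # False
```

The second call illustrates a satisfiable formula with a unique model, which the procedure correctly rejects.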
Corollary 53
For any constraint language Γ, the problem TwoSolutionSAT(Γ) is in P if both SAT(Γ) and AnotherSAT(Γ) are in P. Otherwise, TwoSolutionSAT(Γ) is NP-hard.
Proposition 54
Let Γ be a constraint language for which TwoSolutionSAT(Γ) is in P. Then there is a polynomial-time n-approximation algorithm for MSD(Γ), where n is the number of variables of the Γ-formula on input.
Proof
Since TwoSolutionSAT(Γ) is in P, both SAT(Γ) and AnotherSAT(Γ) must be in P by Corollary 53. Since SAT(Γ) is in P, we can compute a model m of the input φ in polynomial time if it exists. Now we check the AnotherSAT(Γ)-instance (φ, m). If it has a solution m^{′}≠m, it is also computable in polynomial time, and we return (m, m^{′}). If we fail somewhere in this process, then the MSD(Γ)-instance φ does not have feasible solutions; otherwise, hd(m, m^{′}) ≤ n ≤ n ⋅OPT(φ).
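On a toy instance one can watch the guarantee hd(m, m^{′}) ≤ n ⋅ OPT(φ) in action. In the sketch below, brute-force enumeration again replaces the polynomial subroutines, and the relation NAND is our own illustrative choice: the algorithm simply returns the first feasible pair it finds, which is within factor n of the optimum because OPT ≥ 1.

```python
from itertools import product

# Sketch of the Proposition 54 guarantee: any feasible pair of distinct
# models is an n-approximation, since hd <= n and OPT >= 1.
def all_models(formula, variables):
    out = []
    for bits in product((0, 1), repeat=len(variables)):
        m = dict(zip(variables, bits))
        if all(tuple(m[v] for v in vs) in rel for rel, vs in formula):
            out.append(m)
    return out

def hd(m1, m2):
    """Hamming distance between two assignments on the same variables."""
    return sum(m1[v] != m2[v] for v in m1)

NAND = {(0, 0), (0, 1), (1, 0)}                 # hypothetical binary relation
variables = ("x", "y", "z")
formula = [(NAND, ("x", "y")), (NAND, ("y", "z"))]

ms = all_models(formula, variables)
m, m2 = ms[0], ms[1]                            # the pair the algorithm returns
opt = min(hd(a, b) for a in ms for b in ms if a != b)
print(hd(m, m2), opt, hd(m, m2) <= len(variables) * opt)  # 1 1 True
```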
MinDistance-Equivalent Cases
In this section we show that, as for the NearestOtherSolution problem, the affine cases of MSD are MinDistance-complete.
Proposition 55
MSD(Γ) is MinDistance-complete if the constraint language Γ satisfies the inclusions iL ⊆〈Γ〉⊆iL_{2}.
Proof
We prove MSD(Γ) ≡_{AP} NearestOtherSolution(Γ), which is MinDistance-complete for each constraint language Γ satisfying the inclusions iL ⊆〈Γ〉⊆iL_{2}, according to Proposition 44. As the inclusion 〈Γ〉⊆iL_{2} = 〈{even^{4},[x],[¬x]}〉 holds, any Γ-formula ψ is expressible as ∃y(A_{1}x + A_{2}y ≈ c). The projection of an affine solution space is again an affine space, so it can be understood as the solution set of a system Ax = b. If (ψ, m_{0}) is an instance of NOSol(Γ), then ψ is an MSD(Γ)-instance, and a feasible solution of the latter, i.e., a pair of distinct models m_{1}≠m_{2} of ψ, yields a feasible solution m_{3} := m_{0} + (m_{2} − m_{1}) for (ψ, m_{0}) with hd(m_{0},m_{3}) = hd(m_{1},m_{2}). Conversely, a solution m_{3}≠m_{0} of (ψ, m_{0}) is a feasible answer to the MSD-instance ψ. Thus OPT(ψ) = OPT(ψ, m_{0}), and so NOSol(Γ) ≤_{AP} MSD(Γ). The other way round, if ψ is an MSD-instance, then attempt to solve the system Ax = b defined by it; if there is no solution or a unique one, the instance does not have feasible solutions. Otherwise we have at least two distinct models of ψ; let m_{0} be one of them. As above we conclude OPT(ψ) = OPT(ψ, m_{0}), and therefore MSD(Γ) ≤_{AP} NOSol(Γ).
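The translation between the two problems rests on the fact that the solution set of Ax = b over GF(2) is a coset of the kernel of A, so Hamming distances between solutions are invariant under the shift m_{3} = m_{0} + (m_{2} − m_{1}). A small brute-force sketch (with a hypothetical system A, b of our own choosing) illustrates that the two optima coincide.

```python
from itertools import product

# Illustrative check of the affine argument: for solutions of Ax = b over
# GF(2), the minimum distance between any two solutions (MSD) equals the
# minimum distance from any fixed solution m0 to another solution (NOSol).
A = [[1, 1, 0, 0], [0, 0, 1, 1]]   # hypothetical system over GF(2)
b = [0, 1]

def solves(x):
    return all(sum(a * v for a, v in zip(row, x)) % 2 == bi
               for row, bi in zip(A, b))

sols = [x for x in product((0, 1), repeat=4) if solves(x)]

msd = min(sum(u != v for u, v in zip(x, y))
          for x in sols for y in sols if x != y)
m0 = sols[0]
nosol = min(sum(u != v for u, v in zip(m0, y)) for y in sols if y != m0)
print(msd, nosol)  # 2 2 -- the two optima coincide
```

Because the solutions form a coset of ker A, shifting by m_{0} maps pairs of distinct solutions bijectively to solutions distinct from m_{0}, preserving distances; this is exactly the equality OPT(ψ) = OPT(ψ, m_{0}) used in the proof.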
Tightness Results
We prove that Proposition 54 is essentially tight for some constraint languages. This result builds heavily on the previous results from Section 6.2.1.
Proposition 56
For a constraint language Γ satisfying the inclusions iN ⊆〈Γ〉⊆iI and any ε > 0 there is no polynomial-time n^{1−ε}-approximation algorithm for MSD(Γ), unless P = NP.
Proof
We show that any polynomial-time n^{1−ε}-approximation algorithm for MSD(Γ) would allow us to decide in polynomial time the problem AnotherSAT_{nc}(Γ), which is NP-complete by Proposition 38.
The algorithm works as follows. Given an instance (φ, m) for AnotherSAT_{nc}(Γ), the algorithm accepts if m is not a constant assignment. Since Γ is 0-valid (and 1-valid), the constant assignments are models of φ, so this output is correct. If φ has only one variable, reject, because then φ has only the two constant models; otherwise, proceed as follows.
For each variable x of φ we construct a new formula \(\varphi ^{\prime }_{x}\) as follows. Let k be the smallest integer greater than 1/ε. Introduce n^{k} − n new variables x^{i} for i = 1,…,n^{k} − n. For every i ∈{1,…,n^{k} − n} and every constraint R(y_{1},…,y_{ℓ}) in φ with x ∈{y_{1},…,y_{ℓ}}, construct a new constraint \(R({z_{1}^{i}}, \ldots , z_{\ell }^{i})\) by setting \({z_{j}^{i}} = x^{i}\) if y_{j} = x and \({z_{j}^{i}} = y_{j}\) otherwise; add all the newly constructed constraints to φ in order to get \(\varphi ^{\prime }_{x}\). Note that we can extend models s of φ to models s^{′} of \(\varphi ^{\prime }_{x}\) by setting s^{′}(x^{i}) = s(x). In particular, this can be done for m, yielding \(m^{\prime }\in [\varphi ^{\prime }_{x}]\). As 〈Γ〉⊆iI = iI_{0} ∩iI_{1}, the MSD(Γ)-instance \(\varphi ^{\prime }_{x}\) has feasible solutions; thus run the n^{1−ε}-approximation algorithm for MSD(Γ) on \(\varphi ^{\prime }_{x}\). If for every x the answer is a pair (m_{1},m_{2}) with \(m_{2} = \overline {m_{1}}\), then reject; otherwise accept.
This procedure is a correct polynomial-time algorithm for AnotherSAT_{nc}(Γ). Polynomial runtime is clear; it remains to show correctness. If φ has only constant models, then the same is true for every \(\varphi ^{\prime }_{x}\), since φ contains a variable distinct from x. Thus each approximation must result in a pair of complementary constant assignments, and the output is correct. Assume now that there is a model s of φ different from 0 and 1. Then there exists a variable x such that s(x) = m(x), because s, being non-constant, agrees with the constant assignment m on at least one variable. It follows that \(\varphi ^{\prime }_{x}\) has a model s^{′} fulfilling \(\text {OPT}(\varphi ^{\prime }_{x})\leq \text {hd}(s^{\prime }, m^{\prime })<n\), where n is the number of variables of φ. But then the approximation algorithm must find two distinct models m_{1}≠m_{2} of \(\varphi ^{\prime }_{x}\) satisfying hd(m_{1},m_{2}) < n ⋅ (n^{k})^{1−ε} = n^{k(1−ε)+ 1}. Since we stipulated k > 1/ε, it follows that hd(m_{1},m_{2}) < n^{k}. Consequently, we have \(m_{2}\neq \overline {m_{1}}\) and the output of our algorithm is again correct.
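The amplification step can be illustrated on a toy instance. The relation R below is a hypothetical 0-valid and 1-valid relation, and the parameters n = 3, k = 2 are chosen merely to keep brute force feasible: the variable x receives n^{k} − n fresh copies, so a non-constant model of φ that agrees with m on x extends to a model of \(\varphi ^{\prime }_{x}\) at distance < n from m^{′}, while complementary constant models remain at distance n^{k}.

```python
from itertools import product

# Sketch of the Proposition 56 amplification on a toy instance.
R = {(0, 0), (0, 1), (1, 1)}                  # hypothetical 0- and 1-valid relation
phi = [(R, ("x", "u")), (R, ("u", "v"))]      # sample formula, n = 3 variables
n, k = 3, 2
copies = [f"x{i}" for i in range(n**k - n)]   # n^k - n fresh copies of x
phi_x = phi + [(R, (c, "u")) for c in copies] # duplicate every constraint on x
variables = ("x", "u", "v") + tuple(copies)

def models(formula, vs):
    for bits in product((0, 1), repeat=len(vs)):
        m = dict(zip(vs, bits))
        if all(tuple(m[w] for w in t) in rel for rel, t in formula):
            yield m

def hd(a, b):
    return sum(a[w] != b[w] for w in a)

m = {w: 0 for w in variables}                 # the constant model m, extended
s = {"x": 0, "u": 0, "v": 1, **{c: 0 for c in copies}}  # s(x) = m(x), extended
assert s in list(models(phi_x, variables))    # s' is a model of phi'_x
# Non-constant model stays close to m' (< n = 3), while the complementary
# constant models of phi'_x are at distance n^k = 9.
print(hd(s, m), n**k)  # 1 9
```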
Concluding Remarks
The problems investigated in this paper are quite natural. In the space of bit vectors we search for a solution of a formula that is closest to a given point, for a solution closest to a given solution, or for two solutions witnessing the smallest Hamming distance between any two solutions. Our results describe the complexity of exploring the solution space for arbitrary families of Boolean relations. Moreover, our problems generalize problems familiar from the literature: MinOnes, NearestCodeword, and DistanceSAT are special cases of our NearestSolution, while MinDistance is the same as our problem MinSolutionDistance when the latter is restricted to affine relations.
To prove the results, we first had to extend the notion of AP-reduction. The optimization problems considered in the literature have the property that each instance has at least one feasible solution. This is not the case when looking for solutions nearest to a given solution or to a prescribed Boolean tuple, as a formula may have just a single solution or no solution at all. Therefore we had to refine the notion of AP-reduction such that it correctly handles instances without feasible solutions.
The complexity of NearestSolution can be classified by the usual approach: We first show that for each constraint language the complexity of the problem does not change when admitting existential quantifiers and equality, and then check all finitely related clones according to Post’s lattice. This approach does not work for the problems NearestOtherSolution and MinSolutionDistance: It does not seem possible to show a priori that the complexity remains unaffected under such language extensions. In principle the complexity of a problem might well differ for two constraint languages Γ_{1} and Γ_{2} that represent the same co-clone (〈Γ_{1}〉 = 〈Γ_{2}〉) but differ with respect to partial polymorphisms (〈Γ_{1} ∪{≈}〉_{∧} ≠ 〈Γ_{2} ∪{≈}〉_{∧}). Theorems 4 and 5 finally show that this is not the case, but we learn this only a posteriori. Our method of proof fundamentally relies on irredundant weak bases, which seem to be the perfect fit for such a situation: a priori compatibility with existential quantification is not required, but it follows once the proof succeeds using weak bases alone.
Figure 4 compares the complexity classifications of the three problems. Regarding NearestSolution and NearestOtherSolution, knowing that an assignment is a solution apparently helps in finding a solution nearby. For expressive constraint languages it is NP-complete to decide whether a feasible solution exists at all; for NearestSolution this requires the existence of at least one satisfying assignment, while the other two problems need even two. Kann proved in [21] that MinOnes(Γ) is NPO PB-complete for 〈Γ〉 = BR, where NPO PB is the class of NPO problems with a polynomially bounded objective function. This result implies that NearestSolution(Γ) is NPO PB-complete for 〈Γ〉 = BR as well. It is unclear whether this result also holds for 〈Γ〉 = iN_{2}. It may be possible to find a suitable constraint language Γ^{′} satisfying 〈Γ^{′}〉 = BR such that MinOnes(Γ^{′}) reduces to NearestOtherSolution(Γ) for iI_{0} ⊆〈Γ〉 or iI_{1} ⊆〈Γ〉, thus proving that NOSol(Γ) is NPO PB-complete for these cases. Likewise, the NPO PB-hardness of MSD(Γ) for iN_{2} ⊆〈Γ〉 or iI_{0} ⊆〈Γ〉 or iI_{1} ⊆〈Γ〉 remains open for the time being.
References
Arora, S., Babai, L., Stern, J., Sweedyk, Z.: The hardness of approximate optima in lattices, codes, and systems of linear equations. J. Comput. Syst. Sci. 54(2), 317–331 (1997)
Aspvall, B., Plass, M.R., Tarjan, R.E.: A lineartime algorithm for testing the truth of certain quantified Boolean formulas. Inf. Process. Lett. 8(3), 121–123 (1979)
Ausiello, G., Crescenzi, P., Gambosi, G., Kann, V., MarchettiSpaccamela, A., Protasi, M.: Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties. Springer, Berlin (1999)
Bailleux, O., Marquis, P.: Some computational aspects of DistanceSAT. J. Autom. Reason. 37(4), 231–260 (2006)
Baker, K.A., Pixley, A.F.: Polynomial interpolation and the Chinese Remainder Theorem for algebraic systems. Math. Z. 143(2), 165–174 (1975)
Behrisch, M., Hermann, M., Mengel, S., Salzer, G.: Give me another one! In: Elbassioni, K., Makino, K. (eds.) Proceedings 26th International Symposium on Algorithms and Computation (ISAAC 2015), Nagoya (Japan), Lecture Notes in Computer Science, vol. 9472, pp. 664–676. Springer, Berlin (2015)
Behrisch, M., Hermann, M., Mengel, S., Salzer, G.: As close as it gets. In: Kaykobad, M., Petreschi, R. (eds.) Proceedings 10th International Workshop on Algorithms and Computation (WALCOM 2016), Kathmandu (Nepal), Lecture Notes in Computer Science, vol. 9627, pp. 222–235. Springer (2016)
Behrisch, M., Hermann, M., Mengel, S., Salzer, G.: The next whisky bar. In: Kulikov, A.S., Woeginger, G.J. (eds.) Proceedings 11th International Computer Science Symposium in Russia (CSR 2016), St. Petersburg (Russia), Lecture Notes in Computer Science, vol. 9691, pp. 41–56. Springer, Berlin (2016)
Böhler, E., Creignou, N., Reith, S., Vollmer, H.: Playing with Boolean blocks, part II: constraint satisfaction problems. ACM SIGACT News 35(1) (Complexity Theory Column 43), 22–35 (2004)
Böhler, E., Reith, S., Schnoor, H., Vollmer, H.: Bases for Boolean coclones. Inf. Process. Lett. 96(2), 59–66 (2005)
Creignou, N., Khanna, S., Sudan, M.: Complexity classifications of Boolean constraint satisfaction problems. In: SIAM Monographs on Discrete Mathematics and Applications, vol. 7. SIAM, Philadelphia (2001)
Creignou, N., Kolaitis, P.G., Zanuttini, B.: Structure identification of Boolean relations and plain bases for coclones. J. Comput. Syst. Sci. 74(7), 1103–1115 (2008)
Creignou, N., Vollmer, H.: Boolean constraint satisfaction problems: when does Post’s lattice help? In: Creignou, N., Kolaitis, P.G., Vollmer, H. (eds.) Complexity of Constraints—An Overview of Current Research Themes [Result of a Dagstuhl Seminar], Lecture Notes in Computer Science, vol. 5250, pp. 3–37. Springer, Berlin (2008)
Crescenzi, P., Rossi, G.: On the Hamming distance of constraint satisfaction problems. Theor. Comput. Sci. 288(1), 85–100 (2002)
Dalal, M.: Investigations into a theory of knowledge base revision. In: Shrobe, H.E., Mitchell, T.M., Smith, R.G. (eds.) Proceedings 7th National Conference on Artificial Intelligence (AAAI88), St. Paul (Minnesota, USA), pp. 475–479. AAAI Press/MIT Press (1988)
Dumer, I., Micciancio, D., Sudan, M.: Hardness of approximating the minimum distance of a linear code. IEEE Trans. Inf. Theory 49(1), 22–37 (2003)
Hochbaum, D.S., Megiddo, N., Naor, J., Tamir, A.: Tight bounds and 2-approximation algorithms for integer programs with two variables per inequality. Math. Program. 62(1–3), 69–83 (1993)
Janov, Ju.I., Mučnik, A.A.: [Existence of k-valued closed classes without a finite basis] (in Russian). Doklady Akademii Nauk SSSR 127(1), 44–46 (1959)
Jeavons, P., Cohen, D., Gyssens, M.: Closure properties of constraints. J. Assoc. Comput. Mach. 44(4), 527–548 (1997)
Juban, L.: Dichotomy theorem for the generalized unique satisfiability problem. In: Ciobanu, G., Păun, G. (eds.) Proceedings 12th Fundamentals of Computation Theory (FCT’99), Iaşi (Romania), Lecture Notes in Computer Science, vol. 1684, pp. 327–337. Springer, Berlin (1999)
Kann, V.: Polynomially bounded minimization problems that are hard to approximate. Nordic J. Comput. 1(3), 317–331 (1994)
Khanna, S., Sudan, M., Trevisan, L., Williamson, D.P.: The approximability of constraint satisfaction problems. SIAM J. Comput. 30(6), 1863–1920 (2000)
Lagerkvist, V.: Weak bases of Boolean coclones. Inf. Process. Lett. 114(9), 462–468 (2014)
Lau, D.: Function Algebras on Finite Sets: A Basic Course of ManyValued Logic and Clone Theory. Springer Monographs in Mathematics. Springer, Berlin (2006)
Pöschel, R., Kalužnin, L.A.: Funktionenund Relationenalgebren. Deutscher Verlag der Wissenschaften, Berlin (1979)
Romov, B.A.: The algebras of partial functions and their invariants. Cybernetics 17(2), 157–167 (1981). Translated from Kibernetika 17(2), 1–11 (1981)
Schaefer, T.J.: The complexity of satisfiability problems. In: Proceedings 10th Symposium on Theory of Computing (STOC’78), San Diego (California, USA), pp. 216–226 (1978)
Schnoor, H., Schnoor, I.: Partial polymorphisms and constraint satisfaction problems. In: Creignou, N., Kolaitis, P.G., Vollmer, H. (eds.) Complexity of Constraints—An Overview of Current Research Themes [Result of a Dagstuhl Seminar], Lecture Notes in Computer Science, vol. 5250, pp. 229–254. Springer, Berlin (2008)
Schnoor, I.: The Weak Base Method for Constraint Satisfaction. Dissertation, Gottfried Wilhelm Leibniz Universität Hannover (2008)
Schrijver, A.: Theory of Linear and Integer Programming. Wiley, New York (1986)
Vardy, A.: The intractability of computing the minimum distance of a code. IEEE Trans. Inf. Theory IT-43(6), 1757–1766 (1997)
Warshall, S.: A theorem on Boolean matrices. J. Assoc. Comput. Mach. 9(1), 11–12 (1962)
Acknowledgments
Miki Hermann was supported by ANR11ISO200301 Blanc International grant ALCOCLAN. Part of the work was done during his stay at the Wolfgang Pauli Institute (ICP, UMI CNRS 2842) in Vienna, Austria.
Research of Stefan Mengel was done during his postdoctoral stay in LIX at École Polytechnique. Supported by a QUALCOMM grant.
Gernot Salzer was supported by Austrian Science Fund (FWF) grant I836N23.
Funding
Open access funding provided by TU Wien (TUW).
This paper is the final version of a trilogy of conference papers Give Me Another One! [6] (ISAAC 2015), As Close As It Gets [7] (WALCOM 2016), and The Next Whisky Bar [8] (CSR 2016).
Behrisch, M., Hermann, M., Mengel, S. et al. Minimal Distance of Propositional Models. Theory Comput. Syst. 63, 1131–1184 (2019). https://doi.org/10.1007/s00224-018-9896-8
Keywords
 Constraint satisfaction problem
 Hamming distance
 Optimization problems
 Approximation