Approximate Counting of Minimal Unsatisfiable Subsets

Given an unsatisfiable formula F in CNF, i.e. a set of clauses, the problem of Minimal Unsatisfiable Subset (MUS) seeks to identify a minimal subset of clauses N ⊆ F such that N is unsatisfiable. The emerging viewpoint of MUSes as the root causes of unsatisfiability has led MUSes to find applications in a wide variety of diagnostic approaches. Recent advances in the identification and enumeration of MUSes have motivated researchers to discover applications that can benefit from rich information about the set of MUSes. One such extension is that of counting the number of MUSes. The current best approach for MUS counting is to employ a MUS enumeration algorithm, which often does not scale for cases with a reasonably large number of MUSes. Motivated by the success of hashing-based techniques in the context of model counting, we design the first approximate MUS counting procedure with (ε, δ) guarantees, called AMUSIC. Our approach avoids exhaustive MUS enumeration by combining the classical technique of universal hashing with advances in QBF solvers along with a novel usage of union and intersection of MUSes to achieve runtime efficiency.
Our prototype implementation of AMUSIC is shown to scale to instances that were clearly beyond the realm of enumeration-based approaches.


Introduction
Given an unsatisfiable Boolean formula F as a set of clauses {f1, f2, . . . , fn}, also known as conjunctive normal form (CNF), a set N of clauses is a Minimal Unsatisfiable Subset (MUS) of F iff N ⊆ F, N is unsatisfiable, and for each f ∈ N the set N \ {f} is satisfiable. Since MUSes can be viewed as representing the minimal reasons for the unsatisfiability of a formula, MUSes have found applications in a wide variety of domains, including diagnosis [45], debugging of ontologies [1], spreadsheet debugging [29], formal equivalence checking [20], and constrained counting and sampling [28]. As scalable techniques for the identification of MUSes appeared only about a decade and a half ago, the earliest applications primarily focused on a reduction to the identification of a single MUS or a small set of MUSes. With the improved scalability of MUS identification techniques, researchers have now sought to investigate extensions of MUSes and their corresponding applications. (Work done in part while the first author visited the National University of Singapore.) One such extension is MUS counting, i.e., counting the number of MUSes of F. Hunter and Konieczny [26], Mu [45], and Thimm [56] have shown that the number of MUSes can be used to compute different inconsistency metrics for general propositional knowledge bases.
In contrast to the progress in the design of efficient MUS identification techniques, work on MUS counting is still in its nascent stages. Reminiscent of the early days of model counting, the current approach for MUS counting is to employ a complete MUS enumeration algorithm, e.g., [3,12,34,55], to explicitly identify all MUSes. As noted in Sect. 2, there can be up to exponentially many MUSes of F w.r.t. |F|, and thus their complete enumeration can be practically intractable. Indeed, contemporary MUS enumeration algorithms often cannot complete the enumeration within a reasonable time [10,12,34,47]. In this context, one wonders: is it possible to design a scalable MUS counter without performing explicit enumeration of MUSes?
The primary contribution of this paper is a probabilistic counter, called AMUSIC, that takes in a formula F, a tolerance parameter ε, and a confidence parameter δ, and returns an estimate guaranteed to be within a (1 + ε)-multiplicative factor of the exact count with confidence at least 1 − δ. Crucially, for F defined over n clauses, AMUSIC explicitly identifies only O(log n · log(1/δ) · ε^{−2}) many MUSes even though the number of MUSes can be exponential in n.
The design of AMUSIC is inspired by recent successes in the design of efficient XOR hashing-based techniques [15,17] for the problem of model counting, i.e., given a Boolean formula G, compute the number of models (also known as solutions) of G. We observe that both problems are defined over a power-set structure. In MUS counting, the goal is to count MUSes in the power-set of F, whereas in model counting, the goal is to count models in the power-set that represents all valuations of variables of G. Chakraborty et al. [18,52] proposed an algorithm, called ApproxMC, for approximate model counting that also provides (ε, δ) guarantees. ApproxMC is currently in its third version, ApproxMC3 [52]. The base idea of ApproxMC3 is to partition the power-set into nCells small cells, then pick one of the cells, and count the number inCell of models in the cell. The total model count is then estimated as nCells × inCell. Our algorithm for MUS counting is based on ApproxMC3. We adopt the high-level idea to partition the power-set of F into small cells and then estimate the total MUS count based on the MUS count in a single cell. The difference between ApproxMC3 and AMUSIC lies in the way the target elements (models vs. MUSes) are counted in a single cell; we propose novel MUS-specific techniques to deal with this task. In particular, our contributions are the following:
– We introduce a QBF (quantified Boolean formula) encoding for the problem of counting MUSes in a single cell and use a Σ_3^P oracle to solve it.
– Let UMU_F and IMU_F be the union and the intersection of all MUSes of F, respectively. We observe that every MUS of F (1) contains IMU_F and (2) is contained in UMU_F. Consequently, if we determine the sets UMU_F and IMU_F, then we can significantly speed up the identification of MUSes in a cell.
– We propose novel approaches for computing the union UMU_F and the intersection IMU_F of all MUSes of F.
– We implement AMUSIC and conduct an extensive empirical evaluation on a set of scalable benchmarks. We observe that AMUSIC is able to compute estimates for problems clearly beyond the reach of existing enumeration-based techniques. We experimentally evaluate the accuracy of AMUSIC; in particular, we observe that the estimates computed by AMUSIC are significantly closer to the true count than the theoretical guarantees provided by AMUSIC.
Our work opens up several new interesting avenues of research. From a theoretical perspective, we make polynomially many calls to a Σ_3^P oracle while the problem of finding a MUS is known to be in FP^NP, i.e., a MUS can be found in polynomial time by executing a polynomial number of calls to an NP oracle [19,39]. Contrasting this to model counting techniques, where an approximate counter makes polynomially many calls to an NP oracle while the underlying problem of finding a satisfying assignment is NP-complete, a natural question is to close the gap and seek to design a MUS counting algorithm with polynomially many invocations of an FP^NP oracle. From a practitioner's perspective, our work calls for the design of MUS techniques with native support for XORs; the pursuit of native support for XORs in the context of SAT solvers has led to an exciting line of work over the past decade [52,53].

Preliminaries and Problem Formulation
A Boolean formula F = {f1, f2, . . . , fn} in conjunctive normal form (CNF) is a set of Boolean clauses over a set of Boolean variables Vars(F). A Boolean clause is a set {l1, l2, . . . , lk} of literals. A literal is either a variable x ∈ Vars(F) or its negation ¬x. A truth assignment I to the variables Vars(F) is a mapping Vars(F) → {1, 0}. A clause f ∈ F is satisfied by an assignment I iff I(x) = 1 for some positive literal x ∈ f or I(x) = 0 for some negative literal ¬x ∈ f. The formula F is satisfied by I iff I satisfies every f ∈ F; in such a case I is called a model of F. Finally, F is satisfiable if it has a model; otherwise F is unsatisfiable.
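For concreteness, these definitions translate directly into a few lines of code. The sketch below (our own illustration, not part of the paper) encodes literals as signed integers and checks satisfiability by brute force over all assignments:

```python
from itertools import product

# Literals as signed integers: x is +x, ¬x is -x (our own convention).
def satisfies(assignment, clause):
    # I satisfies a clause iff it makes at least one literal true.
    return any(assignment[abs(l)] == (l > 0) for l in clause)

def is_satisfiable(cnf):
    # Brute force over all 2^|Vars(F)| assignments (illustration only).
    variables = sorted({abs(l) for c in cnf for l in c})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(satisfies(assignment, c) for c in cnf):
            return True  # a model exists
    return False

# F = (x1) ∧ (¬x1 ∨ x2) ∧ (¬x2) is unsatisfiable:
F = [{1}, {-1, 2}, {-2}]
print(is_satisfiable(F))       # False
print(is_satisfiable(F[:2]))   # True, after dropping (¬x2)
```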
A QBF is a Boolean formula where each variable is either universally (∀) or existentially (∃) quantified. We write Q1···Qk-QBF, where Q1, . . . , Qk ∈ {∀, ∃}, to denote the class of QBFs with a particular type of alternation of the quantifiers, e.g., ∃∀-QBF or ∃∀∃-QBF. Every QBF is either true (valid) or false (invalid). The problem of deciding the validity of a formula in Q1···Qk-QBF with Q1 = ∃ is Σ_k^P-complete [43]. When it is clear from the context, we write just formula to denote either a QBF or a Boolean formula in CNF. Moreover, throughout the whole text, we use F to denote the input Boolean formula in CNF. Furthermore, we will use capital letters, e.g., S, K, N, to denote other CNF formulas, small letters, e.g., f, f1, fi, to denote clauses, and small letters, e.g., x, x′, y, to denote variables.
Given a set X, we write P(X) to denote the power-set of X, and |X| to denote the cardinality of X. Finally, we write Pr [O : P] to denote the probability of an outcome O when sampling from a probability space P. When P is clear from the context, we write just Pr [O].

Minimal Unsatisfiability
A set N ⊆ F is a minimal unsatisfiable subset (MUS) of F iff N is unsatisfiable and for each f ∈ N the set N \ {f} is satisfiable. Note that the minimality concept used here is set minimality, not minimum cardinality; therefore, there can be MUSes with different cardinalities. In general, there can be up to exponentially many MUSes of F w.r.t. |F| (see Sperner's theorem [54]). We use AMU_F to denote the set of all MUSes of F. Furthermore, we write UMU_F and IMU_F to denote the union and the intersection of all MUSes of F, respectively. Finally, note that every subset S of F can be expressed as a bit-vector over the alphabet {0, 1}; for example, if F = {f1, f2, f3, f4} and S = {f1, f4}, then the bit-vector representation of S is 1001.
A clause f ∈ F is necessary for F iff F is unsatisfiable and F \ {f} is satisfiable. Necessary clauses are sometimes also called transition [6] or critical [2] clauses. Note that a set N is a MUS iff every f ∈ N is necessary for N. Also, note that a clause f ∈ F is necessary for F iff f ∈ IMU_F. Example 1. We demonstrate the concepts on an example, illustrated in Fig. 1.
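As a toy illustration of these notions (using a brute-force satisfiability check and a signed-integer literal encoding of our own, not anything prescribed by the paper), one can enumerate AMU_F of a small formula directly from the characterization of a MUS via necessary clauses:

```python
from itertools import product, combinations

def sat(cnf):
    # Brute-force satisfiability check (exponential; illustration only).
    vs = sorted({abs(l) for c in cnf for l in c})
    for bits in product([False, True], repeat=len(vs)):
        a = dict(zip(vs, bits))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in cnf):
            return True
    return False

def is_mus(n):
    # N is a MUS iff N is unsatisfiable and every f in N is necessary
    # for N, i.e. N \ {f} is satisfiable.
    return not sat(n) and all(sat([c for c in n if c != f]) for f in n)

# F = (x1) ∧ (¬x1 ∨ x2) ∧ (¬x2) ∧ (x2 ∨ x3); literals are signed ints.
f1, f2, f3, f4 = frozenset({1}), frozenset({-1, 2}), frozenset({-2}), frozenset({2, 3})
F = [f1, f2, f3, f4]
# AMU_F: enumerate all subsets and keep the MUSes.  Here there is exactly
# one MUS, {f1, f2, f3}; f4 occurs in no MUS, so UMU_F = IMU_F = {f1, f2, f3}.
amu = [set(s) for r in range(1, len(F) + 1)
       for s in combinations(F, r) if is_mus(list(s))]
print(len(amu))  # 1
```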

Hash Functions
Let n and m be positive integers such that m < n. By H_xor(n, m) we denote the family of hash functions h : {0, 1}^n → {0, 1}^m defined via XOR constraints; in particular, the i-th output bit is given as h(y)[i] = a_{i,0} ⊕ (a_{i,1} ∧ y[1]) ⊕ · · · ⊕ (a_{i,n} ∧ y[n]), where a_{i,k} ∈ {0, 1} for every i ∈ {1, . . . , m} and k ∈ {0, . . . , n}, and y[k] denotes the k-th bit of y ∈ {0, 1}^n. To choose a hash function uniformly at random from H_xor(n, m), we randomly and independently choose the values of the a_{i,k}. It has been shown [24] that the family H_xor(n, m) is pairwise independent, also known as strongly 2-universal. In particular, let us denote by h ← H_xor(n, m) the probability space obtained by choosing a hash function h uniformly at random from H_xor(n, m). The property of pairwise independence guarantees that for all α_1, α_2 ∈ {0, 1}^m and for all distinct y_1, y_2 ∈ {0, 1}^n, we have Pr[h(y_1) = α_1 ∧ h(y_2) = α_2 : h ← H_xor(n, m)] = 2^{−2m}. We say that a hash function h ∈ H_xor(n, m) partitions {0, 1}^n into 2^m cells; a point y belongs to the cell α ∈ {0, 1}^m of h iff h(y) = α. Furthermore, given a hash function h ∈ H_xor(n, m) and a cell α ∈ {0, 1}^m of h, we define their prefix-slices: for every k ∈ {1, . . . , m}, the k-th prefix h^(k) of h is the hash function from H_xor(n, k) given by h^(k)(y)[i] = h(y)[i] for every i ∈ {1, . . . , k}, and the k-th prefix α^(k) of α is given by α^(k)[i] = α[i] for every i ∈ {1, . . . , k}. Intuitively, a cell α^(k) of h^(k) originates by merging the two cells of h^(k+1) that differ only in the last bit.
In our work, we use hash functions from the family H_xor(n, m) to partition the power-set P(F) of the given Boolean formula F into 2^m cells: a subset S ⊆ F belongs to the cell α of h iff h maps the bit-vector representation of S to α. Given h and α, we write AMU_{F,h,α} to denote the set of all MUSes of F that belong to the cell α of h.
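A small Python sketch of this machinery (our own illustrative code; function names such as sample_hxor are not from the paper) samples h ← H_xor(n, m), applies it to bit-vectors, and partitions P(F) for a formula with |F| = 4 clauses:

```python
import random

def sample_hxor(n, m, rng=random):
    # h is given by random bits a[i][k]; h(y)[i] = a[i][0] XOR
    # (a[i][1] AND y[1]) XOR ... XOR (a[i][n] AND y[n]).
    return [[rng.randrange(2) for _ in range(n + 1)] for _ in range(m)]

def apply_hash(h, y):
    # y is the bit-vector representation of a subset S of F.
    return tuple(row[0] ^ (sum(a & b for a, b in zip(row[1:], y)) % 2)
                 for row in h)

rng = random.Random(0)
h = sample_hxor(n=4, m=2, rng=rng)

# Partition P(F), |F| = 4, into 2^2 = 4 cells:
cells = {}
for s in range(16):
    y = [(s >> i) & 1 for i in range(4)]
    cells.setdefault(apply_hash(h, y), []).append(y)

# The 1st prefix-slice h^(1) keeps only the first XOR constraint and
# merges cells of h^(2) = h that differ in the last bit:
coarse = {}
for s in range(16):
    y = [(s >> i) & 1 for i in range(4)]
    coarse.setdefault(apply_hash(h[:1], y), []).append(y)
print(len(cells), len(coarse))
```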

Problem Definitions
In this paper, we are concerned with the following problems.

Name: (ε, δ)-#MUS problem
Input: A formula F, a tolerance ε > 0, and a confidence 1 − δ ∈ (0, 1].
Output: A number c such that Pr[ |AMU_F| / (1 + ε) ≤ c ≤ (1 + ε) · |AMU_F| ] ≥ 1 − δ.

The main goal of this paper is to provide a solution to the (ε, δ)-#MUS problem. We also deal with the MUS-membership problem (decide whether a given clause f ∈ F belongs to some MUS of F), the MUS-union problem (compute UMU_F), and the MUS-intersection problem (compute IMU_F), since these problems emerge in our approach for solving the (ε, δ)-#MUS problem. Finally, we do not focus on solving the (ε, δ)-#SAT problem; however, the problem is closely related to the (ε, δ)-#MUS problem.

Related Work
It is well-known (see, e.g., [21,36,51]) that a clause f ∈ F belongs to IMU_F iff f is necessary for F. Therefore, to compute IMU_F, one can simply check each f ∈ F for being necessary for F. We are not aware of any work that has focused on the MUS-intersection problem in more detail.
The MUS-union problem was recently investigated by Mencia et al. [42]. Their algorithm is based on gradually refining an under-approximation of UMU_F until the exact UMU_F is computed. Unfortunately, the authors experimentally show that their algorithm often fails to find the exact UMU_F within a reasonable time even for relatively small input instances (only an under-approximation is computed). In our work, we propose an approach that proceeds in the opposite direction: we start with an over-approximation of UMU_F and gradually refine the approximation to eventually obtain UMU_F. Another related study was conducted by Janota and Marques-Silva [30], who proposed several QBF encodings for solving the MUS-membership problem. Although they did not focus on finding UMU_F, one can clearly identify UMU_F by solving the MUS-membership problem for each f ∈ F.
As for counting the number of MUSes of F, we are not aware of any previous work dedicated to this problem. Yet, plenty of algorithms and tools have been proposed (e.g., [3,9,11,12,35,47]) for enumerating/identifying all MUSes of F. Clearly, if we enumerate all MUSes of F, then we obtain the exact value of |AMU_F|, and thus we also solve the (ε, δ)-#MUS problem. However, since there can be up to exponentially many MUSes w.r.t. |F|, MUS enumeration algorithms are often not able to complete the enumeration in a reasonable time and thus are not able to find the value of |AMU_F|.
Very similar to the (ε, δ)-#MUS problem is the (ε, δ)-#SAT problem. Both problems involve the same probabilistic and approximation guarantees. Moreover, both problems are defined over a power-set structure. In MUS counting, the goal is to count MUSes in P(F), whereas in model counting, the goal is to count models in P(Vars(F)). In this paper, we propose an algorithm for solving the (ε, δ)-#MUS problem that is based on ApproxMC3 [15,17,52]. In particular, we keep the high-level idea of ApproxMC3 for processing/exploring the power-set structure, and we propose new low-level techniques that are specific to MUS counting.

AMUSIC: A Hashing-Based MUS Counter
We now describe AMUSIC, a hashing-based algorithm designed to solve the (ε, δ)-#MUS problem. The name of the algorithm is an acronym for Approximate Minimal Unsatisfiable Subsets Implicit Counter. AMUSIC is based on ApproxMC3, which is a hashing-based algorithm for solving the (ε, δ)-#SAT problem. As such, while the high-level structure of AMUSIC and ApproxMC3 share close similarities, the two algorithms differ significantly in the design of core technical subroutines.
We first discuss the high-level structure of AMUSIC in Sect. 4.1. We then present the key technical contributions of this paper: the design of core subroutines of AMUSIC in Sects. 4.3, 4.4 and 4.5.

Algorithmic Overview
The main procedure of AMUSIC is presented in Algorithm 1. The algorithm takes as input a Boolean formula F in CNF, a tolerance ε > 0, and a confidence parameter δ ∈ (0, 1], and returns an estimate of |AMU_F| within the tolerance ε and with confidence at least 1 − δ.

Algorithm 1: AMUSIC(F, ε, δ) (fragment; only the final steps of the listing survive in this copy)
  ...
  (nCells, nSols) ← AMUSICCore(G, I_G, threshold, nCells)
  10 if nCells ≠ null then AddToList(C, nCells × nSols)
  11 return FindMedian(C)

Similar to ApproxMC3, we first check whether |AMU_F| is smaller than a specific threshold that is a function of ε. This check is carried out via a MUS enumeration algorithm, denoted FindMUSes, that returns a set Y of MUSes of F such that |Y| = min(threshold, |AMU_F|). If |Y| < threshold, the algorithm terminates, having identified the exact value of |AMU_F|. In a significant departure from ApproxMC3, AMUSIC subsequently computes the union (UMU_F) and the intersection (IMU_F) of all MUSes of F by invoking the subroutines GetUMU and GetIMU, respectively. Through the lens of the set representation of CNF formulas, we can view UMU_F as another CNF formula, G. Our key observation is that AMU_F = AMU_G (see Sect. 4.2); thus, instead of working with the whole F, we can focus only on G. The rest of the main procedure is similar to ApproxMC3, i.e., we repeatedly invoke the core subroutine called AMUSICCore. The subroutine attempts to find an estimate c of |AMU_G| within the tolerance ε. Briefly, to find the estimate, the subroutine partitions P(G) into nCells cells, then picks one of the cells, and counts the number nSols of MUSes in the cell. The pair (nCells, nSols) is returned by AMUSICCore, and the estimate c of |AMU_G| is then computed as nSols × nCells. There is a small chance that AMUSICCore fails to find the estimate; in such a case nCells = nSols = null. Individual estimates are stored in a list C. After the final invocation of AMUSICCore, AMUSIC computes the median of the list C and returns the median as the final estimate of |AMU_F|.
The total number of invocations of AMUSICCore is in O(log(1/δ)), which is enough to ensure the required confidence 1 − δ (details on the assurance of the (ε, δ) guarantees are provided in Sect. 4.2). We now turn to AMUSICCore, described in Algorithm 2. The partition of P(G) into nCells cells is made via a hash function h from H_xor(|G|, m), i.e., nCells = 2^m. The choice of m is a crucial part of the algorithm as it regulates the size of the cells. Intuitively, it is easier to identify all MUSes of a small cell; on the other hand, the use of small cells does not allow us to achieve a reasonable tolerance. Based on ApproxMC3, we choose m such that a cell given by a hash function h ∈ H_xor(|G|, m) contains almost threshold many MUSes.

Algorithm 2: AMUSICCore(G, I_G, threshold, prevNCells)
  1 Choose h at random from H_xor(|G|, |G| − 1)
  2 Choose α at random from {0, 1}^{|G|−1}
  3 nSols ← CountInCell(G, I_G, h, α, threshold)
  4 if nSols = threshold then return (null, null)
  5 mPrev ← log2 prevNCells
  6 (nCells, nSols) ← LogMUSSearch(G, I_G, h, α, threshold, mPrev)
  7 return (nCells, nSols)

In particular, the computation of AMUSICCore starts by choosing at random a hash function h from H_xor(|G|, |G| − 1) and a cell α at random from {0, 1}^{|G|−1}. Subsequently, the algorithm attempts to identify the m-th prefixes h^(m) and α^(m) of h and α, respectively, such that the cell α^(m) of h^(m) contains fewer than threshold many MUSes while the cell α^(m−1) of h^(m−1) contains at least threshold many MUSes. We also know that the cell α^(0), i.e., the whole P(G), contains at least threshold MUSes (see Algorithm 1). The algorithm first checks the finest cell α^(|G|−1); the check is carried out via a procedure CountInCell that returns the number nSols = min(|AMU_{G,h^(|G|−1),α^(|G|−1)}|, threshold). If nSols = threshold, then AMUSICCore fails to find the estimate of |AMU_G| and terminates. Otherwise, a procedure LogMUSSearch is used to find the required value of m together with the number nSols of MUSes in α^(m).
The implementation of LogMUSSearch is directly adopted from ApproxMC3 and thus we do not provide its pseudocode here (note that in ApproxMC3 the procedure is called LogSATSearch). We only briefly summarize two main ingredients of the procedure. First, it has been observed that the required value of m is often similar for repeated calls of AMUSICCore. Therefore, the algorithm keeps the value mPrev of m from the previous iteration and first tests values near mPrev. If none of the nearby values is the required one, the algorithm exploits that AMU_{G,h^(1),α^(1)} ⊇ · · · ⊇ AMU_{G,h^(|G|−1),α^(|G|−1)}, which allows it to find the required value of m via galloping search (a variation of binary search) while performing only O(log |G|) calls of CountInCell.
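The search for m can be sketched as follows (a schematic illustration only: count_in_cell stands for an abstract counting oracle, all names are ours, and a toy closed-form count replaces real MUS counts):

```python
def log_mus_search(count_in_cell, n, threshold, m_prev):
    """Find the smallest m in 1..n-1 whose cell holds fewer than
    `threshold` MUSes, exploiting that cell counts are nonincreasing
    in m.  Probe near m_prev first, then fall back to binary search."""
    lo, hi = 1, n - 1                     # invariant: answer lies in [lo, hi]
    # Galloping phase: test m_prev and its neighbours first.
    for m in (m_prev, m_prev + 1, max(m_prev - 1, 1)):
        if 1 <= m <= n - 1 and count_in_cell(m) < threshold:
            if m == 1 or count_in_cell(m - 1) >= threshold:
                return m, count_in_cell(m)
    # Binary-search phase: O(log n) further calls of count_in_cell.
    while lo < hi:
        mid = (lo + hi) // 2
        if count_in_cell(mid) < threshold:
            hi = mid
        else:
            lo = mid + 1
    return lo, count_in_cell(lo)

# Toy model: pretend the chosen cell at prefix m holds 64 >> m MUSes.
m, n_sols = log_mus_search(lambda m: 64 >> m, n=10, threshold=5, m_prev=3)
print(m, n_sols)  # smallest m with 64/2^m < 5 is m = 4 (count 4)
```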
Note that in ApproxMC3, the procedure CountInCell is called BSAT and is implemented via an NP oracle, whereas we use a Σ_3^P oracle to implement the procedure (see Sect. 4.3). The high-level functionality is the same: the procedures use up to threshold calls of the oracle to check whether the number of target elements (models vs. MUSes) in a cell is lower than threshold.

Analysis and Comparison with ApproxMC3
Following from the discussion above, there are three crucial technical differences between AMUSIC and ApproxMC3: (1) the implementation of the subroutine CountInCell in the context of MUSes, (2) the computation of the intersection IMU_F of all MUSes of F and its usage in CountInCell, and (3) the computation of the union UMU_F of all MUSes of F and the invocation of the underlying subroutines with G (i.e., UMU_F) instead of F. The usage of CountInCell can be viewed as a domain-specific instantiation of BSAT in the context of MUSes. Furthermore, we use the computed intersection of MUSes to improve the runtime efficiency of CountInCell. It is perhaps worth mentioning that prior studies have observed that over 99% of the runtime of ApproxMC3 is spent inside the subroutine BSAT [52]. Therefore, the runtime efficiency of CountInCell is crucial for the runtime performance of AMUSIC, and we discuss in detail, in Sect. 4.3, algorithmic contributions in the context of CountInCell, including the usage of IMU_F. We now argue that the replacement of F with G in line 4 in Algorithm 1 does not affect the correctness guarantees, which is stated formally below.

Lemma 1. For every G such that UMU_F ⊆ G ⊆ F, the following hold: (1) AMU_F = AMU_G and (2) IMU_F = IMU_G.

Proof.
(1) Since G ⊆ F, every MUS of G is also a MUS of F. In the other direction, every MUS of F is contained in the union UMU_F of all MUSes of F, and thus every MUS of F is also a MUS of G (⊇ UMU_F). (2) This follows immediately from (1), since the intersections of identical sets of MUSes coincide. Equipped with Lemma 1, we now argue that each run of AMUSIC can be simulated by a run of ApproxMC3 for an appropriately chosen formula. Given an unsatisfiable formula F = {f1, . . . , f_|F|}, let us denote by B_F a satisfiable formula such that: (1) Vars(B_F) = {x1, . . . , x_|F|} and (2) an assignment I to Vars(B_F) is a model of B_F iff the set {f_i ∈ F | I(x_i) = 1} is a MUS of F. Informally, models of B_F map one-to-one to MUSes of F. Hence, the size of the sets returned by CountInCell for F is identical to that of the corresponding BSAT for B_F. Since the analysis of ApproxMC3 only depends on the correctness of the size of the set returned by BSAT, we conclude that the answer computed by AMUSIC satisfies the (ε, δ) guarantees. Furthermore, observing that CountInCell makes threshold many queries to a Σ_3^P oracle, we can bound the time complexity: overall, AMUSIC makes O(log n · ε^{−2} · log(1/δ)) calls to a Σ_3^P oracle, where n = |F|. A few words are in order concerning the complexity of AMUSIC. As noted in Sect. 1, for a formula on n variables, approximate model counters make O(log n · (1/ε²) · log(1/δ)) calls to an NP oracle, whereas the complexity of finding a satisfying assignment is NP-complete. In our case, we make calls to a Σ_3^P oracle while the problem of finding a MUS is in FP^NP. Therefore, a natural direction of future work is to investigate the design of a hashing-based technique that employs an FP^NP oracle.

Counting MUSes in a Cell: CountInCell
In this section, we describe the procedure CountInCell. The input of the procedure is the formula G (i.e., UMU_F), the set I_G = IMU_G, a hash function h ∈ H_xor(|G|, m), a cell α ∈ {0, 1}^m, and the threshold value. The output is c = min(threshold, |AMU_{G,h,α}|).
The description is provided in Algorithm 3. The algorithm iteratively calls a procedure GetMUS that returns either a MUS M such that M ∈ AMU_{G,h,α} \ M, where M denotes the set of MUSes identified so far, or null if there is no such MUS. For each M, the value of c is increased and M is added to M. The loop terminates either when c reaches the value of threshold or when GetMUS fails to find a new MUS (i.e., returns null). Finally, the algorithm returns c.
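The loop can be sketched as follows (a schematic illustration only; get_mus stands for the abstract oracle, all names are ours, and a fixed pool of sets plays the role of the cell's MUSes):

```python
def count_in_cell(get_mus, threshold):
    """Repeatedly ask get_mus(found) for a MUS in the cell that is not
    yet in `found`; stop at `threshold` or when the oracle returns None."""
    found = []
    while len(found) < threshold:
        m = get_mus(found)          # one oracle call per iteration
        if m is None:
            break                   # the cell holds no further MUS
        found.append(m)
    return len(found)

# Toy oracle over a fixed pool of "MUSes" in the cell:
pool = [frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 4})]
oracle = lambda found: next((m for m in pool if m not in found), None)
print(count_in_cell(oracle, threshold=10))  # 3: cell holds fewer than 10
```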

GetMUS.
To implement the procedure GetMUS, we build an ∃∀∃-QBF formula MUSInCell such that each witness of the formula corresponds to a MUS from AMU_{G,h,α} \ M. The formula consists of several parts and uses several sets of variables that are described in the following.
The main part of the formula, shown in Eq. (3), introduces the first existential quantifier and a set P = {p1, . . . , p_|G|} of variables that are quantified by this quantifier. Note that each valuation I of P corresponds to a subset S of G; in particular, let us denote by I_{P,G} the set {f_i ∈ G | I(p_i) = 1}. The formula is built in such a way that a valuation I is a witness of the formula if and only if I_{P,G} is a MUS from AMU_{G,h,α} \ M. This property is expressed via three conjuncts, denoted inCell(P), unexplored(P), and isMUS(P), encoding that (i) I_{P,G} is in the cell α, (ii) I_{P,G} is not in M, and (iii) I_{P,G} is a MUS, respectively.
Recall that each of the m constraints of a hash function h ∈ H_xor(|G|, m) is an XOR of a subset of the variables (possibly together with the constant 1). The formula inCell(P), shown in Eq. (4), encodes that I_{P,G} is in the cell α by requiring, for every i ∈ {1, . . . , m}, that the i-th XOR constraint of h evaluated on P yields α[i]. To encode that we are not interested in MUSes from M, we can simply block all the valuations of P that correspond to these MUSes. However, we can do better. In particular, recall that if M is a MUS, then no proper subset and no proper superset of M can be a MUS; thus, we prune away all these sets from the search space. The corresponding formula is shown in Eq. (5).
The formula isMUS(P), encoding that I_{P,G} is a MUS, is shown in Eq. (6). Recall that I_{P,G} is a MUS if and only if I_{P,G} is unsatisfiable and every closest subset S of I_{P,G} is satisfiable, where a closest subset is one with |I_{P,G} \ S| = 1. We encode these two conditions using two subformulas denoted unsat(P) and noUnsatSubset(P).

isMUS(P ) = unsat(P ) ∧ noUnsatSubset(P )
The formula unsat(P), shown in Eq. (7), introduces the set Vars(G) of variables that appear in G and states that every valuation of Vars(G) falsifies at least one clause contained in I_{P,G}.
The formula noUnsatSubset(P), shown in Eq. (8), introduces a universally quantified set Q = {q1, . . . , q_|G|} of variables; analogously to P, each valuation I of Q corresponds to a subset I_{Q,G} of G, and the formula requires every closest subset I_{Q,G} of I_{P,G} to be satisfiable. The requirement that I_{Q,G} is satisfiable is encoded in Eq. (9). Since we are already reasoning about the satisfiability of G's clauses in Eq. (7), we introduce here a copy G′ of G where each variable x_i of G is substituted by its primed copy x′_i. Equation (9) states that there exists a valuation of Vars(G′) that satisfies all clauses of G′ that correspond to clauses of I_{Q,G}. Equation (10) encodes that I_{Q,G} is a closest subset of I_{P,G}. To ensure that I_{Q,G} is a subset of I_{P,G}, we add the clauses q_i → p_i. To ensure the closeness, we use cardinality constraints. In particular, we introduce another set R = {r1, . . . , r_|G|} of variables and enforce their values via r_i ↔ (p_i ∧ ¬q_i). Intuitively, the number of variables from R that are set to 1 equals |I_{P,G} \ I_{Q,G}|. Finally, we add cardinality constraints, denoted by exactlyOne(R), ensuring that exactly one r_i is set to 1. Note that instead of encoding a closest subset in Eq. (10), we could just encode that I_{Q,G} is an arbitrary proper subset of I_{P,G}, as this would still preserve the meaning of Eq. (6) that I_{P,G} is a MUS. Such an encoding would not require introducing the set R of variables and, at first glance, would also save the use of one existential quantifier. However, the whole formula would still be in the form of ∃∀∃-QBF due to Eq. (9) (which introduces the second existential quantifier). The advantage of using a closest subset is that we significantly prune the search space of the QBF solver. It is thus a matter of contemporary QBF solvers whether it is more beneficial to reduce the number of variables (by removing R) or to prune the search space via R.
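The constraint exactlyOne(R) admits many CNF realizations; one standard choice is the naive pairwise encoding, sketched below in Python over signed-integer literals (our own illustration; the paper does not commit to a particular cardinality encoding):

```python
from itertools import combinations

def exactly_one(rs):
    """Pairwise CNF encoding of exactlyOne(R): at least one r_i is true,
    and no two r_i are true simultaneously."""
    at_least = [list(rs)]                               # r_1 ∨ ... ∨ r_n
    at_most = [[-a, -b] for a, b in combinations(rs, 2)]  # ¬r_i ∨ ¬r_j
    return at_least + at_most

def r_definitions(ps, qs, rs):
    # Clausified r_i <-> (p_i AND NOT q_i); together with exactlyOne(R)
    # this enforces |I_{P,G} \ I_{Q,G}| = 1.
    cls = []
    for p, q, r in zip(ps, qs, rs):
        cls += [[-r, p], [-r, -q], [r, -p, q]]
    return cls

print(exactly_one([7, 8, 9]))
```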
For the sake of lucidity, we have not exploited the knowledge of IMU_G (I_G) while presenting the above equations. Since we know that every clause f ∈ IMU_G has to be contained in every MUS of G, we can fix the values of the variables {p_i | f_i ∈ IMU_G} to 1. This, in turn, significantly simplifies the equations and prunes away exponentially many (w.r.t. |IMU_G|) valuations of P, Q, and R that would otherwise need to be considered. To solve the final formula, we employ an ∃∀∃-QBF solver, i.e., a Σ_3^P oracle. Finally, one might wonder why we use our custom solution for identifying MUSes in a cell instead of employing one of the existing MUS extraction techniques. Conventional MUS extraction algorithms cannot be used to identify MUSes that are in a cell since the cell is not "continuous" w.r.t. set containment. In particular, assume that we have three sets of clauses, K, L, M, such that K ⊂ L ⊂ M. It can be the case that K and M are in the cell, but L is not. Contemporary MUS extraction techniques require the search space to be continuous w.r.t. set containment and thus cannot be used in our case.

Computing UMU F
We now turn our attention to computing the union UMU_F (i.e., G) of all MUSes of F. Let us start by describing the well-known concepts of autark variables and the lean kernel. A set A ⊆ Vars(F) of variables is an autark of F iff there exists a truth assignment to A such that every clause of F that contains a variable from A is satisfied by the assignment [44]. It holds that the union of two autark sets is also an autark set; thus, there exists a unique largest autark set (see, e.g., [31,32]). The lean kernel of F is the set of all clauses that do not contain any variable from the largest autark set. It is known that the lean kernel of F is an over-approximation of UMU_F (see, e.g., [31,32]), and several algorithms have been proposed, e.g., [33,38], for computing the lean kernel.
Algorithm. Our approach for computing UMU_F (shown in Algorithm 4, getUMU(F)) consists of two parts. First, we compute the lean kernel K of F to get an over-approximation of UMU_F, and then we gradually refine the over-approximation K until K is exactly the set UMU_F. The refinement is done by solving the MUS-membership problem for each f ∈ K. To solve the MUS-membership problem efficiently, we reveal a connection to necessary clauses, as stated in the following lemma.
Lemma 2. A clause f ∈ F belongs to UMU_F iff there exists a subset W of F such that f is necessary for W.

Proof. (⇐) If W is a subset of F and f ∈ W is a necessary clause for W, then f has to be contained in every MUS of W. Moreover, W has at least one MUS (necessity of f implies that W is unsatisfiable), and since W ⊆ F, every MUS of W is also a MUS of F; hence f ∈ UMU_F. (⇒) If f ∈ UMU_F, then f belongs to some MUS N of F, and f is necessary for the set W = N.
Our approach for computing UMU_F is shown in Algorithm 4. It takes as input the formula F and outputs UMU_F (denoted K). Moreover, the algorithm maintains a set M of MUSes of F. Initially, M = ∅ and K is set to the lean kernel of F; we use an approach by Marques-Silva et al. [38] to compute the lean kernel. At this point, we know that K ⊇ UMU_F ⊇ {f ∈ M | M ∈ M}. To find UMU_F, the algorithm iteratively determines, for each f ∈ K \ {f ∈ M | M ∈ M}, whether f ∈ UMU_F. In particular, for each f, the algorithm checks whether there exists a subset W of K such that f is necessary for W (Lemma 2). The task of finding W is carried out by a procedure checkNecessity(f, K). If there is no such W, then the algorithm removes f from K. Otherwise, if W exists, the algorithm finds a MUS of W and adds the MUS to the set M. Any available single-MUS extraction approach, e.g., [2,5,7,46], can be used to find the MUS.
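The refinement loop can be sketched as follows (a schematic illustration only: the three subroutines are passed in as callables with interfaces we assume for the sketch, and a toy instance stands in for the real lean-kernel, necessity-check, and MUS-extraction procedures):

```python
def get_umu(lean_kernel, check_necessity, extract_mus):
    """Start from the lean kernel K, an over-approximation of UMU_F,
    and refine it clause by clause (sketch of the getUMU refinement)."""
    K = set(lean_kernel)
    covered = set()            # clauses already known to lie in some MUS
    for f in list(K):
        if f in covered or f not in K:
            continue           # membership already decided
        W = check_necessity(f, frozenset(K))   # W ⊆ K with f necessary, or None
        if W is None:
            K.discard(f)       # f is necessary for no subset, so f ∉ UMU_F
        else:
            covered |= extract_mus(W)          # every clause of a MUS is in UMU_F
    return K

# Toy instance: K = {f1..f4}; the only MUS is {f1, f2, f3}, so f4 drops out.
mus = frozenset({"f1", "f2", "f3"})
check = lambda f, K: mus if f in mus and mus <= K else None
extract = lambda W: set(mus)
print(sorted(get_umu({"f1", "f2", "f3", "f4"}, check, extract)))
```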
To implement the procedure checkNecessity(f, K), we build a QBF formula that is true iff there exists a set W ⊆ K such that W is unsatisfiable and f is necessary for W. To represent W, we introduce a set S = {s_g | g ∈ K} of Boolean variables; each valuation I of S corresponds to a subset I_{S,K} of K defined as I_{S,K} = {g ∈ K | I(s_g) = 1}. Our encoding is shown in Eq. (11).

∃S, Vars(K). ∀Vars(K′).  s_f  ∧  ⋀_{g ∈ K \ {f}} (s_g → g)  ∧  ⋁_{g ∈ K} (s_g ∧ ¬g′)    (11)
The formula consists of three main conjuncts. The first conjunct ensures that f is present in I_{S,K}. The second conjunct states that I_{S,K} \ {f} is satisfiable, i.e., that there exists a valuation of Vars(K) that satisfies I_{S,K} \ {f}. Finally, the last conjunct expresses that I_{S,K} is unsatisfiable, i.e., that every valuation of Vars(K′) falsifies at least one clause of I_{S,K}. Since we are already reasoning about the variables of K in the second conjunct, in the third conjunct we use a primed version (a copy) K′ of K.
Alternative QBF Encodings. Janota and Marques-Silva [30] proposed three other QBF encodings for the MUS-membership problem, i.e., for deciding whether a given f ∈ F belongs to UMU F . Two of the three proposed encodings are typically inefficient; thus, we focus on the third encoding, which is the most concise among the three. The encoding, referred to as the JM encoding (after the initials of the authors), uses only two quantifier blocks in the form of an ∃∀-QBF, and it is only linear in size w.r.t. |F |. The underlying ideas behind the JM encoding and our encoding differ significantly. Our encoding is based on necessary clauses (Lemma 2), whereas JM exploits a connection to so-called Maximal Satisfiable Subsets. Both encodings use the same quantifier prefix; however, our encoding is smaller. In particular, the JM encoding uses 2 × (|Vars(F )| + |F |) variables, whereas ours uses only |F | + 2 × |Vars(F )| variables and leads to smaller formulas.

Implementation.
Recall that we compute UMU F to reduce the search space, i.e., instead of working with the whole F , we work only with G = UMU F . The soundness of this reduction is witnessed by Lemma 1 (Sect. 4.2). In fact, Lemma 1 shows that it is sound to reduce the search space to any G such that UMU F ⊆ G ⊆ F . Since our algorithm for computing UMU F involves repeatedly solving a Σ_2^P-complete problem, it can be very time-consuming. Therefore, instead of computing the exact UMU F , we optionally compute only an over-approximation G of UMU F . In particular, we set a (user-defined) time limit for computing the lean kernel K of F . Moreover, we use a time limit for executing the procedure checkNecessity(f, K); if the time limit is exceeded for a clause f ∈ K, we conservatively assume that f ∈ UMU F , i.e., we over-approximate.
Sparse Hashing and UMU F . The computation of UMU F is similar in spirit to the computation of an independent support of a formula, which is used to design sparse hash functions [16,28]. Briefly, given a Boolean formula H, an independent support of H is a set I ⊆ Vars(H) such that in every model of H, the truth assignment to I uniquely determines the truth assignment to Vars(H) \ I. Practically, an independent support can be used to reduce the search space in which a model counting algorithm searches for models of H. It is interesting to note that the state-of-the-art technique reduces the computation of an independent support of a formula in the context of model counting to the computation of a (Group) Minimal Unsatisfiable Subset (GMUS). Thus, a formal study of the computation of independent supports in the context of MUSes is an interesting direction for future work.
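The defining property of an independent support can be checked by brute force on tiny formulas; the sketch below is our own illustration of the definition (unrelated to the actual tools of [16,28]) and simply verifies that no two models of H agree on I but disagree elsewhere:

```python
from itertools import product

def eval_clause(clause, assign):
    # A clause (list of DIMACS-style literals) under a total assignment.
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def is_independent_support(H, I):
    """Check the defining property: in every model of H, the assignment to I
    uniquely determines the assignment to Vars(H) \\ I."""
    vs = sorted({abs(l) for c in H for l in c})
    completions = {}
    for bits in product([False, True], repeat=len(vs)):
        a = dict(zip(vs, bits))
        if all(eval_clause(c, a) for c in H):
            key = tuple(a[v] for v in vs if v in I)
            rest = tuple(a[v] for v in vs if v not in I)
            if completions.setdefault(key, rest) != rest:
                return False
    return True
```

For example, for H encoding x3 ↔ (x1 ∧ x2), the set {x1, x2} is an independent support, while {x3} is not (x3 = 0 has several completions).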

Computing IMU G
Our approach to compute the intersection IMU G (i.e., I G ) of all MUSes of G is composed of several ingredients. First, recall that a clause f ∈ G belongs to IMU G iff f is necessary for G. Another ingredient is the ability of contemporary SAT solvers to return, for a given formula N ⊆ G, either a model of N or an unsat core of N , i.e., a small, yet not necessarily minimal, unsatisfiable subset of N . The final ingredient is a technique called model rotation. The technique was originally proposed by Marques-Silva and Lynce [40], and it serves to discover new necessary clauses from already known ones. In particular, let f be a necessary clause for G and I : Vars(G) → {0, 1} a model of G \ {f }. Since G is unsatisfiable, the model I does not satisfy f . Model rotation attempts to alter I by switching, one by one, the Boolean assignment to the variables in Vars({f }). Each assignment I′ that originates from such an alteration of I necessarily satisfies f and falsifies at least one clause f′ ∈ G. If there is exactly one such clause f′, then f′ is necessary for G. An improved version of model rotation, called recursive model rotation, was later proposed by Belov and Marques-Silva [6], who noted that model rotation can be recursively performed on the newly identified necessary clauses.
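A minimal sketch of recursive model rotation, assuming clauses are frozensets of DIMACS-style literals; this is our simplified rendition of the technique of [40] and [6], not the implementation used by actual MUS extractors:

```python
def eval_clause(clause, assign):
    """A clause (frozenset of DIMACS-style literals) under a total assignment."""
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def recursive_model_rotation(G, f, I, necessary):
    """I is a model of G \\ {f}, so it falsifies the necessary clause f.
    Flipping any variable of f yields an assignment that satisfies f; if it
    falsifies exactly one other clause f2 of G, then f2 is necessary too,
    and the rotation recurses from f2."""
    for lit in f:
        J = dict(I)
        J[abs(lit)] = not J[abs(lit)]          # rotate one variable of f
        falsified = [c for c in G if not eval_clause(c, J)]
        if len(falsified) == 1 and falsified[0] not in necessary:
            f2 = falsified[0]
            necessary.add(f2)
            recursive_model_rotation(G, f2, J, necessary)
    return necessary
```

Each recursive step comes for free: it requires only clause evaluations, no additional SAT calls.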
Our approach for computing IMU G is shown in Algorithm 5. To find IMU G , the algorithm decides for each f whether f is necessary for G. In particular, the algorithm maintains two sets: a set C of candidate necessary clauses and a set K of already known necessary clauses. Initially, K is empty and C = G. At the end of the computation, C is empty and K equals IMU G . The algorithm works iteratively. In each iteration, the algorithm picks a clause f ∈ C and checks G \ {f } for satisfiability via a procedure checkSAT. Moreover, checkSAT returns either a model I or an unsat core core of G \ {f }. If G \ {f } is satisfiable, i.e., f is necessary for G, the algorithm employs recursive model rotation, denoted RMR(G, f, I), to identify a set R of additional necessary clauses. Subsequently, all the newly identified necessary clauses are added to K and removed from C.
In the other case, when G \ {f } is unsatisfiable, the set C is reduced to C ∩ core, since every necessary clause of G has to be contained in every unsatisfiable subset of G. Note that f ∉ core; thus, at least one clause is removed from C.
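Putting the ingredients together, Algorithm 5 can be sketched as follows, with a brute-force SAT call standing in for an incremental solver and greedy shrinking standing in for the solver-produced unsat core; all helper names are ours:

```python
from itertools import product

def eval_clause(clause, assign):
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def find_model(N, variables=None):
    """Brute-force stand-in for a SAT call: a model of N, or None if unsat."""
    vs = sorted({abs(l) for c in N for l in c}) if variables is None else variables
    for bits in product([False, True], repeat=len(vs)):
        a = dict(zip(vs, bits))
        if all(eval_clause(c, a) for c in N):
            return a
    return None

def shrink_core(N):
    """Greedy stand-in for the solver-reported unsat core of an unsat N."""
    core = list(N)
    for c in list(core):
        trial = [d for d in core if d != c]
        if find_model(trial) is None:
            core = trial
    return core

def rmr(G, f, I, necessary):
    """Recursive model rotation over clauses given as frozensets of literals."""
    for lit in f:
        J = dict(I)
        J[abs(lit)] = not J[abs(lit)]
        falsified = [c for c in G if not eval_clause(c, J)]
        if len(falsified) == 1 and falsified[0] not in necessary:
            necessary.add(falsified[0])
            rmr(G, falsified[0], J, necessary)
    return necessary

def intersection_of_muses(G):
    """Sketch of Algorithm 5: C holds candidates, K known necessary clauses."""
    vs = sorted({abs(l) for c in G for l in c})
    C, K = set(G), set()
    while C:
        f = next(iter(C))
        model = find_model([c for c in G if c != f], vs)
        if model is not None:             # G \ {f} satisfiable: f is necessary
            R = rmr(G, f, model, {f})
            K |= R
            C -= R
        else:                             # every necessary clause lies in core
            C &= set(shrink_core([c for c in G if c != f]))
    return K
```

Every iteration removes at least f from C (either via R or because f ∉ core), so the loop terminates after at most |G| iterations.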

Experimental Evaluation
We employed several external tools to implement AMUSIC. In particular, we use the QBF solver CAQE [49] for solving the QBF formula MUSInCell, the 2QBF solver CADET [50] for solving our ∃∀-QBF encoding while computing UMU F , and the QBF preprocessor QRATPre+ [37] for preprocessing/simplifying our QBF encodings. Moreover, we employ muser2 [7] for single-MUS extraction while computing UMU F , the MaxSAT solver UWrMaxSat [48] to implement the algorithm by Marques-Silva et al. [38] for computing the lean kernel of F , and, finally, the toolkit pysat [27] for encoding the cardinality constraints used in the formula MUSInCell. The tool, along with all benchmarks that we used, is available at https://github.com/jar-ben/amusic.

Objectives.
As noted earlier, AMUSIC is the first technique to (approximately) count MUSes without explicit enumeration. We demonstrate the efficacy of our approach via a comparison with two state-of-the-art techniques for MUS enumeration: MARCO [35] and MCSMUS [3]. Within a given time limit, a MUS enumeration algorithm either identifies the whole of AMU F , i.e., provides the exact value of |AMU F |, or identifies just a subset of AMU F , i.e., provides an under-approximation of |AMU F | with no approximation guarantees.
The objective of our empirical evaluation was two-fold: First, we experimentally examine the scalability of AMUSIC, MARCO, and MCSMUS w.r.t. |AMU F |. Second, we examine the empirical accuracy of AMUSIC.
Benchmarks and Experimental Setup. Given the lack of dedicated counting techniques, there is no sufficiently large set of publicly available benchmarks to perform a critical analysis of counting techniques. To this end, we focused on a recently emerging theme of evaluating SAT-related techniques on scalable benchmarks. In keeping with prior studies employing an empirical methodology based on scalable benchmarks [22,41], we generated a custom collection of CNF benchmarks. The benchmarks mimic requirements on multiprocessing systems. Assume that we are given a system with two groups (kinds) of processes, A = {a 1 , . . . , a |A| } and B = {b 1 , . . . , b |B| }, such that |A| ≥ |B|. The processes require resources of the system; however, the resources are limited. Therefore, there are restrictions on which processes can be active simultaneously. In particular, we have the following three types of mutually independent restrictions on the system:

- The first type of restriction states that "at most k − 1 processes from the group A can be active simultaneously", where k ≤ |A|.
- The second type of restriction enforces that "if no process from B is active then at most k − 1 processes from A can be active, and if at least one process from B is active then at most l − 1 processes from A can be active", where k, l ≤ |A|.
- The third type of restriction extends the second one. Moreover, we assume that a process from B can activate a process from A. In particular, for every b i ∈ B, we assume that when b i is active, then a i is also active.
We encode the three restrictions via three Boolean CNF formulas, R 1 , R 2 , R 3 . The formulas use three sets of variables: X = {x 1 , . . . , x |A| }, Y = {y 1 , . . . , y |B| }, and Z. The sets X and Y represent the Boolean information about the activity of processes from A and B: a i is active iff x i = 1 and b j is active iff y j = 1. The set Z contains additional auxiliary variables. Moreover, we introduce a formula ACT = (⋀_{x_i ∈ X} x_i) ∧ (⋀_{y_j ∈ Y} y_j) encoding that all processes are active. For each i ∈ {1, 2, 3}, the conjunction G i = R i ∧ ACT is unsatisfiable. Intuitively, every MUS of G i represents a minimal subset of processes that need to be active to violate the restriction. The number of MUSes in G 1 , G 2 , and G 3 grows combinatorially with |A|, |B|, k, and l. We generated G 1 , G 2 , and G 3 for the following values: 10 ≤ |A| ≤ 30, 2 ≤ |B| ≤ 6, |A|/2 ≤ k ≤ 3×|A|/2, and l = k − 1. In total, we obtained 1353 benchmarks (formulas) that range in size from 78 to 361 clauses, use from 40 to 152 variables, and contain from 120 to 1.7 × 10^9 MUSes.
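As an illustration of the first restriction type, the sketch below generates a tiny G 1 = R 1 ∧ ACT, encoding "at most k − 1 processes from A are active" naively with one blocking clause per k-subset of X (our stand-in for the cardinality encodings provided by pysat), and counts its MUSes by brute force. For this naive encoding, every MUS consists of one blocking clause plus the k activation units it mentions, so the count is C(|A|, k):

```python
from itertools import combinations, product
from math import comb

def benchmark_g1(nA, k):
    """Tiny G1 = R1 ^ ACT: one blocking clause per k-subset of X encodes
    "at most k-1 processes from A are active"; ACT activates everything."""
    X = range(1, nA + 1)
    R1 = [frozenset(-x for x in sub) for sub in combinations(X, k)]
    ACT = [frozenset({x}) for x in X]
    return R1 + ACT

def count_muses(F):
    """Exponential brute-force MUS counter, usable only for tiny formulas."""
    vs = sorted({abs(l) for c in F for l in c})
    def sat(N):
        for bits in product([False, True], repeat=len(vs)):
            a = dict(zip(vs, bits))
            if all(any((l > 0) == a[abs(l)] for l in c) for c in N):
                return True
        return False
    count = 0
    for r in range(1, len(F) + 1):
        for sub in map(list, combinations(F, r)):
            if not sat(sub) and all(sat([c for c in sub if c != d]) for d in sub):
                count += 1
    return count
```

The counter enumerates all clause subsets, so it is only a sanity check for the combinatorial growth, not a counting procedure.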
All experiments were run using a time limit of 7200 s and computed on an AMD EPYC 7371 16-core processor with 1 TB of memory, running Debian Linux 4.19.67-2. The values of ε and δ were set to 0.8 and 0.2, respectively.

Accuracy. Recall that to compute an estimate c of |AMU F |, AMUSIC performs multiple iterations of AMUSICCore to obtain a list C of estimates of |AMU F |, and then uses the median of C as the final estimate c. The more iterations are performed, the higher the confidence that c is within the required tolerance ε = 0.8, i.e., that |AMU F |/1.8 ≤ c ≤ 1.8 × |AMU F |. To achieve the confidence 1 − δ = 0.8, 66 iterations need to be performed. For 157 benchmarks, the algorithm was not able to finish even a single iteration, and only for 251 benchmarks did the algorithm finish all 66 iterations. For the remaining 945 benchmarks, at least some iterations were finished, and thus at least an estimate with a lower confidence was determined.
We illustrate the achieved results in Fig. 3. The figure consists of two plots. The plot at the bottom of the figure shows the number of finished iterations (y-axis) for individual benchmarks (x-axis). The plot at the top of the figure shows how accurate the MUS count estimates were. In particular, for each benchmark (formula) F , we show the ratio c/|AMU F |, where c is the final estimate (the median of the estimates from finished iterations). For benchmarks where all iterations were completed, the final estimate was always within the required tolerance, although we had only 0.8 theoretical confidence that this would be the case. Moreover, the achieved estimate never exceeded a tolerance of 0.1, which is much better than the required tolerance of 0.8. As for the benchmarks where only some iterations were completed, there is only a single benchmark where the tolerance of 0.8 was exceeded.
Scalability. The scalability of AMUSIC, MARCO, and MCSMUS w.r.t. the number of MUSes (|AMU F |) is illustrated in Fig. 4. In particular, for each benchmark (x-axis), the plot shows the estimate of the MUS count achieved by each algorithm (y-axis). The benchmarks are sorted by their exact MUS count. MARCO and MCSMUS were able to finish the MUS enumeration, and thus to provide the count, only for benchmarks that contained at most 10^6 and 10^5 MUSes, respectively. AMUSIC, on the other hand, was able to provide estimates of the MUS count even for benchmarks that contained up to 10^9 MUSes. Moreover, as we have seen in Fig. 3, the estimates are very accurate. Only for the 157 benchmarks where AMUSIC finished no iteration could it not provide any estimate.

Summary and Future Work
We presented a probabilistic algorithm, called AMUSIC, for approximate MUS counting that needs to explicitly identify only logarithmically many MUSes and yet still provides strong theoretical guarantees. The high-level idea is adopted from the model counting algorithm ApproxMC3: we partition the search space into small cells, count the MUSes in a single cell, and estimate the total count by scaling the count from the cell. The novelty lies in the low-level algorithmic parts that are specific to MUSes. Mainly, (1) we propose a QBF encoding for counting MUSes in a cell, (2) we exploit the MUS intersection to speed up the localization of MUSes, and (3) we utilize the MUS union to significantly reduce the search space. Our experimental evaluation showed that AMUSIC outperforms contemporary enumeration-based counters in scalability by several orders of magnitude. Moreover, the practical accuracy of AMUSIC is significantly better than what its theoretical guarantees promise.
Our work opens up several questions at the intersection of theory and practice. From a theoretical perspective, the natural question is to ask if we can design a scalable algorithm that makes polynomially many calls to an NP oracle. From a practical perspective, our work showcases interesting applications of QBF solvers with native XOR support. Since approximate counting and sampling are known to be inter-reducible, another line of work would be to investigate the development of an almost-uniform sampler for MUSes, which can potentially benefit from the framework proposed in UniGen [14,16]. Another line of work is to extend our MUS counting approach to other constraint domains where MUSes find an application, e.g., F can be a set of SMT [25] or LTL [4,8] formulas or a set of transition predicates [13,23].
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.