On the (Complete) Reasons Behind Decisions

Recent work has shown that the input-output behavior of some common machine learning classifiers can be captured in symbolic form, allowing one to reason about the behavior of these classifiers using symbolic techniques. This includes explaining decisions, measuring robustness, and proving formal properties of machine learning classifiers by reasoning about the corresponding symbolic classifiers. In this work, we present a theory for unveiling the reasons behind the decisions made by Boolean classifiers and study some of its theoretical and practical implications. At the core of our theory is the notion of a complete reason, which can be viewed as a necessary and sufficient condition for why a decision was made. We show how the complete reason can be used for computing notions such as sufficient reasons (also known as PI-explanations and abductive explanations), for determining decision and classifier bias, and for evaluating counterfactual statements such as "a decision will stick even if … because …". We present a linear-time algorithm for computing the complete reason behind a decision, assuming the classifier is represented by a Boolean circuit of appropriate form. We then show how the computed complete reason can be used to answer many queries about a decision in linear or polynomial time. We finally conclude with a case study that illustrates the various notions and techniques we introduced.


Introduction
Consider Fig. 1 which depicts how most machine learning systems are constructed today. We have a labeled dataset that is used to learn a classifier, which is commonly a neural network, a Bayesian network or a random forest. These classifiers are effectively functions that map instances to decisions. For example, an instance could be a loan application and the decision is whether to approve or decline the loan. There is now considerable interest in reasoning about the behavior of such systems. Explaining decisions is at the forefront of current interests: Why did you decline Maya's application? Quantifying the robustness of these decisions is also attracting a lot of attention: Would reversing the decision on Maya require many changes to her application? In some domains, one expects the learned classifiers to satisfy certain properties, like monotonicity, and there is again an interest in proving such properties formally. For example, can we guarantee that a loan applicant will be approved when the only difference they have with another approved applicant is their higher income? These interests, however, are challenged by the numeric nature of machine learning classifiers and the fact that these systems are often model-free, e.g., neural networks, so they appear as black boxes that are hard to analyze.
Even though these machine learning classifiers are learned from data and are numeric in nature, they often implement discrete decision functions. One can therefore extract these functions and represent them symbolically. The outcome of this process is normally a logical formula or a Boolean circuit that precisely captures the input-output behavior of the learned classifier, which can then be used to reason about its behavior, symbolically. This includes explaining decisions, measuring robustness and formally proving properties. For a concrete example, consider Fig. 2, which depicts one of the simplest machine learning systems: a Naïve Bayes classifier. We have a class variable P and three features B, U and S. Given an instance (patient) and their test results b, u and s, this classifier renders a decision by computing the posterior probability Pr(p | b, u, s) and then checking whether it passes a given threshold T. If it does, we declare a positive decision; otherwise, a negative decision. While this classifier is numeric and its decisions are based on probabilistic reasoning, it does induce a discrete decision function. In fact, the function is Boolean in this case as it maps the Boolean variables B, U and S, which correspond to test results, into a binary decision (yes or no). This observation was originally made in Chan and Darwiche (2003), which proposed the compilation of Naïve Bayes classifiers into symbolic decision graphs as shown in Fig. 2. The compilation process guarantees that for every instance, the decision made by the (probabilistic) Naïve Bayes classifier is identical to the one made by the (symbolic) decision graph. This compilation algorithm was recently extended to Bayesian network classifiers with tree structures (Shih et al. 2018) and later to Bayesian network classifiers with arbitrary structures (Shih et al. 2019a).
1 Certain classes of neural networks can also be compiled into, or reasoned about using, decision diagrams as shown in Shih et al. (2019b) and Shi et al. (2020).

Fig. 2 (caption) Compiling a Naïve Bayes classifier into a symbolic decision graph. To classify an instance using the decision graph, we start at the root node and repeat the following: if the feature we are at is positive, we follow the left edge; otherwise, we follow the right edge. We finally reach a leaf node, which determines the class of the given instance. The figure shows the path followed from the root to a leaf (no) for the instance (B = −ve), (U = +ve) and (S = −ve).

While Bayesian and neural networks are numeric in nature, random forests are not (at least the ones with majority voting). Hence, one can easily encode their input-output behavior using Boolean formulas. Since a random forest is an ensemble of decision trees, we first encode each decision tree into a Boolean formula. This is straightforward even in the presence of continuous variables as the learning algorithm discretizes variables by identifying a set of thresholds for each variable. 2 We then combine these formulas using a majority formula or circuit; see, e.g., (Audemard et al. 2020; Choi et al. 2020).
This methodology for reasoning about the behavior of machine learning classifiers has three dimensions: (1) the kind of machine learning classifiers we are reasoning about, (2) the symbolic representation we use to encode their input-output behavior, and (3) the class of queries we are interested in and how to compute them efficiently. We will not concern ourselves with the first dimension in this paper as we will assume that the input-output behavior has already been encoded symbolically. Hence, our discussion will be orthogonal to where the symbolic representation came from. As to the second dimension, one can encode input-output behavior using standard logical formulas, which is the approach we shall pursue. While logical formulas are sufficient for our treatment as far as semantics is concerned, we will use a particular class of logical representations for computational reasons: tractable Boolean circuits (Darwiche and Marquis 2002). 3 As for the third dimension, we will focus on developing a theory for reasoning about the decisions made by classifiers: What are the reasons behind them? How can they counterfactually change? And are they biased?
In the proposed theory, a classifier is a Boolean function. Its variables are called features, a particular input is called an instance, and the function output on some instance is called a decision. If the function outputs 1 on an instance, the instance and decision are said to be positive; otherwise, they are negative. Our main goal is to explain the decisions made by Boolean classifiers on specific instances by way of providing various insights into what caused these decisions. For some examples, consider Fig. 3, which depicts two classifiers (C1 and C2) for college admission, represented as Ordered Binary Decision Diagrams (OBDDs) (Bryant 1986), in which variables are binary and ordered similarly on any path from the root to a leaf. 4 Consider also Susan who passed the entrance exam, is a first-time applicant, has no work experience and a high GPA. Susan will be admitted by classifier C1. She also comes from a rich hometown and will be admitted by classifier C2. We can say that Susan was admitted by classifier C1 because she passed the entrance exam and has a high GPA. We can also say that one reason why classifier C2 admitted Susan is that she passed the entrance exam and has a high GPA (there are other reasons in this case). Moreover, we can say that classifier C2 would still admit Susan even if she did not have a high GPA because she passed the entrance exam and comes from a rich hometown. Finally, we can say that classifier C2 can make biased decisions: ones that are based on protected features. For example, it will make different decisions on two applicants who have the same characteristics except that one comes from a rich hometown and the other does not. We will also show that one can sometimes prove classifier bias by inspecting the reasons behind one of its unbiased decisions. We will give formal definitions and semantics for the statements exemplified above and show how to evaluate them algorithmically and efficiently.
As far as semantics is concerned, the main tool we will employ is Boolean logic and particularly the classical notion of prime implicants (Crama and Hammer 2011; Quine 1952; McCluskey 1956; Quine 1959). On the computational side, we will exploit tractable Boolean circuits as mentioned earlier (Darwiche and Marquis 2002), while providing some new fundamental results that further extend the reach of these circuits to computing explanations. At the core of our theory is the notion of the complete reason behind a decision, which can be viewed as a necessary and sufficient condition for why the decision was made. Most of what we shall discuss will be based on complete reasons, both semantically and computationally.
3 See Darwiche (2020) for a recent survey/tutorial on tractable Boolean circuits and their applications in AI. An alternative set of approaches abstracts the machine learning classifier into symbolic form and reasons about its behavior using SAT-based or SMT-based techniques; see, e.g., (Katz et al. 2017; Leofante et al. 2018; Narodytska et al. 2018; Ignatiev et al. 2019a).

4 OBDDs are one example of tractable Boolean circuits. They can be unfolded into Boolean circuits in linear time (Darwiche and Marquis 2002). Moreover, they allow some hard queries to be answered in polynomial time (Bryant 1986). Ordered decision diagrams do not have to be binary. For example, the algorithm of Chan and Darwiche (2003) compiled Naïve Bayes classifiers into ordered decision diagrams with discrete variables.

This paper is structured as follows. We start in Sect. 2 by reviewing some Boolean logic preliminaries including prime implicants. We then introduce the notion of complete reason and related notions such as sufficient reasons and necessary characteristics in Sects. 3-5. Counterfactual statements about decisions are discussed in Sect. 6, followed by a discussion of decision and classifier bias in Sect. 7. We dedicate Sect. 8 to algorithms that compute the introduced notions and then illustrate them in Sect. 9 using a case study. We finally close with some concluding remarks in Sect. 10.

Boolean Logic Preliminaries
A literal is a Boolean variable X (positive literal) or its negation ¬X (negative literal). A term is a consistent conjunction of literals (e.g., A ∧ ¬B ∧ C). A Disjunctive Normal Form (DNF) is a disjunction of terms (e.g., (A ∧ ¬B) ∨ (B ∧ C)). An instance is a term that includes precisely one literal for each Boolean variable. Term τi subsumes term τj iff τj ⊨ τi, where ⊨ denotes logical entailment. For example, term E ∧ ¬F subsumes term E ∧ ¬F ∧ G. We treat a term as the set of its literals so we may write τi ⊆ τj to also mean that τi subsumes τj. We will often refer to a literal as a characteristic and to a term τ as a property (of an instance that contains the term). We use τ̄ to denote the property resulting from negating every characteristic in property τ. We sometimes use a comma (,) instead of a conjunction (∧) when describing properties and instances (e.g., E, ¬F instead of E ∧ ¬F).
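These conventions can be mirrored in a small sketch (ours, not from the paper): literals as (variable, polarity) pairs and terms as sets of literals, so that subsumption is just set inclusion.

```python
# Sketch (ours, not from the paper): a literal is a (variable, polarity)
# pair; a term is a frozenset of literals; an instance mentions every
# variable exactly once. Subsumption is then just set inclusion.
def term(*literals):
    return frozenset(literals)

def subsumes(tau_i, tau_j):
    # τj ⊨ τi exactly when τi's literals are among τj's
    return tau_i <= tau_j

def negate(tau):
    # the property τ̄: negate every characteristic
    return frozenset((x, not polarity) for x, polarity in tau)

t1 = term(("E", True), ("F", False))                # E ∧ ¬F
t2 = term(("E", True), ("F", False), ("G", True))   # E ∧ ¬F ∧ G
print(subsumes(t1, t2), subsumes(t2, t1))  # True False
```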
We represent a classifier by a Boolean formula Δ whose models (i.e., satisfying assignments) correspond to positive instances. The negation of the formula characterizes negative instances. Classifiers C1 and C2 in Fig. 3 are represented by the following formulas:

Δ1 = E ∧ (¬F ∨ G ∨ W)    Δ2 = E ∧ (G ∨ R).

We use Δ(α) to denote the decision (0 or 1) of classifier Δ on instance α. That is, Δ(α) = 1 iff α ⊨ Δ and Δ(α) = 0 iff α ⊨ ¬Δ. We also define Δα = Δ if the decision is positive and Δα = ¬Δ if the decision is negative. This notation is critical and will be used frequently later. By definition, α ⊨ Δα and, for any two instances α and β, we have Δα = Δβ iff Δ(α) = Δ(β). Again, we use this observation frequently later.
An implicant τ of Boolean formula Δ is a term that satisfies Δ, i.e., τ ⊨ Δ. A prime implicant is an implicant that is not subsumed by any other implicant. For example, E ∧ ¬F ∧ G is an implicant of Δ1 but is not prime since it is subsumed by another implicant E ∧ ¬F, which happens to be prime. Classifier C1 has the following prime implicants: E ∧ ¬F, E ∧ G and E ∧ W. Classifier C2 has the following prime implicants: E ∧ G and E ∧ R. The set of prime implicants for a Boolean formula can be quite large, which motivated the notion of a prime implicant cover (Quine 1952; McCluskey 1956; Quine 1959). A set of terms τ1, …, τn is a prime implicant cover for Boolean formula Δ if each term τi is a prime implicant of Δ and τ1 ∨ … ∨ τn is equivalent to Δ. A cover may not include all prime implicants, with the missing ones called redundant. While covers can be useful computationally, they may not always be appropriate for explaining classifiers as they may lead to incomplete explanations (more on this later).
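Prime implicants of a small classifier can be enumerated by brute force, directly from the definition (a sketch of ours, not the paper's algorithm; the formula delta1 is an assumption, reconstructed to be consistent with the C1 examples in the text): a term is prime iff it entails the formula and no smaller implicant subsumes it.

```python
from itertools import combinations, product

# Brute-force sketch of prime implicant enumeration (ours). The formula
# delta1 is an assumption, reconstructed from the C1 examples in the text.
VARS = ["E", "F", "G", "W"]

def delta1(v):
    return v["E"] and (not v["F"] or v["G"] or v["W"])

def is_implicant(term, f):
    # a term is an implicant iff f holds under every completion of the term
    free = [x for x in VARS if x not in term]
    return all(f({**term, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))

def prime_implicants(f):
    primes = []
    for k in range(len(VARS) + 1):   # smaller terms first
        for names in combinations(VARS, k):
            for bits in product([False, True], repeat=k):
                term = dict(zip(names, bits))
                if is_implicant(term, f) and not any(
                        term.items() >= p.items() for p in primes):
                    primes.append(term)
    return primes

print(prime_implicants(delta1))  # E ∧ ¬F, E ∧ G and E ∧ W
```

Because smaller terms are enumerated first, a term is kept only if no previously found (hence smaller or incomparable) implicant subsumes it, which is exactly primality.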
We will make use of the conditioning operation on Boolean formulas. To condition formula Δ on term τ, denoted Δ|τ, is to replace every literal l in Δ with constant 1 (true) if l ∈ τ, and to replace it with constant 0 (false) if ¬l ∈ τ. For example, if Δ = (X ∧ Y) ∨ ¬Z, then Δ|Y = X ∨ ¬Z and Δ|¬Y = ¬Z. We will also use the existential quantification operation: ∃X Δ = (Δ|X) ∨ (Δ|¬X).
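Both operations have one-line counterparts when a formula is represented as a Python function over variable assignments (a sketch of ours, using an example formula that is not from the paper):

```python
# Sketch (ours): conditioning and existential quantification on a formula
# represented as a Python function over variable assignments.
def condition(f, var, value):
    # Δ|l: fix var to value regardless of the input assignment
    return lambda v: f({**v, var: value})

def exists(f, var):
    # ∃X Δ = (Δ|X) ∨ (Δ|¬X)
    return lambda v: condition(f, var, True)(v) or condition(f, var, False)(v)

def delta(v):
    # an example formula of ours: Δ = (X ∧ Y) ∨ ¬Z
    return (v["X"] and v["Y"]) or not v["Z"]

cond = condition(delta, "Y", True)   # Δ|Y  = X ∨ ¬Z
quant = exists(delta, "Z")           # ∃Z Δ = (X ∧ Y) ∨ 1, i.e. valid
print(cond({"X": False, "Y": False, "Z": True}))   # False
```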
In the next few sections, we introduce notions such as the sufficient and complete reasons behind a decision. We use these notions later to define decision and classifier bias in addition to giving semantics to counterfactual statements relating to decisions.

Sufficient Reasons
Prime implicants have been studied and utilized extensively in the AI and computer science literature. 5 However, their active utilization in explaining decisions is more recent, e.g., (Shih et al. 2018; Ignatiev et al. 2019a, b; Lindner and Möllney 2019). This recent utilization introduced a key connection to properties of instances that we highlight next and exploit computationally later.

Definition 1 (Sufficient Reason (Shih et al. 2018))
A sufficient reason for decision Δ(α) is a property of instance α that is also a prime implicant of Δα (recall Δα = Δ if the decision is positive and Δα = ¬Δ if the decision is negative).
A sufficient reason identifies characteristics of an instance that justify the decision: The decision will stick even if other characteristics of the instance were different. A sufficient reason is minimal: None of its strict subsets can justify the decision. A decision can have multiple sufficient reasons, sometimes a large number of them. 6 There is a key difference between prime implicants and sufficient reasons as the latter must be properties of the given instance. This has significant computational implications that we exploit in Sect. 8. Sufficient reasons were introduced in Shih et al. (2018) under the name of PI-explanations. They were also referred to as abductive explanations in Ignatiev et al. (2019a). 7 The new name we adopt is motivated by further distinctions that we draw later and was also used in Lindner and Möllney (2019). We will sometimes say "a reason" to mean "a sufficient reason."

Consider Greg, who passed the entrance exam, is not a first-time applicant, does not have a high GPA but has work experience (α = E, ¬F, ¬G, W). Classifier C1 admits Greg, a decision that can be explained using either of the following sufficient reasons: -Passed the entrance exam and is not a first-time applicant (E, ¬F).
-Passed the entrance exam and has work experience (E, W ).
Since Greg passed the entrance exam and has applied before, he will be admitted even if his other characteristics were different. Similarly, since Greg passed the entrance exam and has work experience, he will be admitted even if his other characteristics were different.
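Definition 1 lends itself to a brute-force sketch (ours, not the paper's algorithm): enumerate the sub-terms of the instance in order of size and keep those that entail Δα and are not subsumed by a smaller reason. The formula delta1 is an assumption, reconstructed to be consistent with the sufficient reasons stated for classifier C1.

```python
from itertools import combinations, product

# Brute-force sketch of Definition 1 (ours). The formula delta1 is an
# assumption, reconstructed from the stated examples for classifier C1.
VARS = ["E", "F", "G", "W"]

def delta1(v):
    return v["E"] and (not v["F"] or v["G"] or v["W"])

def implies(term, f):
    free = [x for x in VARS if x not in term]
    return all(f({**term, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))

def sufficient_reasons(f, instance):
    # prime implicants of Δα that are properties of the instance
    f_a = f if f(instance) else (lambda v: not f(v))   # Δα
    reasons = []
    for k in range(len(VARS) + 1):
        for names in combinations(VARS, k):
            term = {x: instance[x] for x in names}     # a property of α
            if implies(term, f_a) and not any(
                    term.items() >= r.items() for r in reasons):
                reasons.append(term)
    return reasons

greg = {"E": True, "F": False, "G": False, "W": True}
print(sufficient_reasons(delta1, greg))  # E ∧ ¬F and E ∧ W, as in the text
```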

Proposition 1 Every decision has at least one sufficient reason.
Proof Consider decision Δ(α). We have α ⊨ Δα, which means Δα is consistent and must have at least one prime implicant (the empty term if Δα is valid). Moreover, at least one of these prime implicants must be a property of instance α since α ⊨ Δα and since Δα is equivalent to the disjunction of its prime implicants. Hence, we have at least one sufficient reason for the decision.

5 The notion of a kernel diagnosis is defined for a given device behavior and is a minimal term representing the health of some device components. Any system state that is compatible with a kernel diagnosis is feasible under the given system behavior. Moreover, the set of kernel diagnoses characterizes all feasible system states under the given behavior.

6 The popular Anchor system (Ribeiro et al. 2018) can be viewed as computing approximations of sufficient reasons. The quality of these approximations has been evaluated on some datasets and corresponding classifiers in Ignatiev et al. (2019c), where an approximation is called optimistic if it is a strict subset of a sufficient reason and pessimistic if it is a strict superset of a sufficient reason.

7 In contrast to contrastive explanations, which were formalized based on Miller (2019). Contrastive explanations can be thought of as answering "why not" queries in contrast to the "why" queries answered by abductive explanations. A minimal hitting set duality between abductive and contrastive explanations was also shown.
A classifier may make the same decision on two instances but for different reasons (i.e., disjoint sufficient reasons). However, if two decisions on distinct instances share a sufficient reason, the decisions must be equal.
We will see later that sufficient reasons can provide insights about a classifier that go well beyond explaining its decisions.

Complete Reasons
A sufficient reason identifies a minimal property of an instance that can trigger a decision. The complete reason behind a decision characterizes all properties of an instance that can trigger the decision.

Definition 2 (Complete Reason)
The complete reason for a decision is the disjunction of all its sufficient reasons.
The complete reason for decision Δ(α) captures every property of instance α, and only properties of instance α, that can trigger the decision. It precisely captures why this particular decision was made on instance α.

Theorem 1 Let R be the complete reason for decision Δ(α). If instance β does not satisfy R and Δ(β) = Δ(α), then no sufficient reason for decision Δ(β) can be a property of instance α.
Proof Suppose β ⊭ R and Δ(β) = Δ(α). Then Δβ = Δα. Let τ be a sufficient reason for decision Δ(β). Then τ is a property of instance β and a prime implicant of both Δβ and Δα. If τ were a property of instance α, then τ would be a sufficient reason for decision Δ(α), giving τ ⊨ R and hence β ⊨ τ ⊨ R, a contradiction. Hence, τ cannot be a property of instance α.
We will sometimes say "the reason" to mean "the complete reason." Recall that we also say "a reason" to mean "a sufficient reason." According to Theorem 1, if the same decision is made on instances α and β, and if instance β does not satisfy the complete reason for decision Δ(α), then these decisions were made for different reasons.
Classifier C1 admits Greg (α = E, ¬F, ¬G, W) for the reason R = E ∧ (¬F ∨ W). Greg was admitted because he passed the entrance exam and satisfied one of two additional requirements: he applied before or has work experience. Classifier C1 also admits Susan (β = E, F, G, ¬W). Susan does not satisfy the reason R. There is one sufficient reason for admitting Susan: she passed the entrance exam and has a high GPA (E, G), which is not a property of Greg. Hence, classifier C1 admitted Greg and Susan for different reasons.
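Definition 2 can be checked by brute force (our sketch; delta1 is again an assumed formula consistent with the C1 examples): build the disjunction of Greg's sufficient reasons and test it against Susan's instance.

```python
from itertools import combinations, product

# Sketch (ours): the complete reason as the disjunction of sufficient
# reasons (Definition 2). delta1 is an assumed formula for classifier C1.
VARS = ["E", "F", "G", "W"]

def delta1(v):
    return v["E"] and (not v["F"] or v["G"] or v["W"])

def implies(term, f):
    free = [x for x in VARS if x not in term]
    return all(f({**term, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))

def sufficient_reasons(f, instance):
    f_a = f if f(instance) else (lambda v: not f(v))
    reasons = []
    for k in range(len(VARS) + 1):
        for names in combinations(VARS, k):
            term = {x: instance[x] for x in names}
            if implies(term, f_a) and not any(
                    term.items() >= r.items() for r in reasons):
                reasons.append(term)
    return reasons

def complete_reason(f, instance):
    # disjunction of all sufficient reasons
    reasons = sufficient_reasons(f, instance)
    return lambda v: any(all(v[x] == b for x, b in r.items()) for r in reasons)

greg  = {"E": True, "F": False, "G": False, "W": True}
susan = {"E": True, "F": True,  "G": True,  "W": False}
R = complete_reason(delta1, greg)   # equivalent to E ∧ (¬F ∨ W)
print(R(greg), R(susan))  # True False: Susan does not satisfy Greg's reason
```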
The complete reason behind a decision is unique up to logical equivalence and can be used to enumerate all of the decision's sufficient reasons.

Theorem 2 Let R be the complete reason for decision Δ(α). The prime implicants of R are the sufficient reasons for decision Δ(α).
Proof Let τ1, …, τn be the sufficient reasons for decision Δ(α) and hence R = τ1 ∨ … ∨ τn. The key observation is that each term τi is a property of instance α. Hence, for every two terms τi and τj, if term τi contains some literal X then term τj cannot contain literal ¬X. The DNF τ1 ∨ … ∨ τn is then closed under consensus. 8 Since no term τi subsumes another term τj, the DNF τ1 ∨ … ∨ τn contains all prime implicants of R. Hence, the prime implicants of complete reason R are precisely the sufficient reasons of decision Δ(α).
We will later use Theorem 2 to provide a new approach for enumerating sufficient reasons, compared to earlier approaches such as those reported in Shih et al. (2018), Ignatiev et al. (2019a).
We will close this section by further highlighting how the complete reason for a decision can be viewed as a necessary and sufficient condition for explaining the decision. Consider the complete reason R for decision Δ(α) and recall that it characterizes all properties of instance α that can trigger the decision: R is equivalent to the disjunction of all properties τ of instance α such that τ ⊨ Δα. The reason R is then a logical condition that triggers the decision (R ⊨ Δα). If the complete reason is weakened into a condition Rw that continues to trigger the decision (R ⊨ Rw ⊨ Δα), then Rw will admit properties not satisfied by instance α. Moreover, if it is strengthened into a condition Rs, then Rs is guaranteed to trigger the decision (Rs ⊨ R ⊨ Δα) but will not admit some properties of instance α that can trigger the decision. Hence, the complete reason R is a necessary and sufficient condition for explaining the decision on instance α.

Necessary Characteristics and Properties
The necessary property of a decision is a maximal property of an instance that is essential for explaining the decision on that instance.

Definition 3 (Necessary Characteristics and Properties)
A characteristic is necessary for a decision if it appears in every sufficient reason for the decision. The necessary property for a decision is the set of all its necessary characteristics.
The necessary property is unique but could be empty (when the decision has no necessary characteristics). If an instance ceases to satisfy one necessary characteristic, the corresponding decision is guaranteed to change.
If an instance ceases to satisfy more than one necessary characteristic, the decision does not necessarily change. However, if the decision sticks then it would be for completely different reasons.
Theorem 3 Let β be an instance that disagrees with instance α on at least one characteristic necessary for decision Δ(α). Decisions Δ(α) and Δ(β) must have disjoint sufficient reasons.
Proof Let σ be the necessary characteristics of decision Δ(α) that instances α and β disagree on. A sufficient reason τ of Δ(α) cannot be a property of instance β since σ ⊆ τ and β contains σ̄ (the negation of the characteristics in σ). Hence, τ cannot be a sufficient reason for decision Δ(β) and the two decisions must have disjoint sufficient reasons.

Consider a classifier Δ = (X ∧ Y ∧ Z) ∨ (¬X ∧ ¬Y ∧ Z) and the instance α = X, Y, Z. The decision Δ(α) is positive with X, Y, Z as the only sufficient reason. Hence, all three characteristics of α are necessary: Flipping any single characteristic of instance α will lead to a negative decision. However, flipping the two characteristics X and Y preserves the positive decision but leads to a new, single sufficient reason ¬X, ¬Y, Z.
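A brute-force sketch recovers the necessary property as the intersection of all sufficient reasons (Definition 3); the classifier Δ = (X ∧ Y ∧ Z) ∨ (¬X ∧ ¬Y ∧ Z) below is our reconstruction of the example.

```python
from itertools import combinations, product

# Sketch (our reconstruction of the example classifier):
# Δ = (X ∧ Y ∧ Z) ∨ (¬X ∧ ¬Y ∧ Z), with α = X, Y, Z.
VARS = ["X", "Y", "Z"]

def delta(v):
    return (v["X"] and v["Y"] and v["Z"]) or \
           (not v["X"] and not v["Y"] and v["Z"])

def implies(term, f):
    free = [x for x in VARS if x not in term]
    return all(f({**term, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))

def sufficient_reasons(f, instance):
    f_a = f if f(instance) else (lambda v: not f(v))
    reasons = []
    for k in range(len(VARS) + 1):
        for names in combinations(VARS, k):
            term = {x: instance[x] for x in names}
            if implies(term, f_a) and not any(
                    term.items() >= r.items() for r in reasons):
                reasons.append(term)
    return reasons

def necessary_property(f, instance):
    # Definition 3: the characteristics shared by all sufficient reasons
    reasons = sufficient_reasons(f, instance)
    common = set(reasons[0].items())
    for r in reasons[1:]:
        common &= set(r.items())
    return common

alpha = {"X": True, "Y": True, "Z": True}
beta  = {"X": False, "Y": False, "Z": True}   # α with X and Y flipped
print(sorted(necessary_property(delta, alpha)))  # all three characteristics
print(sufficient_reasons(delta, beta))           # the single reason ¬X, ¬Y, Z
```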
The complete reason for a decision has enough information to compute its necessary characteristics and property.

Proposition 4 A characteristic is necessary for a decision iff it is implied by the decision's complete reason.
Proof Follows from Definition 3 and Theorem 2.

Decision Counterfactuals
We mentioned Susan earlier who passed the entrance exam, is a first-time applicant, has a high GPA but no work experience (α = E, F, G, ¬W). Classifier C1 admits Susan because she passed the entrance exam and has a high GPA as this is the only sufficient reason for the decision. Greg was also admitted by this classifier. His application is similar to Susan's except that he applied before and has work experience (β = E, ¬F, G, W). The decision on Greg has multiple sufficient reasons so we cannot issue a "because" statement when explaining this decision.
Definition 4 (Because) Consider decision Δ(α) and property τ of instance α. We say the decision is made because τ if τ is the only sufficient reason for the decision.
Proposition 5 Consider decision Δ(α) and property τ of instance α. The decision is made because τ iff τ is the decision's complete reason.
Proof Follows from Definitions 1 and 2.
One may be interested in statements that provide insights into a decision beyond the reasons behind it. For example, we may want to know how the classifier may have decided an instance if some of its characteristics were to be different. An example of this is the statement we mentioned in Sect. 1 with regards to classifier C2: Susan would still be admitted even if she did not have a high GPA because she comes from a rich hometown and passed the entrance exam. This statement exemplifies counterfactuals of the following form: The decision will stick even if ρ̄ because τ, where ρ and τ are properties of the given instance. Recall that ρ̄ is the property which results from negating every characteristic in property ρ.
Definition 5 (Even-If-Because) Consider decision Δ(α) and properties ρ and τ of instance α. We say the decision sticks even if ρ̄ because τ if τ is the complete reason for decision Δ(β), where instance β is the result of replacing property ρ in instance α with property ρ̄.
The following result justifies the above definition.

Proposition 6 If decision Δ(α) sticks even if ρ̄ because τ, then Δ(β) = Δ(α), where instance β is as given in Definition 5; that is, the decision indeed sticks.

Proof Suppose decision Δ(α) sticks even if ρ̄ because τ, and let β be the described instance. By Definition 5, τ is the complete reason for decision Δ(β). Since τ is a property, it must be the only sufficient reason for decision Δ(β) by Theorem 2. Hence, τ is a property of instance β and must therefore be disjoint from property ρ since flipping the characteristics of ρ in instance α left property τ intact. Since property τ justifies decision Δ(β), τ ⊨ Δβ, and since τ is also a property of instance α, α ⊨ τ, we now have α ⊨ τ ⊨ Δβ and therefore Δ(β) = Δ(α).
Applicant Susan who we discussed earlier (α = E, F, G, ¬W , R) is admitted by classifier C 2 . The decision will stick even if Susan had a low GPA (¬G) because she comes from a rich hometown and passed the entrance exam (E, R). This statement is justified since E, R is the complete reason for decision Δ(β). Here, β = E, F, ¬G, ¬W , R is the result of replacing characteristic G by ¬G in instance α.
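This even-if-because check can be carried out by brute force (our sketch), with Δ2 = E ∧ (G ∨ R) assumed as a formula consistent with the text's description of classifier C2: flip G in Susan's instance and verify that E, R is the single sufficient reason, and hence the complete reason, for the resulting decision.

```python
from itertools import combinations, product

# Brute-force sketch of the even-if-because check. Δ2 = E ∧ (G ∨ R) is an
# assumed formula, consistent with the text's description of classifier C2.
VARS = ["E", "F", "G", "W", "R"]

def delta2(v):
    return v["E"] and (v["G"] or v["R"])

def implies(term, f):
    free = [x for x in VARS if x not in term]
    return all(f({**term, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))

def sufficient_reasons(f, instance):
    f_a = f if f(instance) else (lambda v: not f(v))
    reasons = []
    for k in range(len(VARS) + 1):
        for names in combinations(VARS, k):
            term = {x: instance[x] for x in names}
            if implies(term, f_a) and not any(
                    term.items() >= r.items() for r in reasons):
                reasons.append(term)
    return reasons

susan = {"E": True, "F": True, "G": True, "W": False, "R": True}
beta  = {**susan, "G": False}                 # flip ρ = G
reasons = sufficient_reasons(delta2, beta)
print(reasons)  # a single sufficient reason E, R: the complete reason is E ∧ R
```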
Jackie did not pass the entrance exam, is not a first time applicant, has a low GPA but has work experience (α = ¬E, ¬F, ¬G, W ). Jackie is denied admission by classifier C 1 . The decision will stick even if Jackie had a high GPA (G) because she did not pass the entrance exam (¬E). This statement is justified since ¬E is the complete reason for decision Δ(β), where β = ¬E, ¬F, G, W is the result of replacing characteristic ¬G by G in instance α.

Decision and Classifier Bias
Intuitively, a decision is biased if it depends on some protected features: ones that should not be used when making the decision (e.g., gender, zip code, or ethnicity). 9 We formalize bias next while making a distinction between classifier bias and decision bias. A classifier is biased if it makes some biased decisions, yet some of the other decisions it makes may still be unbiased. While classifier bias can always be detected by examining its decision function, we will show that it can sometimes be detected by examining the complete reason behind one of its unbiased decisions.
Definition 6 (Decision Bias)
A decision Δ(α) is biased iff Δ(α) ≠ Δ(β) for some instance β that disagrees with instance α only on protected features.
Bias can be positive or negative. For example, an applicant may be admitted because they come from a rich hometown, or may be denied admission because they did not come from a rich hometown. The following result provides a necessary and sufficient condition for detecting decision bias.

Theorem 5 A decision is biased iff each of its sufficient reasons contains at least one protected feature.
Proof We will show both directions of the theorem next.
Suppose every sufficient reason for decision Δ(α) contains at least one protected feature. Let X be these protected features and let τ be the characteristics of instance α that do not involve features X. Assume Δ(α) = Δ(β) for every instance β that agrees with instance α on characteristics τ (that is, β disagrees with α only on the protected features X). Term τ must then be an implicant of Δα and a subset σ of τ must be a prime implicant of Δα (could be τ itself). Since τ is a property of instance α, decision Δ(α) has sufficient reason σ that does not include a protected feature in X, which is a contradiction. Hence, Δ(α) ≠ Δ(β) for some instance β that disagrees with instance α on only protected features in X, and decision Δ(α) is biased.

Suppose now that some sufficient reason σ for decision Δ(α) contains no protected features. Any instance β that agrees with instance α on all unprotected features satisfies σ, so σ ⊨ Δα gives Δ(β) = Δ(α). The decision is then unaffected by changing protected features alone and is unbiased.
We emphasize that Theorem 5 does not require sufficient reasons to share protected features, only that each must contain at least one protected feature.
Consider classifier C3, which admits applicants who have a good GPA (G) as long as they pass the entrance exam (E), are male (M) or come from a rich hometown (R):

Δ3 = G ∧ (E ∨ M ∨ R). (1)

Bob has a good GPA, did not pass the entrance exam, is male and comes from a rich hometown (α = G, ¬E, M, R). He is admitted with two sufficient reasons: G, M and G, R. The decision is biased since each sufficient reason contains a protected feature (M or R). This classifier will not admit Nancy, who has similar characteristics but is not male and does not come from a rich hometown: β = G, ¬E, ¬M, ¬R. It will, however, admit Scott, who has the same characteristics as Nancy except that he is male: γ = G, ¬E, M, ¬R. Even though this classifier is biased, some of its decisions may be unbiased. If an applicant has a good GPA and passes the entrance exam (G, E), they will be admitted regardless of their protected characteristics. Moreover, if an applicant does not have a good GPA (¬G), they will be denied admission regardless of their other characteristics, including protected ones.
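Theorem 5 yields a direct bias test, sketched here by brute force (ours); Δ3 encodes the admission rule stated in the text: a good GPA plus one of exam, male, or rich hometown.

```python
from itertools import combinations, product

# Brute-force sketch of the Theorem 5 bias test. Δ3 encodes the stated
# admission rule for classifier C3.
VARS = ["G", "E", "M", "R"]

def delta3(v):
    return v["G"] and (v["E"] or v["M"] or v["R"])

def implies(term, f):
    free = [x for x in VARS if x not in term]
    return all(f({**term, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))

def sufficient_reasons(f, instance):
    f_a = f if f(instance) else (lambda v: not f(v))
    reasons = []
    for k in range(len(VARS) + 1):
        for names in combinations(VARS, k):
            term = {x: instance[x] for x in names}
            if implies(term, f_a) and not any(
                    term.items() >= r.items() for r in reasons):
                reasons.append(term)
    return reasons

def decision_biased(f, instance, protected):
    # Theorem 5: biased iff every sufficient reason mentions a protected feature
    return all(any(x in protected for x in r)
               for r in sufficient_reasons(f, instance))

bob  = {"G": True, "E": False, "M": True,  "R": True}
lisa = {"G": True, "E": True,  "M": False, "R": True}
print(decision_biased(delta3, bob,  {"M", "R"}),
      decision_biased(delta3, lisa, {"M", "R"}))  # True False
```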

Definition 7 (Classifier Bias)
A classifier is biased if at least one of its decisions is biased.
We emphasize again that a biased classifier may still make some unbiased decisions. As we show next, one can sometimes infer classifier bias by inspecting the sufficient reasons behind one of its unbiased decisions.

Theorem 6 A classifier is biased iff one of its decisions has a sufficient reason that includes at least one protected feature.
Proof We will next show both directions of the theorem.
Suppose classifier Δ is biased. By Definition 7, some decision Δ(α) is biased. By Theorem 5, every sufficient reason of decision Δ(α) must contain at least one protected feature, and by Proposition 1 the decision has at least one sufficient reason.

Suppose now that some decision Δ(α) has a sufficient reason that includes a protected feature X. Then X appears in a prime implicant of Δα, so Δα (and hence Δ) depends on feature X: there must exist two instances that disagree only on feature X yet receive different decisions. The decision on either of these instances is then biased, and the classifier is biased by Definition 7.
If decision Δ(α) has protected features in some but not all of its sufficient reasons, the decision is not biased according to Theorem 5. But classifier Δ is biased according to Theorem 6 as it will make a biased decision on some other instance β ≠ α.
Consider classifier C 3 in (1) and Lisa who has a good GPA, passed the entrance exam and comes from a rich hometown (G, E, ¬M, R). The classifier will admit Lisa for two sufficient reasons: G, E and G, R. The decision is unbiased: any applicant who has similar unprotected characteristics will be admitted. However, since one of the sufficient reasons contains a protected feature, the classifier is biased as it can make a biased decision on a different applicant. The proof of Theorem 6 suggests that the classifier will make different decisions on two applicants with a good GPA who disagree only on whether they come from a rich hometown. Nancy (G, ¬E, ¬M, ¬R) and Heather (G, ¬E, ¬M, R) are such applicants.
The following theorem shows how one can detect decision bias using the complete reason behind the decision. We will use this theorem (and Theorem 8) when discussing algorithms in Sect. 8.
Theorem 7 A decision is biased iff ∃X1, . . . , Xn R is not valid, where X1, . . . , Xn are all the unprotected features and R is the complete reason behind the decision.
Proof Let τ1, . . . , τn be the decision's sufficient reasons and hence R = τ1 ∨ . . . ∨ τn. Existentially quantifying variables Xi from a DNF is done by replacing their literals with 1. The result is valid iff some term τi contains only variables in X1, . . . , Xn. Hence, ∃X1, . . . , Xn R is not valid iff each term τi contains variables beyond X1, . . . , Xn (i.e., each sufficient reason contains protected features).
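This quantify-then-check-validity test is easy to trace on the running example. The sketch below applies it to Bob's decision on classifier C3 (the complete reason R = (G ∧ M) ∨ (G ∧ R) and the feature split are assumptions carried over from that example; terms are sets of positive literals for simplicity).

```python
from itertools import product

# Bob's complete reason on C3 as a DNF of sufficient reasons (assumed example).
reason_dnf  = [{'G', 'M'}, {'G', 'R'}]
unprotected = {'G', 'E'}
protected   = ['M', 'R']

def quantify(dnf, variables):
    # existentially quantifying variables from a DNF replaces their literals
    # with 1, i.e., simply drops them from every term
    return [term - variables for term in dnf]

def valid(dnf, variables):
    # brute-force validity check over the remaining variables
    return all(any(term <= {v for v in variables if asg[v]} for term in dnf)
               for asg in (dict(zip(variables, bits))
                           for bits in product([False, True], repeat=len(variables))))

q = quantify(reason_dnf, unprotected)   # terms reduce to [{'M'}, {'R'}]
print(valid(q, protected))              # False: the decision is biased (Theorem 7)
```

The quantified result is falsified by an applicant with ¬M, ¬R, so it is not valid and the decision is biased, matching the earlier analysis of Bob's decision.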
The following result shows how classifier bias can sometimes be detected based on the complete reason behind an unbiased decision.

Theorem 8 A classifier is biased if R|X ≢ R|¬X, where X is a protected feature and R is the complete reason for some decision.
Proof Given Theorems 2 and 6, it suffices to show that R|X ≢ R|¬X iff feature X appears in some prime implicant of R. Let τ1, . . . , τn be the prime implicants of R. Feature X appears either only positively or only negatively in these prime implicants since the terms τi are all properties of the same instance. Suppose without loss of generality that feature X appears positively in terms τi (if any). Then R|X is the disjunction of all terms τi with the literal X removed, while R|¬X is the disjunction of the terms τi that do not mention X. These two disjunctions are equivalent iff X appears in no prime implicant τi. Hence, R|X ≢ R|¬X iff X appears in some prime implicant of R. Theorem 8 also follows from Theorems 2 and 6 and a known result: a Boolean function depends on a variable X iff X appears in one of its prime implicants. We included the full proof for completeness.
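The dependence test of Theorem 8 amounts to comparing the two conditionings of R. A minimal sketch on Bob's complete reason for C3 (an assumed running example, with the protected rich-hometown feature renamed H to avoid clashing with the symbol R):

```python
from itertools import product

# Bob's complete reason: R = (G ∧ M) ∨ (G ∧ H), with H protected (assumption).
def R(g, m, h):
    return (g and m) or (g and h)

# R|H ≢ R|¬H iff the two conditionings disagree on some assignment to G, M.
disagree = any(R(g, m, True) != R(g, m, False)
               for g, m in product([False, True], repeat=2))
print(disagree)   # True: R depends on H, so H appears in a prime implicant
                  # of R and the classifier is biased (Theorem 8)
```

The disagreement witness is G = 1, M = 0: the reason holds given H but fails given ¬H.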

Computing Reasons and Related Queries
The enumeration of PI-explanations (sufficient reasons) was treated in Shih et al. (2018) by modifying an algorithm for computing prime implicant covers (see also Minato 1993). The modified algorithm optimizes the original one by integrating the instance into the prime implicant enumeration process, but we are unaware of a complexity bound for the original algorithm or its modification. Moreover, since the algorithm is based on prime implicant covers, it is incomplete. Consider classifier Δ = (X ∧ Z) ∨ (Y ∧ ¬Z), which has three prime implicants: (X ∧ Z), (Y ∧ ¬Z) and (X ∧ Y). The last prime implicant is redundant and may not be generated when computing a cover. Instance α = X, Y, Z leads to a positive decision with two sufficient reasons: (X ∧ Z) and (X ∧ Y). An algorithm based on covers may miss the sufficient reason (X ∧ Y) and is therefore incomplete. This can be problematic for queries that rely on examining all sufficient reasons, such as decision and classifier bias (Definitions 6 and 7). We next propose a new approach based on computing the complete reason R for a decision (Definition 2), which characterizes all sufficient reasons, and then using it to compute multiple queries. For example, we can enumerate all sufficient reasons using the reason R (Theorem 2). We can also use it to compute necessary characteristics (Proposition 4) and to detect decision bias (Theorem 7). Even classifier bias can sometimes be inferred directly using the reason R (Theorem 8), among other queries.
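The incompleteness example can be verified by brute force. The sketch below (not the cited algorithm, just a naive enumeration) recovers all three prime implicants of Δ = (X ∧ Z) ∨ (Y ∧ ¬Z), including the cover-redundant (X ∧ Y):

```python
from itertools import combinations, product

VARS = ['X', 'Y', 'Z']

def delta(a):
    return (a['X'] and a['Z']) or (a['Y'] and not a['Z'])

def is_implicant(term):
    # term is an implicant iff every completion of it satisfies delta
    free = [v for v in VARS if v not in term]
    return all(delta({**term, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))

primes = []
for k in range(1, len(VARS) + 1):           # shorter terms first
    for names in combinations(VARS, k):
        for bits in product([False, True], repeat=k):
            term = dict(zip(names, bits))
            if is_implicant(term) and not any(p.items() <= term.items() for p in primes):
                primes.append(term)

print(len(primes))                          # 3 prime implicants
print({'X': True, 'Y': True} in primes)     # True: the cover-redundant one
```

A prime implicant cover needs only (X ∧ Z) and (Y ∧ ¬Z) to capture Δ, which is why a cover-based enumeration can miss (X ∧ Y).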
Assuming the classifier is represented using a suitable tractable Boolean circuit, our approach will compute the complete reason for a decision in linear time regardless of how many sufficient reasons it may have (could be exponential). Moreover, it will ensure that the computed complete reason is represented by a tractable circuit, allowing us to answer many queries in polytime.

Computing Complete Reasons
Our approach for computing complete reasons requires the classifier Δ and its negation ¬Δ to be represented as Decision-DNNF circuits, which we define next.

Definition 8 (Decision-DNNF Circuit)
An NNF circuit is a Boolean circuit that has literals or constants as inputs and two types of gates: and-gates and or-gates. A DNNF circuit is an NNF circuit in which the subcircuits feeding into each and-gate share no variables. A Decision-DNNF circuit is a DNNF circuit in which every or-gate has exactly two inputs of the form X ∧ μ and ¬X ∧ ν, where X is a variable.

DNNF circuits were introduced in Darwiche (2001). Decision-DNNF circuits were identified in Darwiche (2005, 2007). OBDDs, which we discussed earlier, are a subset of Decision-DNNF circuits as one can convert an OBDD into a Decision-DNNF circuit in linear time. Figure 4 depicts an OBDD and its corresponding Decision-DNNF circuit. The circuit is obtained by mapping each OBDD node with variable X, high child μ and low child ν into the circuit fragment (X ∧ μ) ∨ (¬X ∧ ν) (two and-gates and one or-gate). For more on Decision-DNNF circuits and OBDDs, see Bryant (1986), Darwiche and Marquis (2002), Huang and Darwiche (2007) and Oztok and Darwiche (2014). One can obtain a Decision-DNNF circuit by compiling a Boolean formula in Conjunctive Normal Form (CNF) using systems such as c2d (Darwiche 2004) and d4 (Lagniez and Marquis 2017). One can also compile an OBDD from any Boolean formula using systems such as cudd. We compute the complete reason for a decision Δ(α) by applying two operations to a Decision-DNNF circuit for Δα: consensus, then filtering.
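The OBDD-to-Decision-DNNF mapping is mechanical. The sketch below uses an assumed tuple encoding (an OBDD node is (var, high, low) with True/False leaves; circuit nodes are ('lit', l), ('and', kids), ('or', kids)) and applies the fragment (X ∧ μ) ∨ (¬X ∧ ν) to each OBDD node:

```python
# Linear-time mapping: OBDD node (X, high, low) -> (X ∧ high) ∨ (¬X ∧ low).
def obdd_to_ddnnf(node):
    if isinstance(node, bool):
        return node
    var, high, low = node
    return ('or', [('and', [('lit', var), obdd_to_ddnnf(high)]),
                   ('and', [('lit', '-' + var), obdd_to_ddnnf(low)])])

def evaluate(n, asg):
    if isinstance(n, bool):
        return n
    if n[0] == 'lit':
        v = n[1].lstrip('-')
        return asg[v] != n[1].startswith('-')
    f = all if n[0] == 'and' else any
    return f(evaluate(k, asg) for k in n[1])

# OBDD for X ∧ Y: branch on X, then on Y.
obdd = ('X', ('Y', True, False), False)
circuit = obdd_to_ddnnf(obdd)
print(evaluate(circuit, dict(X=True, Y=True)),
      evaluate(circuit, dict(X=True, Y=False)))   # True False
```

Each and-gate conjoins a literal of the branching variable with a subcircuit over later variables only, so the result is decomposable as Definition 8 requires.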
Definition 9 (Consensus Circuit) The consensus circuit of Decision-DNNF circuit Γ is denoted consensus(Γ ) and obtained by adding input μ ∧ ν to every or-gate with inputs X ∧ μ and ¬X ∧ ν.

Figure 4 depicts a Decision-DNNF circuit and its consensus circuit (third from left).
The consensus operation adds four and-gates, denoted with double circles. A consensus circuit can be obtained from a Decision-DNNF circuit in linear time. We next discuss the filtering of a consensus circuit, which leads to a tractable circuit.
Filtering is defined only on consensus circuits and requires an instance that satisfies the consensus circuit (we are only interested in such instances); it replaces every literal that does not appear in the instance with constant 0. Figure 4 depicts an example. The filtered circuit is on the far right of the figure, where grayed-out nodes and edges can be dropped as a result of replacing literals with constant 0.
Filtering is also a linear time operation. Consensus preserves models (i.e., satisfying assignments of the circuit), but filtering drops some of them. We will characterize the models preserved by filtering after presenting two required results.
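A small end-to-end sketch of consensus followed by filtering, on the assumed tuple encoding from before, extended with ('dec', X, high, low) for a decision or-gate (X ∧ high) ∨ (¬X ∧ low); this is an illustration of Definition 9 and of filtering as described above, not the authors' implementation:

```python
def consensus(n):
    if isinstance(n, bool) or n[0] == 'lit':
        return n
    if n[0] == 'dec':                       # add the input high ∧ low (Definition 9)
        _, x, hi, lo = n
        hi, lo = consensus(hi), consensus(lo)
        return ('or', [('and', [('lit', x), hi]),
                       ('and', [('lit', '-' + x), lo]),
                       ('and', [hi, lo])])
    return (n[0], [consensus(k) for k in n[1]])

def filter_by(n, alpha):                    # alpha: set of literals, e.g. {'X', '-Y'}
    if isinstance(n, bool):
        return n
    if n[0] == 'lit':                       # a literal not in alpha becomes constant 0
        return n if n[1] in alpha else False
    return (n[0], [filter_by(k, alpha) for k in n[1]])

def evaluate(n, asg):
    if isinstance(n, bool):
        return n
    if n[0] == 'lit':
        v = n[1].lstrip('-')
        return asg[v] != n[1].startswith('-')
    if n[0] == 'dec':
        _, x, hi, lo = n
        return evaluate(hi, asg) if asg[x] else evaluate(lo, asg)
    f = all if n[0] == 'and' else any
    return f(evaluate(k, asg) for k in n[1])

# Decision-DNNF for Δ = (X ∧ Y) ∨ (¬X ∧ Z) and the instance α = X, Y, Z:
ddnnf  = ('dec', 'X', ('lit', 'Y'), ('lit', 'Z'))
reason = filter_by(consensus(ddnnf), {'X', 'Y', 'Z'})

# α satisfies its own reason, and so does β = ¬X, Y, Z (via the consensus
# term Y ∧ Z), but γ = X, ¬Y, Z does not.
print(evaluate(reason, dict(X=True,  Y=True,  Z=True)),
      evaluate(reason, dict(X=False, Y=True,  Z=True)),
      evaluate(reason, dict(X=True,  Y=False, Z=True)))   # True True False
```

The resulting circuit is equivalent to (X ∧ Y) ∨ (Y ∧ Z), which illustrates how filtering drops models of the consensus circuit that disagree with the instance on too much.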
Let Γ be a circuit that results from filtering by instance α. The circuit is monotone in the following sense: if the common literals between instances α and β are a subset of the common literals between instances α and γ, then β ⊨ Γ only if γ ⊨ Γ. For example, if α = X, Y, Z, β = ¬X, Y, ¬Z and γ = ¬X, Y, Z, then α and β agree on literals {Y} while α and γ agree on literals {Y, Z}, so the condition is met in this case.
Theorem 9 If circuit Γ results from filtering by instance α, then every literal l that appears in Γ also appears in α. Moreover, if γ ∩ α ⊇ β ∩ α and Γ(β) = 1, then Γ(γ) = 1.
Proof Filtering replaces every literal not in instance α with constant 0. Hence, every literal in the filtered circuit Γ is in α, which implies the first part. For the second part, suppose that γ ∩ α ⊇ β ∩ α and Γ(β) = 1. When evaluating circuit Γ at γ compared to β, the only literals that change values are the literals l1 ∈ γ \ β and l2 ∈ β \ γ. Literals l1 change values from 0 to 1 and literals l2 change values from 1 to 0. Changes to the values of literals l1 cannot decrease the output of circuit Γ since it is an NNF circuit. Literals l2 are not in α since γ ∩ α ⊇ β ∩ α, so they do not appear in circuit Γ and changes to their values do not matter. Hence, Γ(γ) = 1.
We also need the following result which identifies circuit models that are preserved by the filtering of a consensus circuit.
Suppose τ is a prime implicant of circuit Δ and α ⊨ τ. By Proposition 7, τ ⊨ Γ, so τ is an implicant of circuit Γ. If τ is not a prime implicant of Γ, we must have some term ρ ⊂ τ such that ρ ⊨ Γ. Then ρ ⊨ Δ since Γ ⊨ Δ, which means that τ is not a prime implicant of Δ, a contradiction. Hence, τ is a prime implicant of Γ.
The circuit reason(Δ, α) depends on the specific Decision-DNNF circuit Γ used to represent Δ α but will always have the same models.

Tractability of Reason Circuits
We next show that reason circuits are tractable. Since we represent the complete reason for a decision as a reason circuit, many queries relating to the decision can then be answered efficiently.
Definition 12 (Monotone) An NNF circuit is monotone if every variable appears only positively or only negatively in the circuit.
Reason circuits are filtered circuits and hence monotone as shown by Theorem 9. The following theorem mirrors what is known about monotone Boolean formulas, but we include it for completeness.

Theorem 11
The satisfiability of a monotone NNF circuit can be decided in linear time. A monotone NNF circuit can be negated and also conditioned in linear time to yield a monotone NNF circuit.
Proof The satisfiability of a monotone NNF circuit can be decided using the following procedure. Constant 0 is not satisfiable. Constant 1 and literals are satisfiable. An or-gate is satisfiable iff some of its inputs are satisfiable. An and-gate is satisfiable iff all of its inputs are satisfiable. All previous statements are always correct except the last one, which depends on monotonicity. Consider a conjunction μ ∧ ν and suppose every variable shared between the conjuncts appears either only positively or only negatively in both. Any model of μ can then be combined with any model of ν to form a model of μ ∧ ν. Hence, the conjunction is satisfiable iff each of the conjuncts is satisfiable. Conditioning replaces literals by constants, so it preserves monotonicity. To negate a monotone circuit, replace and-gates by or-gates, or-gates by and-gates and literals by their negations. Monotonicity is again preserved.

Algorithm 1 PI(Δ, α). Input: Decision-DNNF circuit Δ and instance α such that Δ(α) = 1. Output: prime implicants of circuit filter(consensus(Δ), α).
Given Theorem 11, the validity of a monotone NNF circuit can be decided in linear time (we check whether the negated circuit is unsatisfiable). We can also conjoin the circuit with a literal l in linear time to yield a monotone circuit since Δ ∧ l = (Δ|l) ∧ l.
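The procedure of Theorem 11 and the validity check above can be sketched in a few lines on the assumed tuple encoding used earlier (('lit', l), ('and', kids), ('or', kids), constants True/False):

```python
def sat(n):
    # one bottom-up pass; the and-gate rule is sound only for monotone circuits
    if isinstance(n, bool):
        return n
    if n[0] == 'lit':
        return True
    f = all if n[0] == 'and' else any
    return f(sat(k) for k in n[1])

def negate(n):
    # swap gate types and negate literals; preserves monotonicity
    if isinstance(n, bool):
        return not n
    if n[0] == 'lit':
        l = n[1]
        return ('lit', l[1:] if l.startswith('-') else '-' + l)
    return ('or' if n[0] == 'and' else 'and', [negate(k) for k in n[1]])

def valid(n):
    # valid iff the negation is unsatisfiable
    return not sat(negate(n))

monotone = ('or', [('and', [('lit', 'X'), ('lit', 'Y')]), ('lit', 'X')])
print(sat(monotone), valid(monotone))   # True False
```

The example circuit (X ∧ Y) ∨ X is satisfiable (set X to 1) but not valid (set X to 0), and both answers fall out of single linear passes.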
Variables can be existentially quantified from a monotone circuit in linear time, with the resulting circuit remaining monotone. This is critical for efficiently detecting decision bias as shown by Theorem 7.
Theorem 12 Replacing every literal of variable X with constant 1 in a monotone NNF circuit Γ yields a monotone NNF circuit equivalent to ∃X Γ .

Proof
If variable X appears only positively in circuit Γ, then Γ|¬X ⊨ Γ|X and ∃X Γ = (Γ|X) ∨ (Γ|¬X) ≡ Γ|X. If variable X appears only negatively in Γ, then Γ|X ⊨ Γ|¬X and ∃X Γ = (Γ|X) ∨ (Γ|¬X) ≡ Γ|¬X. In either case, the result is obtained by replacing the literals of X with constant 1. Variable X can therefore be existentially quantified by replacing its literals with constant 1.
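Theorem 12 in code, on the assumed tuple encoding used earlier (a sketch, not the authors' implementation):

```python
def forget(n, var):
    # existentially quantify var from a monotone NNF circuit by replacing
    # its literals with constant 1 (Theorem 12)
    if isinstance(n, bool):
        return n
    if n[0] == 'lit':
        return True if n[1].lstrip('-') == var else n
    return (n[0], [forget(k, var) for k in n[1]])

def ev(n, asg):
    if isinstance(n, bool):
        return n
    if n[0] == 'lit':
        v = n[1].lstrip('-')
        return asg[v] != n[1].startswith('-')
    f = all if n[0] == 'and' else any
    return f(ev(k, asg) for k in n[1])

circuit = ('or', [('and', [('lit', 'X'), ('lit', 'Y')]), ('lit', 'Z')])
q = forget(circuit, 'X')                 # (1 ∧ Y) ∨ Z, i.e., Y ∨ Z
print(ev(q, dict(Y=True, Z=False)))      # True: ∃X (X∧Y ∨ Z) holds when Y = 1
```

Since X appears only positively in the example, replacing its literal with 1 yields Γ|X, which is exactly ∃X Γ by the argument above.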

Computing Queries
We can now discuss algorithms. To compute the sufficient reasons for a decision Δ(α): get a Decision-DNNF circuit for Δα, transform it into a consensus circuit, filter it by instance α and finally compute the prime implicants of the filtered circuit. Algorithm 1 does this in place, that is, without explicitly constructing the consensus or filtered circuits. It assumes a positive decision (otherwise we pass ¬Δ).
Algorithm 1 uses subroutine cartesian_product which conjoins two DNFs by computing the Cartesian product of their terms. It also uses remove_subsumed to remove subsumed terms from a DNF.
Theorem 13 The call PI(Δ, α) to Algorithm 1 returns the prime implicants of circuit filter(consensus(Δ), α).
Proof Consensus and filtering are applied implicitly on Lines 10-11. Filtered circuits are monotone. We compute the prime implicants of a monotone circuit by converting it into a DNF and removing subsumed terms (Crama and Hammer 2011, Chapter 3). This is precisely what Algorithm 1 does.
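A Python sketch of the computation Algorithm 1 performs, on the assumed tuple encoding with ('dec', X, high, low) decision gates; the subroutine names cartesian_product and remove_subsumed come from the text, while the exact recursion structure is an assumption based on Definitions 9-11 (terms are frozensets of literal strings):

```python
def cartesian_product(p, q):
    # conjoin two DNFs (sets of frozenset terms) term by term
    return {a | b for a in p for b in q}

def remove_subsumed(terms):
    return {t for t in terms if not any(s < t for s in terms)}

def PI(delta, alpha, cache=None):
    """Prime implicants of filter(consensus(delta), alpha), computed in place."""
    if cache is None:
        cache = {}
    key = id(delta)
    if key in cache:
        return cache[key]
    if delta is False:
        r = set()                          # unsatisfiable subcircuit: no implicants
    elif delta is True:
        r = {frozenset()}                  # valid subcircuit: the empty term
    elif delta[0] == 'lit':
        r = {frozenset([delta[1]])} if delta[1] in alpha else set()
    elif delta[0] == 'and':
        r = {frozenset()}
        for kid in delta[1]:
            r = cartesian_product(r, PI(kid, alpha, cache))
    else:                                  # ('dec', X, high, low)
        _, x, hi, lo = delta
        p, q = PI(hi, alpha, cache), PI(lo, alpha, cache)
        r = cartesian_product(p, q)        # the consensus input high ∧ low
        if x in alpha:                     # branch literals that survive filtering
            r |= cartesian_product({frozenset([x])}, p)
        if '-' + x in alpha:
            r |= cartesian_product({frozenset(['-' + x])}, q)
    r = remove_subsumed(r)
    cache[key] = r
    return r

# Decision-DNNF for Δ = (X ∧ Y) ∨ (¬X ∧ Z) and instance α = X, Y, Z:
ddnnf = ('dec', 'X', ('lit', 'Y'), ('lit', 'Z'))
print(sorted(sorted(t) for t in PI(ddnnf, {'X', 'Y', 'Z'})))
# [['X', 'Y'], ['Y', 'Z']]
```

On the running example, the call returns the two sufficient reasons X ∧ Y and Y ∧ Z without ever materializing the consensus or filtered circuits, with the cache playing the role of Lines 1-2 of the pseudocode.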
Consider now a decision Δ(α) and its complete reason R = reason(Δ, α), which is a monotone NNF circuit. Let n be the size of circuit R and m be the number of features. We next show how to compute various queries using circuit R.

Sufficient Reasons. By Theorems 2 and 13, the call PI(Δα, α) to Algorithm 1 will return all sufficient reasons for decision Δ(α), assuming Δα is a Decision-DNNF circuit. The number of sufficient reasons can be exponential, but we can answer many questions about them without enumerating them directly, as shown below.

Necessary Property. By Proposition 4, characteristic (literal) l is necessary for decision Δ(α) iff R ⊨ l. This is equivalent to R|¬l being unsatisfiable, which can be decided in O(n) time given Theorem 11. The necessary property (all necessary characteristics) can then be computed in O(n · m) time.

Because Statements. To decide whether decision Δ(α) was made "because τ," we check whether property τ is the complete reason for the decision (Definition 4): τ ⊨ R and R ⊨ τ. We have τ ⊨ R iff (¬R)|τ is unsatisfiable. Moreover, R ⊨ τ iff R|¬l is unsatisfiable for every literal l in τ. All of this can be done in O(n · |τ|) time.

Even if, Because Statements. To decide whether decision Δ(α) would stick "even if ρ because τ," we flip the characteristics of instance α that disagree with property ρ to yield instance β (Definition 5). We then compute the complete reason for decision Δ(β) and check whether it is equivalent to τ. All of this can be done in O(n · |τ|) time.

Decision Bias. To decide whether decision Δ(α) is biased, we existentially quantify all unprotected features from circuit R and then check the validity of the result (Theorem 7). All of this can be done in O(n) time given Theorems 11 and 12.
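Two of these query routines, the necessary property and decision bias, can be sketched end to end on the assumed tuple encoding used throughout (a sketch under those assumptions, not the authors' implementation):

```python
def sat(n):                       # linear-time satisfiability (Theorem 11)
    if isinstance(n, bool):
        return n
    if n[0] == 'lit':
        return True
    f = all if n[0] == 'and' else any
    return f(sat(k) for k in n[1])

def negate(n):                    # swap gates and negate literals (Theorem 11)
    if isinstance(n, bool):
        return not n
    if n[0] == 'lit':
        l = n[1]
        return ('lit', l[1:] if l.startswith('-') else '-' + l)
    return ('or' if n[0] == 'and' else 'and', [negate(k) for k in n[1]])

def condition(n, lit):            # set literal lit to true in the circuit
    neg = lit[1:] if lit.startswith('-') else '-' + lit
    if isinstance(n, bool):
        return n
    if n[0] == 'lit':
        return True if n[1] == lit else (False if n[1] == neg else n)
    return (n[0], [condition(k, lit) for k in n[1]])

def forget(n, var):               # existential quantification (Theorem 12)
    if isinstance(n, bool):
        return n
    if n[0] == 'lit':
        return True if n[1].lstrip('-') == var else n
    return (n[0], [forget(k, var) for k in n[1]])

# Complete reason R = (X ∧ Y) ∨ (Y ∧ Z) for a decision on α = X, Y, Z.
R = ('or', [('and', [('lit', 'X'), ('lit', 'Y')]),
            ('and', [('lit', 'Y'), ('lit', 'Z')])])

# Necessary property: l is necessary iff R|¬l is unsatisfiable.
necessary = [l for l in ['X', 'Y', 'Z'] if not sat(condition(R, '-' + l))]
print(necessary)                  # ['Y']

# Decision bias (Theorem 7), taking Z as the only protected feature:
q = forget(forget(R, 'X'), 'Y')   # quantify out the unprotected features
print(not sat(negate(q)))         # True: q is valid, so the decision is unbiased
```

Here Y is the only literal appearing in every sufficient reason, so it is the only necessary characteristic; and quantifying the unprotected features leaves a valid circuit, so the decision is unbiased with respect to Z.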

A Case Study
We now consider a more refined admission classifier to illustrate the notions and concepts we introduced more comprehensively.
This classifier highly values passing the entrance exam and being a first-time applicant. However, it also gives significant leeway to students from a rich hometown. In fact, being from a rich hometown unlocks the only path to acceptance for those who failed the entrance exam. The classifier is depicted as an OBDD in Fig. 5. It corresponds to a Boolean formula that is not monotone (the previous classifiers we considered were all monotone), and some of its prime implicants are not essential (all prime implicants of a monotone formula are essential). We will consider applicants Scott, Robin and April in Fig. 6, where feature R is protected (whether the applicant comes from a rich hometown). The complete reasons for the decisions on these applicants are shown in Fig. 7. These are reason circuits produced as suggested by Definition 11, except that we simplified the circuits by propagating and removing constant values (a reason circuit is satisfiable as it must be satisfied by the instance underlying the decision).
The decision on applicant Scott is biased. To check this, we can existentially quantify the unprotected features E, F, G, W from the reason circuit in Fig. 7 and then check its validity (Theorem 7). Existential quantification is done by replacing the literals E, ¬F, G, W in the circuit with constant 1. The resulting circuit is not valid. We can also confirm decision bias by considering the sufficient reasons for this decision, all of which contain the protected feature R (Theorem 5). If we flip the protected characteristic R to ¬R, the decision will flip with the complete reason being ¬F, ¬R: Scott would be denied admission because he is not a first-time applicant and does not come from a rich hometown (Definition 4).
The decision on Robin is not biased. If we existentially quantify the unprotected features E, F, G, W from the reason circuit (by replacing their literals with constant 1), the circuit becomes valid. We can confirm this using the decision's sufficient reasons: two of them do not contain the protected feature, so the decision cannot be biased (Theorem 5). The decision will be the same on any applicant with the same characteristics as Robin except for the protected feature R. However, since some of the sufficient reasons contain a protected feature, the classifier must be biased (Theorem 6): it will make a biased decision on some other applicant. This illustrates how classifier bias can be inferred from the complete reason behind one of its unbiased decisions. This method is not complete though: the classifier may still be biased even if no protected feature appears in a sufficient reason for one of its decisions.
The decision on April is not biased even though the protected feature R appears in the reason circuit (the circuit is valid if we existentially quantify all features but R). Moreover, E, F are all the necessary characteristics for this decision (i.e., the necessary property). Flipping either of these characteristics will flip the decision. Recall that violating the necessary property may either flip the decision or change the reason behind it (Theorem 3) but flipping only one necessary characteristic is guaranteed to flip the decision (Proposition 3).
The decision on April would stick even if she were not to have work experience (¬W ) because she passed the entrance exam (E), has a good GPA (G) and is a first time applicant (F). April would be denied admission if she were to also violate one of these characteristics (Definition 5 and Proposition 3).
We close this section with an important remark. Even though most of the notions we defined are based on prime implicants, our proposed theory does not necessarily require the computation of prime implicants, which can be prohibitive. Reason circuits characterize all relevant prime implicants and can be obtained in linear time from Decision-DNNF circuits. They are also monotone, allowing one to answer many queries about the embedded prime implicants in polytime. This is a major contribution of this work.

Concluding Remarks
We introduced a theory for reasoning about the decisions of Boolean classifiers, which is based on the fundamental notion of complete reasons. We presented applications of the theory to explaining decisions, evaluating counterfactual statements about decisions and identifying decision and classifier bias. We showed that if classifiers are represented by Decision-DNNFs, which are a superset of OBDDs, then the complete reason for a decision can be computed in linear time and in the form of a tractable Boolean circuit that we called a reason circuit. We then presented linear-time and polytime algorithms for computing most of the introduced notions based on reason circuits. More recently, the notion of a complete reason was formulated using quantified Boolean logic and shown to be also computable efficiently when classifiers are represented by CNFs or SDDs (Darwiche and Marquis 2021). An SDD is a decision diagram that branches on formulas (sentences) instead of variables (SDD stands for Sentential Decision Diagram) (Darwiche 2011). SDDs are also a superset of OBDDs but they are not comparable to Decision-DNNFs in terms of succinctness (Bollig and Buttkus 2019;Beame and Liew 2015;Beame et al. 2013).
There has been significant interest recently in the computation and complexity of explanation queries, particularly sufficient reasons. This included investigations into the computation of shortest sufficient reasons, which are length-minimal instead of subset-minimal. For Naïve Bayes (and linear) classifiers, it was shown that one sufficient reason can be generated in log-linear time, and all sufficient reasons can be generated with polynomial delay. For decision trees, generating one sufficient reason was shown to be feasible in polynomial time (Izza et al. 2020). Later works showed the same complexity for decision graphs (Huang et al. 2021b) and some classes of tractable circuits (Audemard et al. 2020; Huang et al. 2021a). The generation of sufficient reasons for decision trees was also studied in Audemard et al. (2021b), including the generation of shortest sufficient reasons, which was shown to be hard even for a single reason. The generation of shortest sufficient reasons was also studied in a broader context that includes decision graphs and SDDs (Darwiche and Ji 2022). More general studies of complexity were conducted in Audemard et al. (2020) and Huang et al. (2021a), where classifiers were categorized based on the tractable circuits that represent them (Huang et al. 2021a) or the kinds of processing they permit in polynomial time (Audemard et al. 2020). The complexity of robustness queries and shortest sufficient reasons was studied in Barceló et al. (2020) for Boolean classifiers that correspond to decision graphs and neural networks with ReLU activation functions. A comprehensive study of complexity was presented recently in Audemard et al. (2021a) for a large set of explanation queries and classes of Boolean classifiers.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.