Abstract
Chain reduction enables reduced ordered binary decision diagrams (BDDs) and zero-suppressed binary decision diagrams (ZDDs) to each take advantage of the other's ability to symbolically represent Boolean functions in compact form. For any Boolean function, its chain-reduced ZDD (CZDD) representation will be no larger than its ZDD representation, and at most twice the size of its BDD representation. The chain-reduced BDD (CBDD) of a function will be no larger than its BDD representation, and at most three times the size of its CZDD representation. Extensions to the standard algorithms for operating on BDDs and ZDDs enable them to operate on the chain-reduced versions. Experimental evaluations on representative benchmarks for encoding word lists, solving combinatorial problems, and operating on digital circuits indicate that chain reduction can provide significant benefits in terms of both memory and execution time.
1 Introduction
Decision diagrams (DDs) encode sets of values in compact forms, such that operations on the sets can be performed on the encoded representation, without expanding the sets into their individual elements. In this paper, we consider two classes of decision diagrams: reduced ordered binary decision diagrams (BDDs) [4] and zero-suppressed binary decision diagrams (ZDDs) [11, 12]. These two representations are closely related to each other, with each achieving more compact representations for different classes of applications. We present extensions to both representations, such that BDDs can take advantage of the source of compaction provided by ZDDs, and vice versa.
Both BDDs and ZDDs encode sets of binary sequences of some fixed length n, defining a Boolean function over n variables. We can bound their relative sizes as follows. Suppose for some function, we encode it according to the different DD types. For function f, let T(f) indicate the number of nodes (including leaf nodes) in the representation of type T. Let \(R_{f}(T_1, T_2)\) denote the relative sizes when representing f using types \(T_1\) and \(T_2\):

\[ R_{f}(T_1, T_2) \;=\; \frac{T_1(f)}{T_2(f)} \]
Comparing BDDs and ZDDs, Knuth [9] has shown that for any function f:

\[ R_{f}(\mathrm{ZDD}, \mathrm{BDD}) \;\le\; n/2 + o(n) \qquad (1) \]
\[ R_{f}(\mathrm{BDD}, \mathrm{ZDD}) \;\le\; n/2 + o(n) \qquad (2) \]
As these bounds show, ZDDs may be significantly (a factor of n / 2) more compact than BDDs, or vice versa. In practice, the comparative advantage of one representation over the other can be very significant, given that the size of the data structure is often the limiting factor in the use of DDs.
In this paper, we introduce two new representations: chain-reduced ordered binary decision diagrams (CBDDs), and chain-reduced zero-suppressed binary decision diagrams (CZDDs). The key idea is to associate two levels with each node and to use such nodes to encode particular classes of linear chains found in BDDs and ZDDs. Chain reduction can be defined in terms of a set of reduction rules applied to BDDs and ZDDs, giving bounds for any function f:

\[ R_{f}(\mathrm{CBDD}, \mathrm{BDD}) \;\le\; 1 \qquad (3) \]
\[ R_{f}(\mathrm{CZDD}, \mathrm{ZDD}) \;\le\; 1 \qquad (4) \]
We show bounds on the relative sizes of the representations as:

\[ R_{f}(\mathrm{CBDD}, \mathrm{CZDD}) \;\le\; 3 + o(1) \qquad (5) \]
\[ R_{f}(\mathrm{CZDD}, \mathrm{BDD}) \;\le\; 2 + o(1) \qquad (6) \]
These relations are summarized in the diagram of Fig. 1. In this figure, each arc from type \(T_1\) to type \(T_2\) labeled by an expression E indicates that \(R_{f}(T_1, T_2) \le E + o(E)\). We also show these bounds are tight, by demonstrating parameterized families of functions that achieve the bounding factors of (5) and (6).
These results indicate that the two compressed representations will always be within a small constant factor (2 for CZDDs and 3 for CBDDs) of either a BDD or a ZDD representation. While one representation may be slightly more compact than the other, the relative advantage is bounded by a constant factor, and hence choosing between them is less critical.
This paper defines the two compressed representations, derives the bounds indicated in (5) and (6), and presents extensions of the core BDD and ZDD algorithms to their chained versions. It describes an implementation based on modifications of the CUDD BDD package [14]. It presents some experimental results and concludes with a discussion of the merits of chaining and possible extensions.
2 Related Work
In independent work, van Dijk and his colleagues devised a hybrid of BDDs and ZDDs they call tagged BDDs [6]. Their representation augments BDDs by associating a variable with each edge, in addition to the variable associated with each node, enabling them to represent both BDD and ZDD reductions along each edge. For any function, a tagged BDD is guaranteed to have no more nodes than either its BDD or its ZDD representation. They avoid the constant factor in node growth that CBDDs or CZDDs may require, at the cost of requiring storage for three variables per node (one for the node, and one for each of the outgoing edges) versus two. Choosing between their representation or ours depends on a number of implementation factors. Both achieve the larger goal of exploiting the reductions enabled by both BDDs and ZDDs.
3 BDDs and ZDDs
Both BDDs and ZDDs encode sets of binary sequences of length n as directed acyclic graphs with two leaf nodes, labeled with values \(\mathbf{0}\) and \(\mathbf{1}\), which we refer to as "leaf \(\mathbf{0}\)" and "leaf \(\mathbf{1}\)," respectively. Each non-leaf node v has an associated level l, such that \(1 \le l \le n\), and two outgoing edges, labeled \({ lo}\) and \({ hi}\), each directed to either a leaf node or a non-leaf node. By convention, leaf nodes have level \(n+1\). An edge from v to node u having level \(l'\) must have \(l < l'\).
Figure 2 shows three decision-diagram representations of the set S, defined as:
The \({ lo}\) edge from each node is shown as a dashed line, and the \({ hi}\) edge is shown as a solid line. As a shorthand, we omit leaf \(\mathbf{0}\) and all branches to it.
Graph A represents S as a levelized binary decision diagram, where an edge from a node with level l must connect to either leaf \(\mathbf{0}\) or to a node with level \(l+1\). Each path from the root to leaf \(\mathbf{1}\) encodes an element of set S. For a given path, the represented sequence has value 0 at position l when the path follows the \({ lo}\) edge from the node with level l and value 1 when the path follows the \({ hi}\) edge.
Graph A has nodes forming two linear chains: a don't-care chain, consisting of nodes a and b, and an or chain, consisting of nodes d, e, and f. A don't-care chain is a series of don't-care nodes, each having its two outgoing edges directed to the same next node. In terms of the set of represented binary sequences, a don't-care node with level l allows both values 0 and 1 at sequence position l. An or chain consists of a sequence of nodes where the outgoing \({ hi}\) edges all go to the same node—in this case, leaf \(\mathbf{0}\). An or chain where all \({ hi}\) edges lead to leaf \(\mathbf{0}\) has only a single path, assigning value 0 to the corresponding positions in the represented sequence. We will refer to this special class of or chain as a zero chain.
BDDs and ZDDs differ from each other in the interpretations they assign to a level-skipping edge, when a node with level l has an edge to a node with level \(l'\) such that \(l + 1 < l'\). For BDDs, such an edge is considered to encode a don't-care chain. Thus, graph B in Fig. 2 shows a BDD encoding set S. The edge on the left from level 1 to level 4 is equivalent to the don't-care chain formed by nodes a and b of graph A. For ZDDs, a level-skipping edge encodes a zero chain. Thus, graph C shows a ZDD encoding set S. The edge on the right from level 1 to the leaf encodes the zero chain formed by nodes d, e, and f of graph A. Whether the set is encoded as a BDD or a ZDD, one type of linear chain remains. Introducing chain reduction enables BDDs and ZDDs to exploit both don't-care and or (and therefore zero) chains to compress their representations.
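The two interpretations of a level-skipping edge arise from the two standard reduction rules. The following is a minimal Python sketch of those rules; the hash-consing table and the tuple encoding (level, hi, lo), with leaves as the strings "0" and "1", are our own conventions, not a particular package's implementation:

```python
# Sketch of the two reduction rules that give level-skipping edges
# their different meanings in BDDs and ZDDs.

TABLE = {}  # unique table: ensures each reduced node exists only once

def bdd_node(level, hi, lo):
    # BDD rule: a node whose hi and lo edges agree is a don't-care node
    # and is dropped, leaving a level-skipping edge to its child.
    if hi == lo:
        return lo
    return TABLE.setdefault((level, hi, lo), (level, hi, lo))

def zdd_node(level, hi, lo):
    # ZDD rule: a node whose hi edge points to leaf 0 starts a zero chain
    # and is dropped, leaving a level-skipping edge to its lo child.
    if hi == "0":
        return lo
    return TABLE.setdefault((level, hi, lo), (level, hi, lo))
```

For example, `bdd_node(3, x, x)` returns `x` (a don't-care level is skipped), while `zdd_node(3, "0", x)` also returns `x` (a forced-zero level is skipped); each rule eliminates the chain type that the other representation must store explicitly.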
4 Chain Patterns and Reductions
Figure 3 shows the general form of or and don't-care chains, as were illustrated in the examples of Fig. 2. These chains have levels ranging from t to b, such that \(1 \le t < b \le n\). Each form consists of a linear chain of nodes followed by two nodes f and g with levels greater than b. Nodes f and g are drawn as triangles to indicate that they are the roots of two subgraphs in the representation. In an or chain, the \({ lo}\) edge from each node is directed to the next node in the chain, and the \({ hi}\) edge is directed to node g. The chains eliminated by ZDDs are a special case where \(g = \mathbf{0}\). In a don't-care chain, both the \({ lo}\) and the \({ hi}\) edges are directed toward the next node in the chain.
As was illustrated in Fig. 2, having edges that skip levels allows BDDs to compactly represent don't-care chains and ZDDs to eliminate or chains when \(g = \mathbf{0}\). The goal of chain reduction is to allow both forms to compactly represent both types of chains. They do so by associating two levels with each node, as indicated in Fig. 3(C). That is, every non-leaf node has an associated pair of levels \(\langle t, b \rangle\), such that \(1 \le t \le b \le n\). In a chain-reduced ordered binary decision diagram (CBDD), such a node encodes the or chain pattern shown in Fig. 3(A), while in a chain-reduced zero-suppressed binary decision diagram (CZDD), such a node encodes the don't-care chain pattern shown in Fig. 3(B). A node with levels t and b such that \(t=b\) encodes a standard node with respect to the indicated variable.
Figure 4 shows the effect of chain reduction, starting with the levelized graph A. In the CBDD (B), a single node \(f'\) replaces the or chain consisting of nodes d, e, and f. In the CZDD (C), the don't-care chain consisting of nodes a and b is incorporated into node c to form node \(c'\). These new nodes are drawn in elongated form to emphasize that they span a range of levels, but it should be emphasized that all nodes in a chained representation have an associated pair of levels.
To generalize from these examples, let us denote a node of the form illustrated in Fig. 3(C) with the modified if-then-else notation \(\langle t : b,\, g,\, f \rangle\). That is, the node has a range of levels from t to b, an outgoing \({ hi}\) edge to node g, and an outgoing \({ lo}\) edge to node f.
A BDD representation of a function can be transformed into a CBDD as follows. The process starts by labeling each node having level l in the BDD with the pair \(\langle t, b \rangle\), such that \(t = b = l\). Then, we repeatedly apply a reduction rule, replacing any pair of nodes u and v of the form \(u = \langle t_u : b_u,\, g,\, v \rangle\) and \(v = \langle b_u{+}1 : b_v,\, g,\, f \rangle\) by the single node \(\langle t_u : b_v,\, g,\, f \rangle\). A similar process can transform any ZDD representation of a function into a CZDD, using the reduction rule that a pair of nodes u and v of the form \(u = \langle t_u : b_u,\, v,\, v \rangle\) and \(v = \langle b_u{+}1 : b_v,\, g,\, f \rangle\) is replaced by the single node \(\langle t_u : b_v,\, g,\, f \rangle\). In practice, most algorithms for constructing decision diagrams operate from the bottom up. The reduction rules are applied as nodes are created, and so unreduced nodes are never actually generated.
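The two rewrites can be sketched directly on double-level nodes. This is an illustrative sketch, not the paper's implementation: nodes are tuples (t, b, hi, lo), node identity is by value, leaves are the strings "0" and "1", and the function names are ours:

```python
# Chain-reduction rewrites on double-level nodes (t, b, hi, lo):
# levels t..b, hi edge third, lo edge fourth.

def cbdd_reduce(u):
    """Merge u with its lo child when they form one or chain
    (contiguous levels, same hi target)."""
    t_u, b_u, g, f = u
    if isinstance(f, tuple):
        t_v, b_v, g_v, f_v = f
        if t_v == b_u + 1 and g_v == g:
            return cbdd_reduce((t_u, b_v, g, f_v))
    return u

def czdd_reduce(u):
    """Merge u with its child when u is a don't-care span
    (both edges to the same contiguous child)."""
    t_u, b_u, g, f = u
    if g == f and isinstance(f, tuple):
        t_v, b_v, g_v, f_v = f
        if t_v == b_u + 1:
            return czdd_reduce((t_u, b_v, g_v, f_v))
    return u
```

For example, an or chain of three single-level nodes with a common hi target collapses to one node spanning levels 1 through 3: `cbdd_reduce((1, 1, "G", (2, 2, "G", (3, 3, "G", "F"))))` yields `(1, 3, "G", "F")`.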
5 Size Ratio Bounds
These reduction rules allow us to bound the relative sizes of the different representations, as given by (5) and (6).
First, let us consider (5), bounding the relative sizes of the CBDD and CZDD representations of a function. Consider a graph G representing function f as a CZDD. We can generate a CBDD representation \(G'\) as follows. \(G'\) contains a node \(v'\) for each node v in G. However, if v has levels \(\langle t, b \rangle\), then \(v'\) has levels \(\langle b, b \rangle\), because any don't-care chain encoded explicitly in the CZDD is encoded implicitly in a CBDD.
Consider an edge from node u to node v in G, where the nodes have levels \(\langle t_u, b_u \rangle\) and \(\langle t_v, b_v \rangle\), respectively. If \(t_v = b_u+1\), then there can be an edge directly from \(u'\) to \(v'\). If \(t_v > b_u+1\), then we introduce a new node to encode the implicit zero chain in G from u to v. This node has the form \(\langle b_u{+}1 : t_v{-}1,\, \mathbf{0},\, v' \rangle\), and there is an edge from \(u'\) to it.
The size of \(G'\) is bounded by the number of nodes plus the number of edges in G. Since each node in G has at most two outgoing edges, we can see that \(G'\) has at most three times the number of nodes as G. Graph \(G'\) may not be reduced, but it provides an upper bound on the size of a CBDD relative to that of a CZDD.
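The construction above can be sketched as a recursive translation. This is a sketch under our own conventions (tuples (t, b, hi, lo), leaves "0" and "1" at level n + 1, a hypothetical root "edge" from level 0); it emits the possibly unreduced \(G'\):

```python
# Sketch of the CZDD -> CBDD translation used in the size-bound argument:
# one CBDD node per CZDD node, plus at most one explicit zero-chain node
# per edge, for at most 3x growth.

def czdd_to_cbdd(node, n):
    def level(x):
        return n + 1 if isinstance(x, str) else x[0]

    def translate(v):
        if isinstance(v, str):
            return v
        t, b, hi, lo = v
        # the node's don't-care chain t..b-1 becomes an implicit level skip,
        # so the CBDD node keeps only the bottom level b
        return (b, b, bridge(b, hi), bridge(b, lo))

    def bridge(b_u, child):
        childp = translate(child)
        t_c = level(child)
        if t_c == b_u + 1:
            return childp
        # implicit zero chain in the CZDD becomes one explicit CBDD node
        return (b_u + 1, t_c - 1, "0", childp)

    # the root may itself be reached through an implicit chain from level 1
    return bridge(0, node)
```

For instance, with \(n = 2\), the CZDD node `(1, 2, "1", "0")` (a don't-care on variable 1 followed by a test of variable 2) translates to the CBDD node `(2, 2, "1", "0")`, whose skipped top level is read as a don't-care.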
This bound is tight—Fig. 5 illustrates the reduced representations for a family of functions, parameterized by a value k (\(k=3\) in the example), such that the function is defined over \(3k+2\) variables. The ZDD and CZDD representations are identical (A), having \(2k+3\) nodes (including both leaf nodes). The CBDD representation has \(6k+2\) nodes (B). We can see in this example that the CBDD requires nodes (shown in gray) to encode the zero chains that are implicit in the ZDD.
Second, let us consider (6), bounding the relative sizes of the CZDD and BDD representations of a function. Consider a graph G representing function f as a BDD. We can construct its representation \(G'\) as a CZDD. Consider each edge in G from node u, having level \(l_u\), to node v, having level \(l_v\). Let \(r = { lo}(v)\) and \(s = { hi}(v)\). \(G'\) has a node \(w_{uv}\) of the form \(\langle l_u{+}1 : l_v,\, w_{vs},\, w_{vr} \rangle\). That is, \(w_{uv}\) encodes any don't-care chain between u and v, and it has edges to the nodes generated to encode the edges between v and its two children. The size of \(G'\) is bounded by the number of edges in G, which is at most twice the number of nodes.
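This edge-by-edge construction can also be sketched in a few lines, under the same assumed conventions (BDD nodes (level, hi, lo), CZDD nodes (t, b, hi, lo), leaves "0" and "1" at level n + 1, a hypothetical entry edge from level 0):

```python
# Sketch of the BDD -> CZDD construction behind the factor-of-2 bound:
# one CZDD node per BDD edge.

def bdd_to_czdd(root, n):
    def edge(l_u, v):
        # CZDD encoding of a BDD edge that leaves level l_u and enters v;
        # the skipped levels l_u+1 .. level(v)-1 are don't-cares in the BDD.
        if v == "0":
            return "0"          # don't-cares ending at leaf 0 collapse to 0
        if v == "1":
            if l_u + 1 > n:
                return "1"
            # an explicit don't-care chain over levels l_u+1 .. n
            return (l_u + 1, n, "1", "1")
        l_v, hi, lo = v
        return (l_u + 1, l_v, edge(l_v, hi), edge(l_v, lo))
    return edge(0, root)
```

With \(n = 2\), the BDD for the single variable \(x_1\), the node `(1, "1", "0")`, becomes `(1, 1, (2, 2, "1", "1"), "0")`: the trailing don't-care levels that the BDD skips must be spelled out as a CZDD node, which is the duplication the tightness example in Fig. 6 exploits.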
This bound is also tight—Fig. 6 illustrates the reduced representations for a family of functions, parameterized by a value k (\(k=3\) in the example), such that the function is defined over \(2k+1\) variables. The BDD representation (A) has \(2k+3\) nodes (including both leaf nodes). The CZDD representation has \(4k+3\) nodes (B). We can see that most of the nodes in the BDD must be duplicated: once with no incoming don't-care chain, and once with a chain of length one.
As can be seen in Fig. 1, these bounds contain an asymmetry between BDDs and ZDDs and their compressed forms. The bound of 3 holds between CBDDs and CZDDs, and hence by transitivity between CBDDs and ZDDs, while the bound of 2 holds only between CZDDs and BDDs. The general form of the or chain (Fig. 3(A)), where g is something other than \(\mathbf{0}\), cannot be directly encoded with CZDD nodes.
6 Operating on CBDDs and CZDDs
The apply algorithms for decision diagrams operate by recursively expanding a set of argument decision diagrams according to a Shannon expansion of the represented functions [4, 5]. These algorithms allow functions to be combined according to standard binary Boolean operations, as well as by the if-then-else operation \(\text{ ITE }\).
As notation, consider a step that expands k argument nodes \(\{v_i \mid 1 \le i \le k\}\), where \(v_i = \langle t_i : b_i,\, g_i,\, f_i \rangle\). For example, operations \(\text{ and }\), \(\text{ or }\), and \(\text{ xor }\) use the apply algorithm with \(k=2\), while ternary operations, such as \(\text{ ITE }\), use \(k=3\). A step can be summarized as follows:

1. If one of the terminal cases applies, then return the result.

2. If the computed cache contains an entry for this combination of operation and arguments, then return the previously computed result.

3. Recursively compute the result:

   (a) Choose splitting level(s) based on the levels of the arguments.

   (b) Generate \({ hi}\) and \({ lo}\) cofactors for each argument.

   (c) Recursively compute the \({ hi}\) and \({ lo}\) values of the result using the apply algorithm with the \({ hi}\) cofactors and the \({ lo}\) cofactors, respectively.

   (d) Determine the result node parameters based on the computed \({ hi}\) and \({ lo}\) cofactors, the splitting level(s), and the reduction rules.

   (e) Either reuse an existing node or create a new one with the desired level(s) and \({ hi}\) and \({ lo}\) children.

4. Store an entry in the computed cache.

5. Return the computed value.
In generalizing from conventional BDDs and ZDDs to their chained versions, we need only modify 3(a) (splitting), 3(b) (cofactoring), and 3(d) (combining) in this sequence. In the following presentation, we first give formal definitions and then provide brief explanations.
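The five steps can be made concrete for ordinary (unchained) BDDs. The following is a minimal runnable sketch; the names and the tuple encoding (level, hi, lo), with leaves "0" and "1" at level n + 1, are ours rather than CUDD's. Only sub-steps 3(a), 3(b), and 3(d) change in the chained variants:

```python
# Minimal apply skeleton for plain BDDs, following steps 1-5 above.

CACHE = {}  # computed cache: (op, u, v) -> result

def apply_op(op, u, v, n):
    # Step 1: terminal cases (here: both arguments are leaves).
    if isinstance(u, str) and isinstance(v, str):
        return op(u, v)
    # Step 2: consult the computed cache.
    key = (op, u, v)
    if key in CACHE:
        return CACHE[key]
    def level(x):
        return n + 1 if isinstance(x, str) else x[0]
    # Step 3(a): split at the minimum level of the arguments.
    l = min(level(u), level(v))
    # Step 3(b): cofactors; a level-skipping edge acts as a don't-care chain.
    def cofactors(x):
        return (x[1], x[2]) if level(x) == l else (x, x)
    u1, u0 = cofactors(u)
    v1, v0 = cofactors(v)
    # Step 3(c): recurse on the hi and lo cofactors.
    r1 = apply_op(op, u1, v1, n)
    r0 = apply_op(op, u0, v0, n)
    # Steps 3(d)-(e): apply the BDD reduction rule, then reuse or create.
    r = r0 if r1 == r0 else (l, r1, r0)
    # Step 4: store in the cache; step 5: return.
    CACHE[key] = r
    return r

AND = lambda a, b: "1" if a == "1" and b == "1" else "0"
```

For example, conjoining the single-variable functions \(x_1\) and \(x_2\) over \(n = 2\): `apply_op(AND, (1, "1", "0"), (2, "1", "0"), 2)` yields `(1, (2, "1", "0"), "0")`.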
For CBDDs, we define the splitting levels t and b as:

\[
t = \min_{1 \le i \le k} t_i, \qquad
b = \min_{1 \le i \le k} \begin{cases} b_i, & t_i = t \\ t_i - 1, & t_i > t \end{cases} \qquad (8)
\]
We then define the two cofactors for each argument node \(v_i\), denoted \(v_i^{0}\) (the \({ lo}\) cofactor) and \(v_i^{1}\) (the \({ hi}\) cofactor), according to the following table:

Case | Condition | \(v_i^{0}\) | \(v_i^{1}\)
1 | \(b < t_i\) | \(v_i\) | \(v_i\)
2 | \(b = b_i\) | \(f_i\) | \(g_i\)
3 | \(t_i \le b < b_i\) | \(\langle b{+}1 : b_i,\, g_i,\, f_i \rangle\) | \(g_i\)
These three cases can be explained as follows:
Case 1: Splitting spans levels less than the top level of \(v_i\). Since level-skipping edges encode don't-care chains, both cofactors equal the original node.

Case 2: Splitting spans the same levels as node \(v_i\). The cofactors are therefore the nodes given by the outgoing edges.

Case 3: Splitting spans a subset of the levels covered by node \(v_i\). We construct a new node spanning the remaining part of the encoded or chain for the \({ lo}\) cofactor and have \(g_i\) as the \({ hi}\) cofactor.
Recursive application of the apply operation on the cofactors generates a pair of nodes \(u_{0}\) and \(u_{1}\). Using the variable levels t and b defined in (8), these are combined to form a result node u, defined as follows:

\[
u = \begin{cases}
u_0, & u_1 = u_0 \\
\langle t : b',\, u_1,\, f' \rangle, & u_0 = \langle b{+}1 : b',\, u_1,\, f' \rangle \\
\langle t : b,\, u_1,\, u_0 \rangle, & \text{otherwise}
\end{cases} \qquad (9)
\]
These three cases can be explained as follows:
Case 1: The \({ hi}\) and \({ lo}\) cofactors are identical, and so the don't-care reduction rule can be applied.

Case 2: Chain compression can be applied to create a node that absorbs the \({ lo}\) cofactor.

Case 3: No special rules apply.
Similar rules hold for applying operations to CZDDs, although there are important differences, due to the different interpretations of level-skipping edges.
We define the splitting levels t and b as:

\[
t = \min_{1 \le i \le k} t_i, \qquad
b = \min_{1 \le i \le k} \begin{cases} b_i, & t_i = t \\ t_i - 1, & t_i > t \end{cases} \qquad (10)
\]
The cofactors for argument node \(v_i\) are defined according to the following table:

Case | Condition | \(v_i^{0}\) | \(v_i^{1}\)
1 | \(b < t_i\) | \(v_i\) | \(\mathbf{0}\)
2 | \(b = b_i\) | \(f_i\) | \(g_i\)
3 | \(t_i \le b < b_i\) | \(\langle b{+}1 : b_i,\, g_i,\, f_i \rangle\) | \(\langle b{+}1 : b_i,\, g_i,\, f_i \rangle\)
These three cases can be explained as follows:
Case 1: The splitting spans levels less than the top level of \(v_i\). Since level-skipping edges encode zero chains, the \({ lo}\) cofactor equals the original node and the \({ hi}\) cofactor equals leaf \(\mathbf{0}\).

Case 2: The splitting spans the same levels as node \(v_i\). The cofactors are therefore the nodes given by the outgoing edges.

Case 3: The splitting spans a subset of the levels covered by node \(v_i\). We construct a new node spanning the remaining part of the encoded don't-care chain for both cofactors.
Recursive application of the apply operation on the cofactors generates a pair of nodes \(u_{0}\) and \(u_{1}\). Using the variable ranges t and b defined in (10), these are combined to form a result node u, defined as follows:

\[
u = \begin{cases}
u_0, & u_1 = \mathbf{0},\; t = b \\
\langle t : b{-}1,\, u_0,\, u_0 \rangle, & u_1 = \mathbf{0},\; t < b \\
\langle t : b',\, g',\, f' \rangle, & u_1 = u_0 = \langle b{+}1 : b',\, g',\, f' \rangle \\
\langle t : b,\, u_1,\, u_0 \rangle, & \text{otherwise}
\end{cases} \qquad (11)
\]
These four cases can be explained as follows:
Case 1: The zero-suppression rule can be applied to return a direct pointer to \(u_0\).

Case 2: The zero-suppression rule can be applied, but we must construct a node encoding the don't-care chain between levels t and \(b-1\).

Case 3: Chain compression can be applied to create a node that absorbs the \({ lo}\) cofactor.

Case 4: No special rule applies.
7 Experimental Results
We implemented both CBDDs and CZDDs by modifying version 3.0.0 of the CUDD BDD package [14]. When compiled for 64-bit execution, CUDD stores a 32-bit field index in each node, where this index is translated into a level according to the variable ordering. For our implementation, we split this field into two 16-bit fields index and bindex to (indirectly) encode the top and bottom levels of the node. Thus, there was no storage penalty for the generalization to a chained form.
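The space trick amounts to packing two half-width indices into the original field. The following sketch illustrates the idea; the field names follow the text, but the specific bit layout is our assumption, not a detail taken from the implementation:

```python
# Hypothetical packing of the two 16-bit index fields into the one
# 32-bit word that an unchained node already pays for.

def pack_levels(index, bindex):
    """Pack index (top) and bindex (bottom) into one 32-bit word."""
    assert 0 <= index <= 0xFFFF and 0 <= bindex <= 0xFFFF
    return (index << 16) | bindex

def unpack_levels(word):
    """Recover (index, bindex) from a packed 32-bit word."""
    return (word >> 16) & 0xFFFF, word & 0xFFFF
```

An unchained node simply stores equal halves (`index == bindex`), so the packed form degrades gracefully to the original representation.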
CUDD uses complement edges when representing BDDs [2, 13]. Complement edges can potentially reduce the size of a BDD by a factor of two, invalidating the size ratio bounds derived in (5) and (6). For our experimental results, we therefore use a representation based on CUDD’s support for Algebraic Decision Diagrams (ADDs) [1]. ADDs generalize BDDs by allowing arbitrary leaf values. Restricting the leaf values to 0 and 1 yields conventional BDDs without complement edges.
To evaluate the effectiveness of chain reduction, we chose three different categories of benchmarks to compare the performance of BDDs, ZDDs, and their chained versions. One set of benchmarks evaluated the ability of DDs to represent information in compact form, a second evaluated their ability to solve combinatorial problems, and a third their ability to represent typical digital logic functions. All experiments were performed on a 4.2 GHz Intel Core i7 processor with 32 GB of memory running the OS X operating system.
7.1 Encoding a Dictionary
As has been observed [9], a list of words can be encoded as a function mapping strings in some alphabet to either 1 (included in list) or 0 (not included in list). Strings can further be encoded as binary sequences by encoding each symbol as a sequence of bits, allowing the list to be represented as a Boolean function. We consider two possible encodings of the symbols, defining the radix r to be the number of possible symbols. A one-hot encoding (also referred to as a "1-of-N" encoding) requires r bits per symbol. Each symbol is assigned a unique position, and the symbol is represented with a one in this position and zeros in the rest. A binary encoding requires \(\lceil \log _2 r \rceil \) bits per symbol. Each symbol is assigned a unique binary pattern, and the symbol is represented by this pattern. Lists consisting of words with multiple lengths can be encoded by introducing a special "null" symbol to terminate each word.
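The cost of the two encodings is easy to compute, and doing so reproduces the variable counts quoted below (maximum word length times bits per symbol). A small sketch:

```python
import math

def onehot_bits(r):
    """Bits per symbol under a one-hot (1-of-N) encoding of radix r."""
    return r  # one position per symbol

def binary_bits(r):
    """Bits per symbol under a binary encoding of radix r."""
    return math.ceil(math.log2(r))

# Compact English alphabet (54 symbols), words up to 24 symbols:
#   one-hot: 24 * 54  = 1296 variables,  binary: 24 * 6 = 144
# ASCII password list (129 symbols incl. null), words up to 32 symbols:
#   one-hot: 32 * 129 = 4128 variables,  binary: 32 * 8 = 256
```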
Eight benchmarks were derived from two word lists to allow comparisons of different encoding techniques and representations. The first list is a set of English words in the file /usr/share/dict/words found on Macintosh systems. It contains 235,886 words with lengths ranging from one to 24 symbols, and where the symbols consist of lower- and upper-case letters plus hyphen. We consider two resulting symbol sets: a compact form, consisting of just the symbols found in the words plus a null symbol (54 total), and an ASCII form, consisting of all 128 ASCII characters plus a null symbol. The second word list is from an online list of words employed by password crackers. It consists of 979,247 words ranging in length from one to 32 symbols, and where the symbols include 79 possible characters. Again, we consider both a compact form and an ASCII form. The choice of one-hot vs. binary encoding has a major effect on the number of Boolean variables required to encode the words. With a one-hot encoding, the number of variables ranges between 1,296 and 4,128, while it ranges between 144 and 256 with a binary representation. To generate DD encodings of a word list, we first constructed a trie representation of the words and then generated Boolean formulas via a depth-first traversal of the trie.
Figure 7 shows the number of nodes required to represent word lists as Boolean functions, according to the different lists, encodings, and DD types. The entry labeled "(C)ZDD" gives the node counts for both ZDDs and CZDDs. These are identical, because there were no don't-care chains for these functions. The two columns on the right show the ratios between the different DD types. Concentrating first on one-hot encodings, we see that the chain compression of CBDDs reduces the size compared to BDDs by large factors (15.50–34.03). Compared to ZDDs, representing the lists by CBDDs incurs some penalty (2.05–2.10), but less than the worst-case penalty of 3. Increasing the radix from a compact form to the full ASCII character set causes a significant increase in BDD size, but this effect is eliminated by using the zero suppression capabilities of CBDDs, ZDDs, and CZDDs.
Using a binary encoding of the symbols reduces the variation among the different encodings and DD types. CBDDs provide only a small benefit (1.11–1.23) over BDDs, and CBDDs are within a factor of 1.50 of ZDDs. Again, chaining of ZDDs provides no benefit. Observe that for both lists, the most efficient representation is to use either ZDDs or CZDDs with a one-hot encoding. The next best is to use CBDDs with a one-hot encoding, and all three of these are insensitive to changes in radix. These cases demonstrate the ability of ZDDs (and CZDDs) to use very large, sparse encodings of values. By using chaining, CBDDs can also take advantage of this property.
Although the final node counts for the benchmarks indicate no advantage of chaining for ZDDs, statistics characterizing the effort required to derive the functions show a significant benefit. Figure 8 indicates the total number of operations and the total time required for generating ZDD and CZDD representations of the benchmarks. The operations are computed as the number of times the program checks for an entry in the operation cache (step 2 in the description of the apply algorithm). There are many operational factors that can affect the number of operations, including the program's policies for operation caching and garbage collection. Nevertheless, it gives some indication of the amount of activity required to generate the DDs. We can see that chaining reduces the number of operations by factors of 8.87–13.30. The time required depends on many attributes of the DD package and the system hardware and software. Here we see that chaining improves the execution time by factors of 1.35–15.26.
With unchained ZDDs, many of the intermediate functions have large don't-care chains. For example, the ZDD representation of the function x, for variable x, requires \(n+2\) nodes—one for the variable, \(n-1\) for the don't-care chains before and after the variable, and two leaf nodes. With chaining, this function reduces to just four nodes: the upper don't-care chain is incorporated into the node for the variable, and a second node encodes the lower chain. Our dictionary benchmarks have over 4,000 variables, and so some of the intermediate DDs can be more than 1,000 times more compact due to chaining.
7.2 The 15-Queens Problem
A second set of benchmarks involved representing all possible solutions to the n-queens problem [12] as a Boolean function. This problem requires placing n queens on an \(n\times n\) chessboard in such a way that no two queens can attack each other. For our benchmark we chose \(n = 15\) to stay within the memory limit of the processor being used.
Once again, there are two choices for encoding the positions of queens on the board. A one-hot encoding uses a Boolean variable for each square. A binary encoding uses \(\lceil \log _2 n \rceil = 4\) variables for each row, encoding the position of the queen within the row.
Our most successful approach for encoding the constraints with Boolean operations worked from the bottom row to the top. At each level, it generated formulas for each column and diagonal expressing whether it was occupied in the rows at or below this one, based on the formulas for the level below and the variables for the present row.
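The row-by-row scheme can be illustrated with an explicit-state analogue (an illustration of the occupancy bookkeeping, not the symbolic DD construction): work through the rows one at a time, tracking which columns and diagonals are already occupied by the rows processed so far.

```python
# Explicit-state analogue of the row-by-row n-queens scheme:
# the state carried between rows is the set of occupied columns and
# the two sets of occupied diagonals, mirroring the per-row formulas
# described in the text.

def count_queens(n):
    def place(row, cols, d1, d2):
        if row == n:
            return 1  # all rows placed: one complete solution
        total = 0
        for c in range(n):
            # a square is attackable via its column or either diagonal
            if c in cols or (row + c) in d1 or (row - c) in d2:
                continue
            total += place(row + 1,
                           cols | {c},        # occupy the column
                           d1 | {row + c},    # occupy the down-diagonal
                           d2 | {row - c})    # occupy the up-diagonal
        return total
    return place(0, frozenset(), frozenset(), frozenset())

# count_queens(8) -> 92
```

The symbolic version replaces the enumerated states with Boolean formulas over the row variables, which is what allows all solutions to be represented at once.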
We considered two ways of ordering the variables for the different rows. The top-down ordering listed the variables according to the row numbers 1 through 15. The center-first ordering listed variables according to the following row number sequence:
Our hope was that ordering the center rows first would reduce the DD representation size. This proved not to be the case, but the resulting node counts are instructive.
Figure 9 shows the node counts for the different benchmarks. It shows both the size of the final function representing all solutions to the 15-queens problem, as well as the peak size, computed as the maximum across all rows of the combined size of the functions that are maintained to express the constraints imposed by the row and those below it. For both the top-down and the center-first benchmarks, this maximum was reached after completing row 3. Typically the peak size was around three times larger than the final size.
For a one-hot encoding, we can see that CBDDs achieve factors of 4.18–5.16 compaction over BDDs, and they come within a factor of 2.20 of CZDDs. For a binary encoding, the levels of compaction are much less compelling (1.13–1.20), as is the advantage of CZDDs over BDDs. It is worth noting that the combination of a one-hot encoding and chaining gives lower peak and final sizes than BDDs with a binary encoding.
Figure 10 compares the sizes of the ZDD and CZDD representations of the 15-queens functions. We can see that the final sizes are identical—there are no don't-care chains in the functions encoding problem solutions. For the top-down ordering, CZDDs also offer only a small advantage for the peak requirement. For the center-first ordering, especially with a one-hot encoding, however, we can see that CZDDs are significantly (3.81\(\times \)) more compact. As the construction for row 3 completes, the variables that will encode the constraints for rows 2 and 5 remain unconstrained, yielding many don't-care chains. Once again, this phenomenon is much smaller with a binary encoding.
7.3 Digital Circuits
BDDs are commonly used in digital circuit design automation, for such tasks as verification, test generation, and circuit synthesis. We selected the circuits in the ISCAS '85 benchmark suite [3]. These were originally developed as benchmarks for test generation, but they have also been widely used as benchmarks for BDDs [7, 10]. We generated variable orderings for all but the last two benchmarks by traversing the circuit graphs, using the fan-in heuristic of [10]. Circuit c6288 implements a \(16 \times 16\) multiplier. For this circuit, the ordering of inputs listed in the file provided a feasible variable ordering, while the one generated by traversing the circuit exceeded the memory limits of our machine. For c7552, neither the ordering in the file, nor that provided by traversing the graph, generated a feasible order. Instead, we manually generated an ordering based on our analysis of a reverse-engineered version of the circuit described in [8].
Figure 11 presents data on the sizes of the DDs to represent all of the circuit outputs. We do not present any data for CBDDs, since these were all close in size to BDDs. We can see that the ZDD representations for these circuits are always larger than the BDD representations, by factors ranging up to 8.31. Using CZDDs mitigates that effect, yielding a maximum size ratio of 1.38.
8 Observations
Our experiments, while not comprehensive, demonstrate that chaining can allow BDDs to make use of large, sparse encodings, one of the main strengths of ZDDs. They also indicate that CZDDs may be the best choice overall. CZDDs have all of the advantages of ZDDs, while avoiding the risk of intermediate functions growing excessively large due to don't-care chains. They are guaranteed to stay within a factor of 2\(\times \) of BDDs. Even for digital circuit functions, we found this bound to be conservative—all of the benchmarks stayed within a factor of 1.4\(\times \).
References
Bahar, R.I., Frohm, E.A., Gaona, C.M., Hachtel, G.D., Macii, E., Pardo, A., Somenzi, F.: Algebraic decision diagrams and their applications. In: Proceedings of the International Conference on Computer-Aided Design, pp. 188–191, November 1993
Brace, K.S., Rudell, R.L., Bryant, R.E.: Efficient implementation of a BDD package. In: Proceedings of the 27th ACM/IEEE Design Automation Conference, pp. 40–45, June 1990
Brglez, F., Fujiwara, H.: A neutral netlist of 10 combinational benchmark circuits and a target translator in Fortran. In: 1985 International Symposium on Circuits and Systems (1985)
Bryant, R.E.: Graph-based algorithms for Boolean function manipulation. IEEE Trans. Comput. C-35(8), 677–691 (1986)
Bryant, R.E.: Binary decision diagrams. In: Clarke, E.M., Henzinger, T.A., Veith, H., Bloem, R. (eds.) Handbook of Model Checking. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-319-10575-8
van Dijk, T., Wille, R., Meolic, R.: Tagged BDDs: combining reduction rules from different decision diagram types. In: Formal Methods in ComputerAided Design, pp. 108–115 (2017)
Fujita, M., Fujisawa, H., Kawato, N.: Evaluation and improvements of Boolean comparison method based on binary decision diagrams. In: Proceedings of the International Conference on Computer-Aided Design, pp. 2–5, November 1988
Hansen, M., Yalcin, H., Hayes, J.P.: Unveiling the ISCAS85 benchmarks: a case study in reverse engineering. IEEE Des. Test 16(3), 72–80 (1999)
Knuth, D.E.: The Art of Computer Programming: Combinatorial Algorithms, Part I, vol. 4A. Addison-Wesley, Reading (2011)
Malik, S., Wang, A., Brayton, R.K., Sangiovanni-Vincentelli, A.L.: Logic verification using binary decision diagrams in a logic synthesis environment. In: Proceedings of the International Conference on Computer-Aided Design, pp. 6–9, November 1988
Minato, S.: Zero-suppressed BDDs for set manipulation in combinatorial problems. In: Proceedings of the 30th ACM/IEEE Design Automation Conference, pp. 272–277, June 1993
Minato, S.: Binary Decision Diagrams and Applications for VLSI CAD. Kluwer Academic Publishers, Norwell (1995)
Minato, S., Ishiura, N., Yajima, S.: Shared binary decision diagrams with attributed edges for efficient Boolean function manipulation. In: Proceedings of the 27th ACM/IEEE Design Automation Conference, pp. 52–57, June 1990
Somenzi, F.: Efficient manipulation of decision diagrams. Int. J. Softw. Tools Technol. Transf. 3(2), 171–181 (2001)
Acknowledgements
This work was supported, in part, by NSF STARSS grant 1525527.
© 2018 The Author(s)
Bryant, R.E. (2018). Chain Reduction for Binary and Zero-Suppressed Decision Diagrams. In: Beyer, D., Huisman, M. (eds.) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2018. Lecture Notes in Computer Science, vol. 10805. Springer, Cham. https://doi.org/10.1007/978-3-319-89960-2_5
Print ISBN: 978-3-319-89959-6
Online ISBN: 978-3-319-89960-2