Journal of Automated Reasoning, Volume 58, Issue 4, pp 413–481

On Preprocessing Techniques and Their Impact on Propositional Model Counting


DOI: 10.1007/s10817-016-9370-8

Cite this article as:
Lagniez, J.-M. & Marquis, P. J. Autom. Reasoning (2017) 58: 413. doi:10.1007/s10817-016-9370-8


Abstract

This paper is concerned with preprocessing techniques for propositional model counting. We have considered several elementary preprocessing techniques: backbone identification, occurrence reduction, vivification, as well as equivalence, AND, and XOR gate identification and replacement. All these techniques have been implemented in a preprocessor, pmc, freely available on the Web. To assess the benefits which can be gained by taking advantage of pmc, we performed extensive experiments on benchmarks coming from several data sets. More precisely, we made a differential evaluation of each elementary preprocessing technique in order to measure its impact on the number of variables of the instance, its size, and the treewidth of its primal graph. We also considered two combinations of preprocessings: eq, based on equivalence-preserving techniques only, and #eq, which additionally exploits techniques preserving only the number of models. Several approaches to model counting have also been considered downstream in our experiments: "direct" model counters, including the exact counters Cachet and sharpSAT and the approximate counter SampleCount, as well as the compilation-based model counters C2D, Dsharp, SDD, and cnf2obdd. The experimental results we have obtained show that each elementary preprocessing technique is useful, and that synergistic effects can be achieved by combining them.
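To make the first of these techniques concrete, the sketch below illustrates backbone identification on a DIMACS-style CNF (clauses as lists of signed integers): the backbone is the set of literals fixed to the same value in every model, and any clause containing a backbone literal can be dropped while falsified literals are stripped. This is a brute-force illustration of the general idea only; the function names and representation are ours, and the actual pmc preprocessor uses SAT-solver-based backbone extraction, not enumeration.

```python
from itertools import product

def satisfies(clause, assignment):
    """True iff the clause has at least one literal made true by the assignment."""
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def models(cnf, variables):
    """Enumerate all satisfying assignments (exponential; for illustration only)."""
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(satisfies(c, a) for c in cnf):
            yield a

def backbone(cnf):
    """Literals taking the same value in every model of the CNF."""
    variables = sorted({abs(lit) for c in cnf for lit in c})
    bb = None
    for a in models(cnf, variables):
        lits = {v if val else -v for v, val in a.items()}
        bb = lits if bb is None else bb & lits
    return bb or set()

def simplify(cnf, bb):
    """Drop clauses satisfied by a backbone literal; strip falsified literals."""
    out = []
    for c in cnf:
        if any(lit in bb for lit in c):
            continue
        out.append([lit for lit in c if -lit not in bb])
    return out

# Example: x1 is forced, which in turn forces x3; x2 remains free.
cnf = [[1], [-1, 3], [2, -2]]
bb = backbone(cnf)          # {1, 3}
reduced = simplify(cnf, bb) # [[2, -2]]
```

Note that this simplification preserves the number of models only if the backbone variables are accounted for separately; preserving equivalence versus preserving the model count is exactly the distinction between the eq and #eq combinations studied in the paper.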


Keywords: Propositional model counting · #SAT · Preprocessing · Knowledge compilation

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. CRIL-CNRS and Université d’Artois, Lens, France
