An Exhaustive DPLL Algorithm for Model Counting

Authors: Umut Oztok, Adnan Darwiche

JAIR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Still, our experimental analysis shows comparable results against state-of-the-art model counters. Furthermore, we obtain the first top-down SDD compiler, and show orders-of-magnitude improvements in SDD construction time against the existing bottom-up SDD compiler."
Researcher Affiliation | Academia | Umut Oztok (EMAIL), Adnan Darwiche (EMAIL), Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90095
Pseudocode | Yes |
Algorithm 1: SAT(Δ). Input: Δ, a CNF. Output: ⊤ if Δ is satisfiable; ⊥ otherwise.
Algorithm 2: SAT(S). Input: S, a SAT state (Δ, Γ, D, I). Output: success or ⊥.
Algorithm 3: #SAT(π, S). Input: π, a variable order; S, a SAT state (Δ, Γ, D, I). Output: model count of D, or a clause.
Algorithm 4: #SAT(v, S). Input: v, a vtree node; S, a SAT state (Δ, Γ, D, I). Output: a model count or a clause.
Algorithm 5: c2s(v, S). Input: v, a vtree node; S, a SAT state (Δ, Γ, D, I). Output: a Decision-SDD and its negation, or a clause.
Open Source Code | Yes | "The source code of our system, miniC2D, is available at http://reasoning.cs.ucla.edu/minic2d."
Open Datasets | Yes | "In our experiments we used 1392 CNFs available at http://www.cril.univ-artois.fr/PMC/pmc.html, which come from different applications such as planning and product configuration. We also used some CNFs from the iscas85, iscas89, and LGSynth89 suites, which consist of sequential and combinatorial circuits used in the CAD community."
Dataset Splits | No | The paper does not describe training, validation, or test splits. It evaluates performance on a collection of CNF instances without such splits, since model counting and compilation are not learning tasks that require them.
Hardware Specification | Yes | "All experiments were performed on a 2.6GHz Intel Xeon X5650 CPU under 1 hour of time limit and with access to 8GB RAM." ... "All experiments were performed on a 2.6GHz Intel Xeon E5-2670 CPU under 1 hour of time limit and with access to 50GB RAM."
Software Dependencies | No | "We will now present an empirical evaluation of our model counter miniC2D, which is a C implementation of Algorithm 4." ... "In our experiments, we used two sets of benchmarks. First, we used some CNFs from the iscas85, iscas89, and LGSynth89 suites, which consist of sequential and combinatorial circuits used in the CAD community." ... "To compile SDDs, we used the SDD package (Choi & Darwiche, 2013a)." The paper mentions C as the implementation language and the SDD package as a tool, but gives no version numbers for these or for the comparison systems cachet, c2d, and sharpSAT.
Experiment Setup | No | "All experiments were performed on a 2.6GHz Intel Xeon X5650 CPU under 1 hour of time limit and with access to 8GB RAM." ... "All experiments were performed on a 2.6GHz Intel Xeon E5-2670 CPU under 1 hour of time limit and with access to 50GB RAM." The paper specifies hardware and time limits, but provides no hyperparameters or configuration details (e.g., learning rates, batch sizes, optimizer settings) of the kind typically associated with a machine-learning experimental setup; it focuses instead on algorithms for model counting and knowledge compilation.