Improving and Understanding the Power of Satisfaction-Driven Clause Learning

Authors: Albert Oliveras, Chunxiao Li, Darryl Wu, Jonathan Chung, Vijay Ganesh

JAIR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A thorough empirical evaluation of an implementation on the MapleSAT solver shows that the resulting system solves Mutilated Chess Board (MCB) problems significantly faster than CDCL solvers... Our experimental evaluation shows that MapleSDCL performs well on mutilated chess board (MCB) and bipartite perfect matching problems... We have evaluated our system on the benchmarks used in [14, 26].
Researcher Affiliation | Collaboration | Albert Oliveras (ORCID: 0000-0002-5893-1911, EMAIL), Technical University of Catalonia, Barcelona, Spain; Chunxiao Li (EMAIL), University of Waterloo, Waterloo, Canada; Darryl Wu (EMAIL), University of Waterloo, Waterloo, Canada; Jonathan Chung (ORCID: 0000-0001-5378-1136, EMAIL), Lorica Cybersecurity, Canada; Vijay Ganesh (ORCID: 0000-0002-6029-2047, EMAIL), Georgia Institute of Technology, Atlanta, Georgia, USA.
Pseudocode | Yes | Algorithm 1: The SDCL algorithm; note that removing lines 9-12 results in the CDCL algorithm. Algorithm 2: the analyzeWitness procedure, which uses Max-SAT-based minimization for the positive reduct p_alpha(F).
Open Source Code | No | The paper mentions that the authors implemented SDCL on top of MapleSAT and used EvalMaxSAT, and that MapleSDCL produces proofs checkable by dpr-trim (a link to dpr-trim is provided). However, it gives no link to, or explicit statement about, the open-source availability of the MapleSDCL implementation itself.
Open Datasets | Yes | We have evaluated our system on the benchmarks used in [14, 26]. The 10 instances of the form mchess-n represent mutilated chess board problems of size n... The 4 instances of the form rnd-* encode the problem of finding a perfect matching in a randomly generated bipartite graph [9]... Additionally, a certain percentage p of a set of symmetry-breaking constraints are added [10].
Dataset Splits | No | The paper evaluates on specific benchmark instances, such as the mchess-n and rnd-* problems, rather than on a dataset with traditional training/validation/test splits. These are SAT-solver problem instances, not datasets with defined splits in the machine-learning sense.
Hardware Specification | No | The paper does not report the hardware used for the experiments (e.g., CPU or GPU models, memory). It discusses the computational cost of certain components without specifying the underlying hardware.
Software Dependencies | No | The paper mentions several software components: MapleSAT [21], EvalMaxSAT [2], Kissat [3], CASHWMaxSAT-DisjCad, UWrMaxSat, and MaxCDCL. For EvalMaxSAT it states that 'we used the version submitted to the evaluation as well as a version compiled as a library (EvalLIB in the table)'. However, it does not give specific version numbers for any of these dependencies, only references to the years of the corresponding evaluations.
Experiment Setup | Yes | Our strategy relies on having a variable dl_call that corresponds to the decision level at which lines 9-12 of Algorithm 1 are executed... If this ratio is lower than a certain amount (15% in our implementation), we increment dl_call by one; if it is higher, we decrement it by one. The second refinement is that our final asserting clause is not always shorter than the clause obtained by negating all decisions... A final refinement consists of only learning clauses of size at most 3... For SaDiCaL, we used two versions, one using the positive reduct and one using the filtered positive reduct. Regarding MapleSDCL, we present four configurations...
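The adaptive dl_call scheme quoted in the experiment-setup row can be sketched in a few lines. This is a hedged illustration, not the authors' MapleSDCL code: the monitored ratio is computed elsewhere in the solver and is passed in here as a plain number, and the function names (update_dl_call, accept_clause) are hypothetical.

```python
def update_dl_call(dl_call, ratio, threshold=0.15):
    """Adjust the decision level at which lines 9-12 of Algorithm 1 run.

    Per the quoted heuristic: if the monitored ratio is below the
    threshold (15% in the paper's implementation), increment dl_call
    by one; if it is above, decrement it by one.
    """
    if ratio < threshold:
        return dl_call + 1
    if ratio > threshold:
        return dl_call - 1
    return dl_call  # exact tie: left unchanged (an assumption; the paper does not say)


def accept_clause(clause, max_size=3):
    """Final refinement quoted above: only learn clauses of size at most 3."""
    return len(clause) <= max_size
```

For example, update_dl_call(5, 0.10) returns 6 (run the reduct check one level deeper), while update_dl_call(5, 0.30) returns 4.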