Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Using Constraint Propagation to Bound Linear Programs

Authors: Tomáš Dlask, Tomáš Werner

JAIR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We newly apply it to the LP relaxation of the Weighted Max-SAT problem, experimentally showing that the obtained bounds are often not far from optima of the relaxation and proving that they are exact for known tractable subclasses of Weighted Max-SAT.
Researcher Affiliation | Academia | Machine Learning Group, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague
Pseudocode | Yes | Algorithm 1: Bounds propagation in the system Ax ≤ b. Algorithm 2: Iterative scheme for approximate optimization of the dual (12). Algorithm 3: Approximate optimization of the dual (12) with gradually decreasing threshold θ.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described.
Open Datasets | Yes | We used the Max-SAT Evaluations 2018 benchmark (Bacchus, Järvisalo, & Martins, 2019), which contains 2591 instances of Weighted Max-SAT. Instances available at https://maxsat-evaluations.github.io/.
Dataset Splits | No | The paper evaluates an optimization algorithm on a benchmark of Max-SAT instances. It categorizes instances by clause length and solvability by an external LP solver, but does not describe training, testing, or validation splits for its own methodology.
Hardware Specification | Yes | The experiments were performed on a laptop with an i7-4710MQ CPU at 2.5 GHz, providing 8 threads to the concurrent solver, and 16 GB RAM, without any time limit.
Software Dependencies | Yes | We used Gurobi (Gurobi Optimization, LLC, 2020) version 7.5.2 with default parameters, which uses a concurrent solver.
Experiment Setup | Yes | We follow the general scheme outlined in Algorithm 3, where we initialize θ = w(C) = Σ_{c ∈ C} w_c and, whenever the algorithm cannot detect infeasibility with the current θ, we keep the current y and update θ := θ/10. We continue until θ is small enough (below 10^-6). To improve performance, we also decrease θ whenever (D′ − D)/(w(C) + D) < 10^-12, where D and D′ are the dual objectives before and after the iteration, respectively.
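The threshold-decreasing scheme quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the paper's implementation: `improve_dual` is a hypothetical stand-in for one pass of the propagation-based dual update (Algorithm 2 in the paper), assumed to return the updated dual point, the new dual objective, and whether infeasibility was detected at the current threshold.

```python
def optimize_dual(clause_weights, improve_dual, theta_min=1e-6, eps=1e-12):
    """Sketch of Algorithm 3's gradually decreasing threshold scheme.

    `improve_dual(y, theta)` is a hypothetical callback standing in for one
    pass of the paper's dual update; it must return a tuple
    (new_y, new_dual_objective, detected_infeasible).
    """
    w_C = sum(clause_weights)   # theta initialized to w(C) = sum of clause weights
    theta = w_C
    y = None                    # current dual point, kept across theta updates
    D = 0.0                     # dual objective before the current iteration
    while theta >= theta_min:   # continue until theta drops below 1e-6
        y, D_new, infeasible = improve_dual(y, theta)
        # Decrease theta when infeasibility cannot be detected with the
        # current threshold, or when the relative dual improvement
        # (D' - D) / (w(C) + D) falls below 1e-12.
        if (not infeasible) or (D_new - D) / (w_C + D) < eps:
            theta /= 10.0
        D = D_new
    return y, D
```

The callback abstraction keeps the sketch independent of the propagation machinery; only the outer control flow (initialization of θ, the θ := θ/10 schedule, and the two decrease conditions) follows the quoted setup.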