Generalizing Constraint Models in Constraint Acquisition

Authors: Dimos Tsouros, Senne Berden, Steven Prestwich, Tias Guns

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our empirical results demonstrate that our approach achieves high accuracy and is robust to noise in the input instances. We now experimentally evaluate GENCON, using ground CSPs of different instances on a variety of benchmarks. We evaluate our approach both when the given sets of constraints are correct and when noise exists."
Researcher Affiliation | Academia | "(1) Department of Computer Science, KU Leuven, Belgium; (2) School of Computer Science and Information Technology, University College Cork"
Pseudocode | Yes | "Algorithm 1: Extracting Constraint Specifications"
Open Source Code | Yes | Code: https://github.com/Dimosts/GenConModels
Open Datasets | No | "Benchmarks. We focused on using benchmarks that have different constraint specifications so that our method is evaluated in distinct cases. Namely, we used the following benchmarks that are commonly used in CA: Sudoku, Golomb, Exam Timetabling (ET) and Nurse Rostering (NR)." The paper names well-known benchmarks (Sudoku, Golomb, Exam Timetabling, Nurse Rostering) but provides no access information for them, such as links, DOIs, or dataset citations. Describing them as "commonly used in CA" implies existing problem definitions, yet no concrete access details are given for the actual data used in the experiments.
Dataset Splits | Yes | "We employed a challenging variant of leave-one-out cross-validation, referred to as leave-one-in cross-validation: for each fold, we used just a single instance for training and the remaining nine instances for testing."
Hardware Specification | Yes | "All experiments were conducted on a system with an Intel(R) Core(TM) i7-2600 CPU, 3.40 GHz clock speed, with 16 GB of RAM."
Software Dependencies | No | "We used the CPMpy library (Guns 2019) for constraint modeling, and the Scikit-Learn library (Pedregosa et al. 2011) for the ML classifiers, except CN2, for which the Orange library (Demšar et al. 2013) was used." The paper lists the software libraries (CPMpy, Scikit-Learn, Orange) with their publication years, but does not state the version numbers of the library releases used in the experiments.
Experiment Setup | Yes | "We used CN2, DT, RF, and NB with their default parameters and tuned the most important hyperparameters for MLP and KNN."
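The leave-one-in protocol quoted in the Dataset Splits row can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function name and index-based splits are assumptions:

```python
def leave_one_in_splits(n_instances):
    """Yield (train, test) index lists: one instance in, the rest held out.

    Mirrors the paper's "leave-one-in" protocol, where each fold trains
    on a single instance and tests on all remaining ones.
    """
    for i in range(n_instances):
        yield [i], [j for j in range(n_instances) if j != i]

# With 10 instances (as in the paper's setup), each of the 10 folds
# trains on 1 instance and tests on the remaining 9.
folds = list(leave_one_in_splits(10))
assert all(len(train) == 1 and len(test) == 9 for train, test in folds)
```

Note that this is the inverse of standard leave-one-out cross-validation: the singleton goes into the training set rather than the test set, which makes the learning task deliberately harder.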
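The Software Dependencies row flags that library versions are not reported. One way a reproduction could pin down its own environment is to query installed distribution versions at runtime; the distribution names below ("cpmpy", "scikit-learn", "Orange3") are assumed PyPI names, not versions confirmed by the paper:

```python
import importlib.metadata as md

def installed_versions(dists):
    """Map each distribution name to its installed version string, or None."""
    versions = {}
    for dist in dists:
        try:
            versions[dist] = md.version(dist)
        except md.PackageNotFoundError:
            versions[dist] = None  # distribution not installed in this env
    return versions

# Assumed PyPI distribution names for the libraries the paper cites.
print(installed_versions(["cpmpy", "scikit-learn", "Orange3"]))
```

Recording this mapping alongside experimental results would resolve the version ambiguity the reproducibility check points out.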
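The Experiment Setup row states that only MLP and KNN had their main hyperparameters tuned. A generic exhaustive grid search over a small KNN-style grid can be sketched as follows; the scoring function and grid values are hypothetical stand-ins, not the paper's actual tuning procedure:

```python
from itertools import product

def grid_search(score_fn, grid):
    """Return (best_params, best_score) over the Cartesian product of the grid."""
    keys = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Hypothetical validation score that peaks at k=5 with uniform weights;
# in practice this would be cross-validated accuracy of the classifier.
def score(params):
    return -abs(params["n_neighbors"] - 5) - (params["weights"] != "uniform")

grid = {"n_neighbors": [1, 3, 5, 7, 9], "weights": ["uniform", "distance"]}
best, best_score = grid_search(score, grid)
assert best == {"n_neighbors": 5, "weights": "uniform"}
```

In a real setup the same loop would be replaced by a library routine such as scikit-learn's cross-validated grid search, with the untouched classifiers (CN2, DT, RF, NB) kept at their defaults as the paper describes.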