What Makes You Special? Contrastive Heuristics Based on Qualified Dominance

Authors: Rasmus G. Tollund, Kim G. Larsen, Alvaro Torralba

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show that our qualified dominance technique is able to find information across many tasks, even though this is not very complementary with highly informative heuristics. We implemented our approach on top of Fast Downward [Helmert, 2006], as a constraint generation method for operator-counting heuristics. We ran experiments with Lab [Seipp et al., 2017] on AMD EPYC 7551 CPUs with memory/time cut-offs of 4 GB and 30 minutes. Fig. 3 compares the number of expansions against dominance pruning with the same heuristic. We see a reasonable decrease in expansions on many tasks (e.g., by a factor of 2). Therefore, this can potentially be used to improve heuristic estimates.
Researcher Affiliation | Academia | Rasmus G. Tollund, Kim G. Larsen, Alvaro Torralba; Aalborg University, Aalborg, Denmark (EMAIL, EMAIL, EMAIL)
Pseudocode | Yes | Algorithm 1: Qualified Dominance Heuristic
Open Source Code | No | Code and experiment data will be made available upon publication.
Open Datasets | Yes | We use the Autoscale benchmark set [Torralba et al., 2021], consisting of 42 domains with 30 tasks in each. All tasks are automatically transformed into deterministic FTS tasks [Helmert, 2009; Sievers and Helmert, 2021].
Dataset Splits | No | The paper mentions using the Autoscale benchmark set [Torralba et al., 2021], consisting of 42 domains with 30 tasks each, but does not specify how these tasks are split into training, validation, or test sets for its experiments, nor does it refer to predefined splits used for evaluation.
Hardware Specification | Yes | We ran experiments with Lab [Seipp et al., 2017] on AMD EPYC 7551 CPUs with memory/time cut-offs of 4 GB and 30 minutes.
Software Dependencies | No | The paper mentions implementing its approach on "Fast Downward [Helmert, 2006]" and running experiments with "Lab [Seipp et al., 2017]", but it does not specify version numbers for these or any other ancillary software components used.
Experiment Setup | No | The paper describes methods such as a "constraint generation method for operator counting heuristics" and combining them with "LM-cut [Helmert and Domshlak, 2009]" and "flow constraints". However, it does not provide specific hyperparameters (e.g., learning rates, batch sizes, epochs) or detailed training configurations for these methods.
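The paper's approach plugs extra constraints into an operator-counting heuristic: the heuristic value is the minimum cost of an operator-count vector satisfying a set of linear constraints that every plan must obey. A minimal sketch of that idea, using a hypothetical toy instance and brute-forcing 0/1 counts for disjunctive action landmark constraints (the paper's actual constraints are different, and real implementations use an LP solver rather than enumeration):

```python
from itertools import product

def operator_counting_lower_bound(costs, landmarks):
    """Toy operator-counting heuristic: minimize total cost of a 0/1
    operator-count vector such that every disjunctive action landmark
    (a set of operators, at least one of which any plan must use)
    is hit. Brute force is only feasible for tiny instances."""
    ops = list(costs)
    best = None
    for counts in product([0, 1], repeat=len(ops)):
        y = dict(zip(ops, counts))
        # A count vector is admissible here iff it hits every landmark.
        if all(any(y[o] for o in lm) for lm in landmarks):
            cost = sum(costs[o] * y[o] for o in ops)
            best = cost if best is None else min(best, cost)
    return best  # None if some landmark is unsatisfiable

# Hypothetical instance: operator b alone (cost 2) hits both landmarks,
# as does the pair {a, c} (cost 1 + 1), so the bound is 2.
costs = {"a": 1, "b": 2, "c": 1}
landmarks = [{"a", "b"}, {"b", "c"}]
print(operator_counting_lower_bound(costs, landmarks))  # 2
```

In Fast Downward's operator-counting framework, constraint generators (such as LM-cut landmarks or flow constraints, both cited above) contribute linear constraints to a shared LP whose optimum is the heuristic value; the paper adds qualified-dominance constraints as one more generator.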