Counterexample-Guided Cartesian Abstraction Refinement for Classical Planning
Authors: Jendrik Seipp, Malte Helmert
JAIR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate Cartesian abstractions and saturated cost partitioning with and without these diversification strategies on the benchmark collection of previous International Planning Competitions. Our results show that heuristics based on a single Cartesian abstraction are able to achieve competitive performance only in a few domains. However, constructing multiple abstractions in general and using landmarks to diversify the heuristics in particular leads to a significantly higher number of solved tasks and makes heuristics based on Cartesian abstractions outperform other state-of-the-art abstraction heuristics in many domains. |
| Researcher Affiliation | Academia | Jendrik Seipp EMAIL Malte Helmert EMAIL University of Basel Basel, Switzerland |
| Pseudocode | Yes | Algorithm 1 Main loop. For a given planning task, returns a plan, proves that no plan exists, or returns an abstraction of the task (for example to be used as the basis of a heuristic function). All algorithms operate on the planning task Π = ⟨V, O, s0, s⋆⟩ with V = {v1, . . . , vn}. |
| Open Source Code | Yes | Our code, benchmarks and experimental data are available online.1 1. Code: https://doi.org/10.5281/zenodo.1240992 Benchmarks: https://doi.org/10.5281/zenodo.1205019 Experimental data: https://doi.org/10.5281/zenodo.1240998 |
| Open Datasets | Yes | Our benchmark set consists of all 1667 tasks from the optimization tracks of the 1998–2014 International Planning Competitions. All tasks are given in the PDDL format (Fox & Long, 2003)... Benchmarks: https://doi.org/10.5281/zenodo.1205019 |
| Dataset Splits | No | Our benchmark set consists of all 1667 tasks from the optimization tracks of the 1998–2014 International Planning Competitions. The paper does not specify any training/test/validation splits for these tasks; each task is treated as a separate problem instance for evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details for running its experiments. It only mentions a time limit of 30 minutes and a memory limit of 2 GiB for algorithm runs, but no specific hardware specifications. |
| Software Dependencies | No | We implemented our counterexample-guided Cartesian abstraction refinement and saturated cost partitioning algorithms in the Fast Downward planning system (Helmert, 2006). For running experiments, we use the Downward Lab toolkit (Seipp, Pommerening, Sievers, & Helmert, 2017c). This mentions the software systems used but does not provide specific version numbers for them. |
| Experiment Setup | Yes | We apply a time limit of 30 minutes and a memory limit of 2 GiB to all algorithm runs and let all versions that use CEGAR refine for at most 15 minutes. We also stop refining and start the A* search when the refinement loop is about to exhaust the 2 GiB memory limit. For the CEGAR versions using multiple (additive) abstraction heuristics we distribute the refinement time equally among the abstractions. We let the saturated cost partitioning algorithm consider the subtasks in a randomized order. |
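The main loop quoted in the Pseudocode row (find an abstract plan, check it against the concrete task, refine the abstraction at the first flaw, or stop when the abstract plan is a concrete plan or the abstract task is unsolvable) can be sketched on a toy one-variable task. Everything below — the task, the operator names, and the data structures — is a hypothetical illustration of the general CEGAR scheme, not the authors' Fast Downward implementation; in this single-variable toy, Cartesian abstract states degenerate to plain sets of values.

```python
from collections import deque

# Toy task: states are integers 0..4, initial state 0, goal 4.
# "inc" adds 1; "jump" goes straight to 4 but requires state 2.
N, INIT, GOAL = 4, 0, 4
OPS = {
    "inc":  (lambda s: s < N,  lambda s: s + 1),   # (precondition, effect)
    "jump": (lambda s: s == 2, lambda s: 4),
}

def block_of(blocks, s):
    """Index of the abstract state (block of the partition) containing s."""
    return next(i for i, b in enumerate(blocks) if s in b)

def abstract_plan(blocks):
    """BFS in the induced abstract transition system; None if unsolvable."""
    start = block_of(blocks, INIT)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        b, plan = queue.popleft()
        if GOAL in blocks[b]:
            return plan                       # list of (op name, target block)
        for name, (pre, eff) in OPS.items():
            for s in blocks[b]:
                if pre(s):
                    b2 = block_of(blocks, eff(s))
                    if b2 not in seen:
                        seen.add(b2)
                        queue.append((b2, plan + [(name, b2)]))
    return None

def refine(blocks, idx, wanted):
    """Split block idx into (block & wanted) and the rest."""
    inside = blocks[idx] & wanted
    blocks[idx:idx + 1] = [inside, blocks[idx] - inside]

def cegar():
    blocks = [set(range(N + 1))]              # coarsest abstraction: one block
    while True:
        plan = abstract_plan(blocks)
        if plan is None:
            return None, blocks               # abstract task proves unsolvability
        s = INIT
        for name, target in plan:             # trace the abstract plan concretely
            pre, eff = OPS[name]
            b = block_of(blocks, s)
            if not pre(s):                    # precondition flaw
                refine(blocks, b, {t for t in blocks[b] if pre(t)})
                break
            if block_of(blocks, eff(s)) != target:   # deviation flaw
                refine(blocks, b, {t for t in blocks[b]
                                   if pre(t) and block_of(blocks, eff(t)) == target})
                break
            s = eff(s)
        else:
            if s == GOAL:
                return [name for name, _ in plan], blocks   # concrete plan found
            refine(blocks, block_of(blocks, s), {GOAL})     # goal flaw
```

On this toy task the loop refines away a spurious use of "jump" from state 0 and eventually returns the concrete plan inc, inc, jump. The paper's setting adds the pieces this sketch omits: multi-variable Cartesian states, cost-optimal abstract search, and the 15-minute/2 GiB cutoffs described in the Experiment Setup row.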