Saturated Cost Partitioning for Optimal Classical Planning
Authors: Jendrik Seipp, Thomas Keller, Malte Helmert
JAIR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical evaluation of using multiple orders shows a significant improvement over single-order heuristics. We use all 1827 tasks without conditional effects from the optimization tracks of the 1998-2018 International Planning Competitions (IPC) and limit time to 30 minutes and memory to 3.5 GiB. Table 3: Number of solved tasks by saturated cost partitioning using different ordering algorithms and optimization time limits. Table 4: Number of solved tasks when maximizing over saturated cost partitioning heuristics for N unoptimized and optimized greedy orders. Table 5: Number of solved tasks by diverse saturated cost partitioning heuristics for unoptimized greedy orders using different diversification time limits. Table 6: Coverage comparison of saturated cost partitioning heuristics diversified for 1000 seconds using different optimization time limits. Table 7: Number of solved tasks by different algorithms. Table 8: Overall and per-domain comparison of h SCP div to some of the strongest admissible heuristics from the literature. |
| Researcher Affiliation | Academia | Jendrik Seipp EMAIL Thomas Keller EMAIL Malte Helmert EMAIL University of Basel Basel, Switzerland |
| Pseudocode | Yes | Algorithm 1 Interleaved saturated cost partitioning algorithm. Given a weighted transition system, it computes a set of abstraction heuristics and corresponding minimum saturated cost functions that form a cost partitioning. Algorithm 2 Dynamic greedy ordering algorithm. Given a set of admissible heuristics H with corresponding saturators, a cost function cost, a state s and a scoring function q, it computes a dynamic greedy order by iteratively appending the heuristic with the highest score and updating the estimates and saturated costs for each unordered heuristic. Algorithm 3 Static greedy ordering algorithm. Given a set of admissible heuristics H, a cost function cost, a state s and a scoring function q, it sorts the heuristics by their respective scores in descending order. |
| Open Source Code | Yes | All benchmarks3, code4 and experimental data5 have been published online. 4. Code: https://doi.org/10.5281/zenodo.3497367 |
| Open Datasets | Yes | We use all 1827 tasks without conditional effects from the optimization tracks of the 1998-2018 International Planning Competitions (IPC) and limit time to 30 minutes and memory to 3.5 GiB. All benchmarks3, code4 and experimental data5 have been published online. 3. Benchmarks: https://doi.org/10.5281/zenodo.2616479 5. Experimental data: https://doi.org/10.5281/zenodo.3497396 |
| Dataset Splits | No | The paper uses a set of planning tasks from IPC competitions, but does not specify any training/testing/validation splits for these tasks, as they are typically treated as individual problem instances. |
| Hardware Specification | No | The paper states resource limits for tasks (time to 30 minutes and memory to 3.5 GiB) but does not provide specific details about the CPU, GPU, or other hardware used for the experiments. |
| Software Dependencies | No | All algorithms are implemented in the Fast Downward planning system (Helmert, 2006) and we use the Downward Lab toolkit (Seipp, Pommerening, Sievers, & Helmert, 2017) to conduct experiments. No specific version numbers for these systems or any other software dependencies are provided. |
| Experiment Setup | Yes | We use all 1827 tasks without conditional effects from the optimization tracks of the 1998-2018 International Planning Competitions (IPC) and limit time to 30 minutes and memory to 3.5 GiB. As our set of heuristics we use the combination of pattern databases found by hill climbing in the space of pattern collections (Haslum et al., 2007), systematic pattern databases of sizes 1 and 2 (Pommerening et al., 2013) and Cartesian abstractions of landmark and goal task decompositions (Seipp & Helmert, 2018). Table 3 compares the number of solved tasks for a random (random-opt-Xs) and (static) greedy (greedy-opt-Xs) initial order that is optimized with the proposed simple hill climbing algorithm using different time limits. The optimization time limit does not include the time for computing the final cost partitioning. The setting that yields the strongest heuristic uses 100 seconds of hill-climbing optimization. |
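The core ideas quoted above (Algorithm 1's interleaved saturated cost partitioning and Algorithm 3's static greedy ordering) can be illustrated with a minimal plain-Python sketch. The toy "path" heuristics and their saturators below are hypothetical stand-ins for the paper's abstraction heuristics, chosen only so the loop is runnable; the real implementation lives in Fast Downward.

```python
def saturated_cost_partitioning(heuristics, costs):
    """Sketch of the interleaved SCP loop (Algorithm 1).

    heuristics: ordered list of (evaluate, saturate) pairs, where
      evaluate(costs) returns the heuristic value under a cost function
      and saturate(costs) returns a minimum saturated cost function
      that preserves that value.
    costs: dict mapping operator name -> non-negative cost.
    Returns the per-heuristic values; their sum is the SCP estimate.
    """
    remaining = dict(costs)
    values = []
    for evaluate, saturate in heuristics:
        values.append(evaluate(remaining))
        saturated = saturate(remaining)
        # Subtract the saturated costs, leaving the rest for later heuristics.
        for op in remaining:
            remaining[op] -= saturated.get(op, 0)
    return values


def make_path_heuristic(needed_ops):
    """Hypothetical abstraction whose goal distance is the summed cost of
    its needed operators; its minimum saturated cost function keeps
    exactly those costs and assigns 0 to every other operator."""
    def evaluate(costs):
        return sum(costs[op] for op in needed_ops)
    def saturate(costs):
        return {op: costs[op] for op in needed_ops}
    return evaluate, saturate


def static_greedy_order(heuristics, costs):
    """Sketch of Algorithm 3: sort heuristics by a score q in descending
    order; here q is simply the estimate under the full cost function."""
    return sorted(heuristics, key=lambda h: h[0](costs), reverse=True)
```

A small usage example: with operator costs `{"a": 3, "b": 2, "c": 5}` and two heuristics needing `{a, b}` and `{b, c}`, the first heuristic consumes the costs of `a` and `b`, so the second is evaluated with `b` at cost 0 and the summands remain admissible when added.

```python
costs = {"a": 3, "b": 2, "c": 5}
h1 = make_path_heuristic({"a", "b"})
h2 = make_path_heuristic({"b", "c"})
values = saturated_cost_partitioning(static_greedy_order([h1, h2], costs), costs)
print(values, sum(values))  # h2 is ordered first (score 7), then h1 sees b=0
```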