Continuing Plan Quality Optimisation

Authors: Fazlul Hasan Siddiqui, Patrik Haslum

JAIR 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present an approach to continuing plan quality optimisation at larger time scales, and its implementation in a system called BDPO2. ... Even starting from the best plans found by other means, BDPO2 is able to continue improving plan quality, often producing better plans than other anytime planners when all are given enough runtime. ... (A full description of the experiment setup, and results for even more anytime planners, is presented in Section 3, from page 392.)" Figure 2: "Average IPC quality score as a function of time per problem, on a set of 182 large-scale planning problems."
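For context, the "IPC quality score" quoted from the Figure 2 caption is the standard International Planning Competition metric: on each problem a planner earns best_known_cost / own_plan_cost (0 if it found no plan), and per-problem scores are summed. A minimal sketch of that metric (function and variable names are illustrative, not from the paper):

```python
def ipc_quality_scores(costs_by_planner):
    """Compute IPC quality scores.

    costs_by_planner maps planner name -> {problem: plan cost or None}.
    For each problem, a planner earns best_cost / own_cost, where
    best_cost is the cheapest plan any planner found; unsolved
    problems earn 0. Scores are summed over all problems.
    """
    problems = set()
    for costs in costs_by_planner.values():
        problems.update(costs.keys())

    scores = {planner: 0.0 for planner in costs_by_planner}
    for prob in problems:
        solved = [costs[prob] for costs in costs_by_planner.values()
                  if costs.get(prob) is not None]
        if not solved:
            continue  # no planner solved this problem
        best = min(solved)
        for planner, costs in costs_by_planner.items():
            cost = costs.get(prob)
            if cost is not None:
                scores[planner] += best / cost
    return scores
```

Plotting this score as a function of the runtime budget, as Figure 2 does, shows how plan quality across the benchmark set improves as each anytime planner is given more time.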
Researcher Affiliation | Academia | Fazlul Hasan Siddiqui (EMAIL), Patrik Haslum (EMAIL); The Australian National University & NICTA Optimisation Research Group, Canberra, Australia
Pseudocode | Yes | Algorithm 1: Resolve ordering constraints between a pair of blocks. ... Algorithm 2: The neighbourhood exploration procedure in BDPO2. ... Algorithm 3: Merge Improved Windows. ... Algorithm 4: Computing extended blocks.
Open Source Code | Yes | "The source code for BDPO2 is provided as an on-line appendix to this article."
Open Datasets | Yes | "For experiment setup 2 and 3 we used 182 large-scale instances from 21 IPC domains. ... We used all domains from the sequential satisficing track of the 2008, 2011, and 2014 IPC, except for the Cyber Sec, Cave Diving and City Car domains. ... We also used the Alarm Processing for Power Networks (APPN) domain (Haslum & Grastien, 2011). ... Genome Edit Distance (GED) domain (Haslum, 2011)."
Dataset Splits | No | "For experiments 2 and 3, we selected from each domain the 10 last instances for which a base plan exists. (In some domains less than 10 instances are solved by LAMA/IBaCoP2, which is why the total is 182 rather than 210.) For domains that appeared in more than one competition, we used instances only from the IPC 2011 set."
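The instance-selection protocol quoted above (per domain, take the last 10 instances for which a base plan exists, fewer if fewer are solved) can be sketched as follows; the function and parameter names are illustrative, not from the paper:

```python
def select_instances(domains, has_base_plan, k=10):
    """Select up to the last k instances per domain that have a base plan.

    domains maps domain name -> list of instance ids in competition order;
    has_base_plan(domain, instance) says whether a base plan was found
    (e.g., by LAMA or IBaCoP2). Domains where fewer than k instances are
    solved contribute fewer instances, which is how the paper's total
    comes to 182 rather than 21 * 10 = 210.
    """
    selected = {}
    for domain, instances in domains.items():
        solved = [i for i in instances if has_base_plan(domain, i)]
        selected[domain] = solved[-k:]  # the last k solved instances
    return selected
```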
Hardware Specification | Yes | "All experiments were run on 6-core, 3.1 GHz AMD CPUs with 6 MB L2 cache, with an 8 GB memory limit for every system."
Software Dependencies | No | The paper mentions several planners and tools used in comparison or as subplanners (e.g., LAMA (Richter & Westphal, 2010; IPC 2011 version), AEES (implemented in the Fast Downward code base), IBCS, PNGS, IBaCoP2 (Cenamor et al., 2014), LPG (Gerevini & Serina, 2002), and Arvand (Nakhost & Müller, 2009)). However, it does not provide specific version numbers for the ancillary software dependencies required to build or run BDPO2 itself (e.g., a specific Fast Downward version, or other libraries).
Experiment Setup | Yes | "We have used a limit of 15 seconds, increasing by another 15 seconds for each retry. ... The threshold we have used for switching the ranking policy is 13. ... The limits we have used are 120 seconds and 20 windows, respectively. ... Our implementation of BSS does not use divide-and-conquer solution reconstruction, and was run with a beam width of 500. ... Each system was run for up to 7 hours CPU time per problem. ... we allocated 1 hour CPU time for generating each base plan."
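The subplanner time-limit schedule quoted above (15 seconds on the first attempt, growing by another 15 seconds on each retry) is simple enough to state exactly; a minimal sketch, with the function name chosen for illustration rather than taken from the paper:

```python
def retry_time_limit(attempt, base=15, step=15):
    """Time limit in seconds for a given attempt number (0 = first try).

    The first attempt gets `base` seconds; each subsequent retry adds
    `step` more, matching the 15 s / +15 s schedule reported in the
    experiment setup.
    """
    return base + step * attempt
```

So the first three attempts would run under limits of 15, 30, and 45 seconds, respectively.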