Discovering Options That Minimize Average Planning Time

Authors: Alexander Ivanov, Akhil Bagaria, George Konidaris

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We empirically evaluate our method on four discrete and two continuous control planning domains where it outperforms other leading option discovery algorithms." ... "We test both algorithms on six control problems and show that they outperform other option discovery algorithms (Jinnai et al. 2019b; Machado et al. 2018) with a strong theoretical basis." ... "4 Empirical Results"
Researcher Affiliation | Collaboration | "Alexander Ivanov1, Akhil Bagaria2*, George Konidaris1 — 1 Brown University, Providence, RI, USA; 2 Amazon, New York, NY, USA ... alexander EMAIL, EMAIL, EMAIL"
Pseudocode | No | The paper describes its algorithm as numbered lists of steps in Section 3.1, but these are descriptive instructions rather than structured pseudocode blocks formatted like code.
Open Source Code | No | The paper neither states that source code for the described methodology is released nor links to a code repository.
Open Datasets | Yes | "We qualitatively evaluate our algorithms on two discrete grid-world domains: 9x9 grid and Four-Rooms (Sutton, Precup, and Singh 1999)." ... "We compare Fast Average Options to Covering Options and Eigen Options on Ant U-maze (Fu et al. 2020) and Fetch-Reach (Plappert et al. 2018)."
Dataset Splits | Yes | "In Ant U-maze, ... error bands are standard error over 156 start-goal pairs; lower is better." ... "In Figures 5 and 6, we see that Fast Average Options outperforms both Covering Options and Eigen Options. We also note that Average Options provides a greater improvement ... error bands are standard error over 81 start-goal pairs; lower is better."
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU models) used to run the experiments.
Software Dependencies | No | The paper does not list the software dependencies or library versions used in the experiments.
Experiment Setup | Yes | "For Average Options we use a k-median-with-penalties subroutine with an approximation ratio of α = 5 for improved run-time (Arya et al. 2001)." ... "To use our method in continuous domains, we first discretize the state-space using a set covering ... Then we build a graph on the set covering by joining states within distance ϵ of each other or if they are k-nearest neighbors ... The radius of the ϵ-balls used for the set cover is the discretization factor, ρ." ... "Figure 3: Discretized graph on Ant U-Maze with ρ = 6.5, ϵ = 0.5, k = 10." ... "For each method, 4, 8, and 16 options are constructed using the discretized graph ..." ... "To quantitatively evaluate all methods in continuous domains, we use Random Shooting (Tedrake 2022), a simple trajectory optimization algorithm, for planning, where the planner samples 400 trajectories composed of primitive actions and learned options, then executes the trajectory that minimizes the Euclidean distance to the goal. Trajectory optimization is repeated until the goal state is reached or 10,000 actions are sampled."
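The Random Shooting planner quoted in the Experiment Setup row can be sketched in a few lines. The environment below is a toy stand-in, not the paper's Ant U-maze or Fetch-Reach: a 2-D point world whose "primitive actions" are unit steps and whose two "options" are hypothetical fixed macro-jumps. Only the planning loop itself (sample 400 candidate trajectories over primitives and options, execute the one whose endpoint is closest in Euclidean distance to the goal, and repeat until the goal is reached or the action budget is spent) follows the description quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy action set: four unit-step primitives plus two hypothetical "options"
# modeled as fixed macro-jumps (stand-ins for the paper's learned options).
PRIMITIVES = [np.array(a, dtype=float) for a in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
OPTIONS = [np.array(o, dtype=float) for o in [(5, 0), (0, 5)]]
ACTIONS = PRIMITIVES + OPTIONS


def rollout(state, plan):
    """Endpoint of applying a sequence of action displacements to a state."""
    s = state.copy()
    for a in plan:
        s = s + a
    return s


def random_shooting(start, goal, n_traj=400, horizon=5, max_actions=10_000):
    """Random Shooting sketch: sample n_traj random action sequences,
    execute the one whose endpoint minimizes Euclidean distance to the
    goal, and repeat until the goal is reached or the action budget
    (here counted as actions executed) is exhausted."""
    state, used = np.asarray(start, dtype=float).copy(), 0
    while used < max_actions and np.linalg.norm(state - goal) > 1e-9:
        # Sample candidate trajectories uniformly over primitives and options.
        plans = [
            [ACTIONS[i] for i in rng.integers(len(ACTIONS), size=horizon)]
            for _ in range(n_traj)
        ]
        # Pick the plan whose endpoint is closest to the goal.
        best = min(plans, key=lambda p: np.linalg.norm(rollout(state, p) - goal))
        # Execute it step by step, stopping early if the goal is hit mid-plan.
        for a in best:
            state = state + a
            used += 1
            if np.linalg.norm(state - goal) <= 1e-9:
                break
    return state, used


final, used = random_shooting(np.zeros(2), np.array([10.0, 5.0]))
```

In this toy world the greedy replanning loop steers the point to the goal well within the 10,000-action budget; in the paper's continuous domains the same loop would instead roll candidate trajectories through the simulator.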