Interpreting Emergent Planning in Model-Free Reinforcement Learning

Authors: Thomas Bush, Stephen Chung, Usman Anwar, Adrià Garriga-Alonso, David Krueger

ICLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental We present the first mechanistic evidence that model-free reinforcement learning agents can learn to plan. This is achieved by applying a methodology based on concept-based interpretability to a model-free agent in Sokoban, a commonly used benchmark for studying planning. Specifically, we demonstrate that DRC, a generic model-free agent introduced by Guez et al. (2019), uses learned concept representations to internally formulate plans that both predict the long-term effects of actions on the environment and influence action selection. Our methodology involves: (1) probing for planning-relevant concepts, (2) investigating plan formation within the agent's representations, and (3) verifying that discovered plans (in the agent's representations) have a causal effect on the agent's behavior through interventions. We also show that the emergence of these plans coincides with the emergence of a planning-like property: the ability to benefit from additional test-time compute. Finally, we perform a qualitative analysis of the planning algorithm learned by the agent and discover a strong resemblance to parallelized bidirectional search.
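The concept-probing step described above can be sketched as a linear probe trained on the agent's cell-state activations. The function name, channel count, and synthetic data below are illustrative assumptions for the sketch, not the paper's implementation; the real probes are trained on activations collected from the DRC agent while it plays Boxoban levels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_square_probe(activations, labels):
    """Fit a linear probe predicting a square-level concept
    (e.g. 'the agent will step onto this square') from DRC
    cell-state activations.

    activations: (n_samples, n_channels) array, one row per
        (transition, square) pair.
    labels: (n_samples,) array of binary concept labels.
    """
    probe = LogisticRegression(max_iter=1000)
    probe.fit(activations, labels)
    return probe

# Illustrative usage with synthetic data (32 channels is an assumption).
rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 32))
labs = (acts[:, 0] > 0).astype(int)  # synthetic, linearly decodable labels
probe = train_square_probe(acts, labs)
print(probe.score(acts, labs))       # training accuracy on the toy data
```

A linear (rather than nonlinear) probe matters here: if a linear probe succeeds, the concept is represented explicitly enough in the activations for the interventions described later to add or subtract the probe's weight vector directly.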
Researcher Affiliation Collaboration 1University of Cambridge, 2FAR AI, 3Mila, University of Montreal
Pseudocode Yes Algorithm 1: Agent-Shortcut Intervention
1: Short Route Squares ← all positions (x, y) on the short route
2: (x0, y0) ← the first square (x, y) of the long route
3: Long Route Squares Dirs ← the first p squares (x, y) that the agent would step onto if following the longer route, and the direction DIR in which it would step onto them
4: for t in 1, 2, ..., Episode Length do
5:     for (x, y) in Short Route Squares do          ▷ Short-route intervention
6:         c(x,y) ← c(x,y) + α · w_CA=NEVER
7:     if the agent has not moved onto (x0, y0) this episode then
8:         for ((x, y), DIR) in Long Route Squares Dirs do          ▷ Directional intervention
9:             c(x,y) ← c(x,y) + α · w_CA=DIR
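One timestep of the intervention in Algorithm 1 can be sketched as follows. This is a minimal sketch, not the paper's code: the array shapes, the `w_never`/`w_dir` names for the probe weight vectors, and the boolean flag are illustrative assumptions.

```python
import numpy as np

def apply_shortcut_intervention(cell_state, short_route, long_route_dirs,
                                w_never, w_dir, alpha, agent_on_long_route):
    """Apply one timestep of the agent-shortcut intervention.

    cell_state: (H, W, C) DRC cell state for the current tick.
    short_route: list of (x, y) squares on the short route.
    long_route_dirs: list of ((x, y), direction) pairs for the first
        p squares of the long route.
    w_never: (C,) probe weight vector for 'agent never steps here'.
    w_dir: dict mapping a direction label to its (C,) probe weights.
    alpha: intervention strength.
    agent_on_long_route: True once the agent has entered the long route.
    """
    out = cell_state.copy()
    # Short-route intervention: push every short-route square toward
    # the 'agent will never step onto this square' concept direction.
    for (x, y) in short_route:
        out[x, y] += alpha * w_never
    # Directional intervention: until the agent commits to the long
    # route, push its first p squares toward the matching step direction.
    if not agent_on_long_route:
        for (x, y), d in long_route_dirs:
            out[x, y] += alpha * w_dir[d]
    return out

# Illustrative usage: 3x3 grid, 2 channels, one square per route.
cell = np.zeros((3, 3, 2))
out = apply_shortcut_intervention(
    cell, short_route=[(0, 0)],
    long_route_dirs=[((1, 1), "UP")],
    w_never=np.ones(2), w_dir={"UP": np.full(2, 2.0)},
    alpha=0.5, agent_on_long_route=False)
```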
Open Source Code No The paper does not explicitly state that source code for the methodology described in this paper is openly available. It references a GitHub link for a dataset used (Boxoban levels) but not for their own implementation.
Open Datasets Yes The agent is trained for 250 million transitions on the unfiltered Boxoban training set (Guez et al., 2018a) using a similar training setup as Guez et al. (2019) as explained in Appendix E.4. Appendix E.5 shows that, consistent with Guez et al. (2019), this agent exhibits planning-like behavior. [...] Arthur Guez, Mehdi Mirza, Karol Gregor, Rishabh Kabra, Sébastien Racanière, Théophane Weber, David Raposo, Adam Santoro, Laurent Orseau, Tom Eccles, et al. An investigation of model-free planning: boxoban levels. https://github.com/deepmind/boxoban-levels/, 2018a.
Dataset Splits Yes The training dataset is generated by running the agent for 3000 episodes on levels from the Boxoban unfiltered training dataset (Guez et al., 2018a). We test probes on a test set of transitions generated by running the agent for 1000 episodes on levels from the Boxoban unfiltered validation dataset.
Hardware Specification No This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk).
Software Dependencies No We train the agent using a discount rate of γ = 0.97 and V-trace target of λ = 0.97. The agent is trained by additionally imposing an L2 penalty of size 1e-3 on the action logits, L2 regularisation of strength 1e-5 on the policy and value heads, and adding an entropy penalty of strength 1e-2 on the policy. Optimisation is performed using backpropagation through time with an unroll length of 20. We use the Adam optimiser (Kingma & Ba, 2015) with a batch size of 16 and a learning rate that decays linearly from 4e-4 to 0.
Experiment Setup Yes The DRC agent we investigate is trained on 900k levels from the unfiltered training set of the Boxoban dataset (Guez et al., 2018a). The agent is trained in an actor-critic setting using IMPALA (Espeholt et al., 2018) for 250 million transitions. We train the agent using a discount rate of γ = 0.97 and V-trace target of λ = 0.97. The agent is trained by additionally imposing an L2 penalty of size 1e-3 on the action logits, L2 regularisation of strength 1e-5 on the policy and value heads, and adding an entropy penalty of strength 1e-2 on the policy. Optimisation is performed using backpropagation through time with an unroll length of 20. We use the Adam optimiser (Kingma & Ba, 2015) with a batch size of 16 and a learning rate that decays linearly from 4e-4 to 0.
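The learning-rate schedule quoted above (linear decay from 4e-4 to 0 over training) can be written down directly. This is a minimal sketch of the schedule only; the update count is inferred from the quoted batch size and unroll length, not stated in the paper.

```python
def linear_lr(update, total_updates, lr_init=4e-4, lr_final=0.0):
    """Learning rate at a given update, decaying linearly from
    lr_init at update 0 to lr_final at total_updates."""
    frac = min(update / total_updates, 1.0)
    return lr_init + frac * (lr_final - lr_init)

# Rough update count implied by 250M transitions with batch size 16
# and unroll length 20 (an inference, not a figure from the paper).
total_updates = int(250e6 / (16 * 20))
print(linear_lr(0, total_updates))              # 0.0004
print(linear_lr(total_updates, total_updates))  # 0.0
```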