Monte Carlo Tree Diffusion for System 2 Planning

Authors: Jaesik Yoon, Hyeonseo Cho, Doojin Baek, Yoshua Bengio, Sungjin Ahn

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed approach, MCTD, on a suite of tasks from the Offline Goal-conditioned RL Benchmark (OGBench) (Park et al., 2025), which spans diverse domains such as maze navigation with multiple robot morphologies (e.g., point-mass or ant) and robot-arm manipulation. Our chosen tasks (pointmaze and antmaze navigation, multi-cube manipulation, and a newly introduced visual pointmaze) jointly assess a planner's ability to handle long-horizon planning, sequential manipulation, and partial visual observability. In the visual pointmaze, an agent perceives RGB image observations of the 3D environment, thereby testing each method's resilience to partial observability and its ability to handle image-based planning. Detailed experimental settings are provided in Appendix A.
Researcher Affiliation | Collaboration | KAIST, SAP, Mila, New York University. Correspondence to: Jaesik Yoon <EMAIL>, Sungjin Ahn <EMAIL>.
Pseudocode | Yes | Algorithm 1: Monte Carlo Tree Diffusion ... Algorithm 10: Additional Controller/Policy/Inverse Dynamics Model Integration
Open Source Code | No | The paper refers to the official repositories of its baselines (Diffuser, Diffusion Forcing, PlanDQ) but provides no link to, or statement about, open-source code for the proposed method, MCTD.
Open Datasets | Yes | "We evaluate the proposed approach, MCTD, on a suite of tasks from the Offline Goal-conditioned RL Benchmark (OGBench) (Park et al., 2025)"
Dataset Splits | No | The paper mentions following OGBench task configurations and evaluating 10 random seeds per model. It also states: "The trajectory horizons in the original datasets are 1000 for Medium and Large, and 2000 for Giant. We used a shorter planning horizon than the dataset horizon to generate more data through the sliding window technique." However, it does not explicitly provide training/test/validation splits for the datasets used in the experiments.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or cloud instance specifications) used for running the experiments.
Software Dependencies | No | The paper mentions using Diffuser, Diffusion Forcing, and PlanDQ as components or baselines, and a Transformer-based model. However, it does not provide version numbers for any software, libraries, or frameworks, which would be necessary for reproduction.
Experiment Setup | Yes | For reproducibility, we detail the hyperparameters used in our experiments. These settings were selected based on prior work and empirical tuning to ensure stable training and evaluation. Nearly identical hyperparameters were applied consistently across all tasks, except where task-specific configurations were necessary, which are discussed in their respective sections. (e.g., Table 8: Diffuser Hyperparameters; Table 9: Diffusion Forcing Hyperparameters; Table 10: MCTD Hyperparameters; Table 11: Value-Learning Policy Hyperparameters)
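The paper's pseudocode (Algorithms 1-10) is not reproduced in this report. At a high level, MCTD embeds diffusion denoising inside a Monte Carlo Tree Search loop. As a rough orientation only, a generic UCT-style skeleton with the diffusion model stubbed out might look like the following; all class, function, and parameter names here are illustrative assumptions, not the authors' code:

```python
import math
import random


class Node:
    """One node of the search tree, holding a (stubbed) partially denoised plan."""

    def __init__(self, plan, parent=None):
        self.plan = plan          # stand-in for a partially denoised trajectory
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0          # running sum of backed-up rewards


def uct_select(node, c=1.4):
    # Pick the child maximizing the UCB1 score (exploitation + exploration).
    return max(
        node.children,
        key=lambda ch: ch.value / (ch.visits + 1e-9)
        + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)),
    )


def mctd_search(root_plan, expand, rollout, iterations=100):
    """Generic MCTS loop. `expand` and `rollout` are placeholders standing in
    for MCTD's denoising-based expansion and fast plan evaluation; this is a
    sketch of the search structure, not the paper's algorithm."""
    root = Node(root_plan)
    for _ in range(iterations):
        # 1. Selection: descend via UCB until reaching a leaf.
        node = root
        while node.children:
            node = uct_select(node)
        # 2. Expansion: create child plans (one per candidate refinement).
        for child_plan in expand(node.plan):
            node.children.append(Node(child_plan, parent=node))
        leaf = random.choice(node.children) if node.children else node
        # 3. Simulation: cheaply estimate the value of the leaf's plan.
        reward = rollout(leaf.plan)
        # 4. Backpropagation: update statistics along the path to the root.
        while leaf is not None:
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    # Return the plan of the most-visited child of the root.
    return max(root.children, key=lambda ch: ch.visits).plan
```

With a toy `expand` that appends a 0 or 1 to the plan and a `rollout` that rewards plans containing more 1s, the search concentrates visits on the 1-branch, illustrating how value estimates steer the tree.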
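The "sliding window technique" quoted in the Dataset Splits row (cutting planning windows shorter than the 1000- or 2000-step dataset trajectories to generate more training segments) can be sketched as follows; the function and parameter names are illustrative, not taken from the paper:

```python
def sliding_windows(trajectory, window, stride=1):
    """Cut overlapping fixed-length windows from one long trajectory.

    Using a planning horizon shorter than the dataset horizon (e.g. a
    500-step window over a 1000-step trajectory) turns each trajectory
    into many overlapping training segments.
    """
    return [
        trajectory[i : i + window]
        for i in range(0, len(trajectory) - window + 1, stride)
    ]


# A 1000-step trajectory with a 500-step window and stride 100 yields
# 6 segments, starting at steps 0, 100, 200, 300, 400, and 500.
segments = sliding_windows(list(range(1000)), window=500, stride=100)
```

The stride trades off dataset size against redundancy: a stride of 1 maximizes the number of segments, while larger strides reduce overlap between them.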