Diffusion-Based Planning for Autonomous Driving with Flexible Guidance

Authors: Yinan Zheng, Ruiming Liang, Kexin Zheng, Jinliang Zheng, Liyuan Mao, Jianxiong Li, Weihao Gu, Rui Ai, Shengbo Li, Xianyuan Zhan, Jingjing Liu

ICLR 2025

Reproducibility Assessment (variable, result, and supporting LLM response)
Research Type: Experimental. "Evaluations on the large-scale real-world autonomous planning benchmark nuPlan and our newly collected 200-hour delivery-vehicle driving dataset demonstrate that Diffusion Planner achieves state-of-the-art closed-loop performance with robust transferability in diverse driving styles." (Section 5, Experiments)
Researcher Affiliation: Collaboration. 1 Tsinghua University; 2 Institute of Automation, Chinese Academy of Sciences; 3 The Chinese University of Hong Kong; 4 Shanghai Jiao Tong University; 5 HAOMO.AI; 6 Shanghai Artificial Intelligence Laboratory.
Pseudocode: No. No section or figure explicitly labeled "Pseudocode" or "Algorithm" was found in the paper; the methodology is described in narrative text and mathematical formulations.
Open Source Code: No. Project website: https://zhengyinan-air.github.io/Diffusion-Planner/. "We have collected and evaluated a new 200-hour delivery-vehicle dataset, which is compatible with the nuPlan framework, and we will open-source it."
Open Datasets: Yes. "Evaluations on the large-scale real-world autonomous planning benchmark, nuPlan (Caesar et al., 2021), to compare Diffusion Planner with other state-of-the-art planning methods."
Dataset Splits: Yes. "We use the training data from the nuPlan dataset and sample 1 million scenarios for our training set. The Val14 (Dauner et al., 2023b), Test14, and Test14-hard benchmarks (Cheng et al., 2023) are utilized, with all experimental results tested in both closed-loop non-reactive and reactive modes."
Hardware Specification: Yes. "Training was conducted using 8 NVIDIA A100 80GB GPUs, with a batch size of 2048 over 500 epochs, with a 5-epoch warmup phase." "...the model achieves an inference frequency of 20 Hz on a single A6000 GPU."
Software Dependencies: No. The paper mentions using the "AdamW optimizer" and "DPM-Solver++" but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup: Yes. "Training was conducted using 8 NVIDIA A100 80GB GPUs, with a batch size of 2048 over 500 epochs, with a 5-epoch warmup phase. We use the AdamW optimizer with a learning rate of 5e-4. We report the detailed setup in Table 5."
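The quoted setup (AdamW, learning rate 5e-4, 5-epoch warmup over 500 epochs) can be sketched as a simple learning-rate schedule. This is a minimal sketch, not the authors' implementation: the paper excerpt does not state the warmup shape or any post-warmup decay, so linear warmup and a constant rate afterward are assumptions.

```python
# Sketch of the reported training schedule, assuming linear warmup
# to the base rate and a constant rate thereafter (shape not specified
# in the paper excerpt).
BASE_LR = 5e-4       # learning rate reported in the paper
WARMUP_EPOCHS = 5    # warmup phase reported in the paper
TOTAL_EPOCHS = 500   # total epochs reported in the paper

def lr_at_epoch(epoch: int) -> float:
    """Learning rate for a given 0-indexed epoch."""
    if epoch < WARMUP_EPOCHS:
        # Linear ramp from BASE_LR / WARMUP_EPOCHS up to BASE_LR (assumed).
        return BASE_LR * (epoch + 1) / WARMUP_EPOCHS
    return BASE_LR  # constant after warmup (assumed; no decay is mentioned)
```

In a PyTorch training loop, a schedule like this would typically be wired in via `torch.optim.AdamW` together with a `torch.optim.lr_scheduler.LambdaLR` wrapper.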