Int2Planner: An Intention-based Multi-modal Motion Planner for Integrated Prediction and Planning

Authors: Xiaolei Chen, Junchi Yan, Wenlong Liao, Tao He, Pai Peng

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments are conducted on both the private dataset and the public nuPlan dataset. The results demonstrate that the proposed route intention points effectively improve the motion planning ability and Int2Planner achieves state-of-the-art planning performance on these datasets. We also deploy it in real-world vehicles and have conducted autonomous driving for hundreds of kilometers in urban areas."
Researcher Affiliation | Collaboration | 1 School of Artificial Intelligence & Department of CSE & MoE Lab of AI, Shanghai Jiao Tong University; 2 COWAROBOT Co. Ltd.; 3 School of Electronic Engineering, University of South China
Pseudocode | No | The paper describes the methodology using textual explanations, mathematical equations, and block diagrams (Figure 1), but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Code and Datasets: https://github.com/cxlz/Int2Planner"
Open Datasets | Yes | "We conduct experiments on a private dataset from an autonomous driving corporation... This dataset will be available at https://github.com/cxlz/Int2Planner. In addition, experiments are also conducted on nuPlan (Caesar et al. 2021), a large-scale benchmark for planning tasks in autonomous driving."
Dataset Splits | Yes | "The private dataset contains 680,964 traffic scenarios and each scenario contains 6.5 seconds of trajectory data at 10 Hz, in which 626,459 scenarios are used as the train set and the remaining 54,505 scenarios serve as the validation set. ... For the nuPlan dataset, we adjust th = 20 and tf = 80 to match the requirements of the nuPlan benchmark, while keeping other hyper-parameters unchanged."
Hardware Specification | Yes | "We train Int2Planner on 8 NVIDIA RTX 4090 GPUs for 30 epochs with a total batch size of 96."
Software Dependencies | No | The paper mentions using the AdamW optimizer but does not specify its version, or the versions of any other key software libraries or programming languages used for implementation.
Experiment Setup | Yes | "We train Int2Planner on 8 NVIDIA RTX 4090 GPUs for 30 epochs with a total batch size of 96. During the training process, the AdamW optimizer is used with an initial learning rate of 1×10^-4 and a weight decay of 0.01. For the private dataset, we use th = 15 historical points and tf = 50 future points for each agent. The feature dimension of the context embedding, the route embedding, and the hidden dimension of the attention layers are all set to 128. The distance interval dr used to sample route intention points is set to 4 meters, the number of intention points Nq is set to 64, and the number of decoder iterations K is set to 6."
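The hyper-parameters quoted in the Experiment Setup and Dataset Splits rows can be collected into one place for reference. The sketch below is illustrative only: the class and field names (e.g. `TrainConfig`, `t_history`) are assumptions and do not come from the released Int2Planner code; only the numeric values are taken from the paper's reported settings.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    # Values as reported in the paper; names are illustrative,
    # not taken from the released repository.
    num_gpus: int = 8              # NVIDIA RTX 4090
    epochs: int = 30
    total_batch_size: int = 96
    optimizer: str = "AdamW"
    learning_rate: float = 1e-4
    weight_decay: float = 0.01
    t_history: int = 15            # th: historical points (private dataset)
    t_future: int = 50             # tf: future points (private dataset)
    hidden_dim: int = 128          # context/route embedding and attention dims
    route_interval_m: float = 4.0  # dr: spacing of route intention points
    num_intention_points: int = 64 # Nq
    decoder_iterations: int = 6    # K

# Per the paper, the nuPlan benchmark variant changes only the horizons.
nuplan_cfg = TrainConfig(t_history=20, t_future=80)
print(nuplan_cfg.t_history, nuplan_cfg.t_future)  # 20 80
```

Keeping the nuPlan variant as an override of the private-dataset defaults mirrors the paper's statement that all other hyper-parameters are left unchanged between the two benchmarks.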