Topo2Seq: Enhanced Topology Reasoning via Topology Sequence Learning
Authors: Yiming Yang, Yueru Luo, Bingkun He, Erlong Li, Zhipeng Cao, Chao Zheng, Shuqi Mei, Zhen Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental evaluations on the OpenLane-V2 dataset demonstrate the state-of-the-art performance of Topo2Seq in topology reasoning. |
| Researcher Affiliation | Collaboration | 1FNii-Shenzhen, Shenzhen, China 2SSE, CUHK-Shenzhen, Shenzhen, China 3SCSE, Wuhan University, Wuhan, China 4T Lab, Tencent, Beijing, China {yimingyang@link., lizhen@}cuhk.edu.cn |
| Pseudocode | No | The paper describes the methodology in prose and through diagrams, but it does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We evaluate our Topo2Seq model on the OpenLane-V2 dataset (Wang et al. 2024), a recently released open-source dataset specifically designed to focus on topology reasoning in autonomous driving. OpenLane-V2 is derived from the Argoverse2 (Wilson et al. 2023) and nuScenes (Caesar et al. 2020) datasets. |
| Dataset Splits | Yes | The training set includes approximately 27,000 frames, and the validation set contains around 4,800 frames. |
| Hardware Specification | Yes | Due to resource limitations, we train our network on 4 NVIDIA A100 GPUs with a total batch size of 4. |
| Software Dependencies | No | The paper mentions software components like FPN and BEVFormer but does not provide specific version numbers for these or other key software dependencies (e.g., Python, PyTorch). |
| Experiment Setup | Yes | The initial learning rate is 2×10⁻⁴ with a cosine annealing schedule during training. AdamW (Kingma and Ba 2015) is adopted as the optimizer. The values of α1, α2, α3, α4, α5, and α6 are set to 0.025, 1.5, 3.0, 0.1, 5.0, and 1.0, respectively. We ensure that each sample undergoes the same number of training iterations as in recent works. |
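The reported setup (initial learning rate 2×10⁻⁴ with cosine annealing, and six loss weights α1–α6) can be sketched in plain Python. This is a minimal illustration, not the authors' code; the function names, the minimum learning rate of 0, and the assumption that the six weights combine as a simple weighted sum of loss terms are our own.

```python
import math

def cosine_annealed_lr(step, total_steps, lr_init=2e-4, lr_min=0.0):
    """Cosine annealing from lr_init down to lr_min over total_steps.
    lr_min=0.0 is an assumption; the paper only states the initial rate."""
    return lr_min + 0.5 * (lr_init - lr_min) * (1 + math.cos(math.pi * step / total_steps))

# Loss weights alpha1..alpha6 as reported in the paper.
ALPHAS = (0.025, 1.5, 3.0, 0.1, 5.0, 1.0)

def total_loss(terms):
    """Hypothetical combination: weighted sum of the six loss terms."""
    assert len(terms) == len(ALPHAS)
    return sum(a * t for a, t in zip(ALPHAS, terms))
```

At step 0 the schedule returns the reported 2×10⁻⁴ and decays smoothly to 0 at the final step; how the six terms are actually combined in the model's objective is not specified beyond the weights.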