C2F-TP: A Coarse-to-Fine Denoising Framework for Uncertainty-Aware Trajectory Prediction
Authors: Zichen Wang, Hao Miao, Senzhang Wang, Renzhi Wang, Jianxin Wang, Jian Zhang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on two real-world datasets, NGSIM and highD, that are widely adopted in trajectory prediction. The results demonstrate the effectiveness of our proposal. |
| Researcher Affiliation | Academia | Zichen Wang¹, Hao Miao², Senzhang Wang¹*, Renzhi Wang¹, Jianxin Wang¹, Jian Zhang¹ (¹Central South University, ²Aalborg University) |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code https://github.com/wangzc0422/C2F-TP |
| Open Datasets | Yes | The experiments are conducted on two datasets, NGSIM (Deo and Trivedi 2018) and highD (Krajewski et al. 2018), that are widely adopted in trajectory prediction. |
| Dataset Splits | Yes | We split the dataset into training, validation, and testing sets with a splitting ratio of 7 : 2 : 1. |
| Hardware Specification | Yes | We implement our model with the PyTorch framework on a GPU server with an NVIDIA 3090 GPU. |
| Software Dependencies | No | The paper mentions implementing the model with the 'PyTorch framework' but does not specify a version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | The parameters in the model are set as follows. We employ a 13 × 5 grid defined around the target vehicle, where each column corresponds to a single lane and the rows are separated by a distance of 15 feet. The hidden features of the MLP layers are set to 32 with ReLU as the activation function. To train the coarse-to-fine framework, we consider a two-stage training strategy, where the first stage trains the denoising module and the second stage focuses on training the spatial-temporal interaction module. The details of the two-stage prediction process are given in the associated code repository. Each trajectory is split into segments over an 8 s horizon, containing the past (3 s) and future (5 s) positions sampled at 5 Hz. |
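The reported segmentation (8 s windows at 5 Hz, split into 3 s of past and 5 s of future positions) can be sketched as follows. This is a minimal illustration of the stated windowing scheme, not the authors' actual preprocessing code; the function name and array layout are assumptions.

```python
import numpy as np

# Segmentation parameters as reported in the paper's setup.
HZ = 5                         # sampling rate: 5 Hz
PAST_LEN = 3 * HZ              # 3 s of past positions  -> 15 frames
FUT_LEN = 5 * HZ               # 5 s of future positions -> 25 frames
SEG_LEN = PAST_LEN + FUT_LEN   # 8 s horizon -> 40 frames per segment

def segment_trajectory(track):
    """Slide an 8 s window over a (T, 2) array of (x, y) positions,
    yielding (past, future) pairs of shape (15, 2) and (25, 2)."""
    segments = []
    for start in range(len(track) - SEG_LEN + 1):
        window = track[start:start + SEG_LEN]
        segments.append((window[:PAST_LEN], window[PAST_LEN:]))
    return segments

# Example: a 12 s track at 5 Hz has 60 positions -> 21 overlapping segments.
track = np.random.rand(60, 2)
pairs = segment_trajectory(track)
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)  # 21 (15, 2) (25, 2)
```

With these constants, each segment pairs 15 observed positions with 25 positions to predict, matching the 3 s / 5 s split described above.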