STraj: Self-training for Bridging the Cross-Geography Gap in Trajectory Prediction

Authors: Zhanwei Zhang, Minghao Chen, Zhihong Gu, Xinkui Zhao, Zheng Yang, Binbin Lin, Deng Cai, Wenxiao Wang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiment results on various cross-geography trajectory prediction benchmarks demonstrate the effectiveness of STraj. ... Extensive experiment results validate its effectiveness and generalization ability. ... As shown in Table 1, our STraj surpasses all competitive predictors by convincing margins across various cross-geography tasks in most cases. ... We conduct several ablation studies on the MIA → PIT task with Lane GCN (Liang et al. 2020), evaluated with K=1. Architecture Designs. As shown in Table 2, we compare the results of using different components.
Researcher Affiliation | Collaboration | 1 State Key Lab of CAD&CG, Zhejiang University; 2 Hangzhou Dianzi University; 3 Beijing Automobile Works; 4 School of Software Technology, Zhejiang University; 5 FABU Inc.
Pseudocode | Yes | Algorithm 1: Algorithm of the Pseudo Label Update Strategy
Open Source Code | Yes | Code: https://github.com/Zhanwei-Z/STraj
Open Datasets | Yes | We evaluate our proposed STraj on the widely used trajectory prediction dataset Argoverse 1 (Chang et al. 2019). Argoverse 1 comprises more than 300K real-world driving sequences collected in two geographically diverse cities, i.e., Miami (MIA) and Pittsburgh (PIT).
Dataset Splits | Yes | We split half of the validation sets as the test sets for the convenience of separately evaluating each domain. The detailed cross-geography UDA experiments on Argoverse 1 are as follows: MIA → PIT and PIT → MIA.
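The split quoted above (half of each city's validation set held out as that city's test set, so MIA and PIT can be evaluated separately) can be sketched as follows. The function name and the even, deterministic split convention are illustrative assumptions, not taken from the STraj codebase.

```python
def split_val_into_val_and_test(val_sequences):
    """Hold out half of a domain's validation sequences as its test set.

    Hypothetical sketch: the paper states half of the validation set
    becomes the test set, but not how the halves are chosen; a simple
    deterministic midpoint split is assumed here.
    """
    mid = len(val_sequences) // 2
    return val_sequences[:mid], val_sequences[mid:]


# Usage: apply once per domain so each city gets its own test set.
mia_val, mia_test = split_val_into_val_and_test(list(range(10)))
```

Applying the same function per domain keeps the two cities' evaluation sets disjoint, matching the report's "separately evaluating each domain".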
Hardware Specification | Yes | In the pre-training and training process, we exploit Adam (Kingma and Ba 2014) with learning rate 1.5 · 10⁻³ for 30 epochs, and train the model on four A6000 GPUs.
Software Dependencies | No | The paper mentions using the Adam optimizer and building upon Lane GCN and HPNet, but does not specify versions for programming languages, libraries, or frameworks such as Python, PyTorch, or TensorFlow.
Experiment Setup | Yes | In the pre-training process, we set ρ_a, r, and the weight of L_MSE as 1, 10, and 0.01, respectively. For the update strategy, T_U and ρ_t are set as 3/2 and 2. We set T_c as a dynamic threshold that exceeds half of the confidence scores of all target-domain samples in the current epoch. In the trajectory-induced contrastive learning module, we set the inter-domain and intra-domain ρ_c as 1 and 2, respectively. The trade-off parameter η is set as 0.1. Our STraj builds upon a popular predictor, Lane GCN (Liang et al. 2020), and a state-of-the-art (SOTA) predictor, HPNet (Tang et al. 2024), for Argoverse 1, following their default model parameters. In the pre-training and training process, we exploit Adam (Kingma and Ba 2014) with learning rate 1.5 · 10⁻³ for 30 epochs.
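The hyperparameters quoted above can be collected into a single configuration sketch. The dict keys mirror the paper's symbols but are our own naming; likewise, reading the dynamic threshold T_c ("exceeds half the confidence scores of all target-domain samples in the current epoch") as the per-epoch median confidence is an assumption, not the authors' stated implementation.

```python
import statistics

# Sketch of the reported setup; names and structure are illustrative.
CONFIG = {
    "rho_a": 1,          # pre-training weight ρ_a
    "r": 10,
    "w_mse": 0.01,       # weight of L_MSE
    "T_U": 3 / 2,        # pseudo-label update strategy parameter
    "rho_t": 2,
    "rho_c_inter": 1,    # inter-domain ρ_c (contrastive module)
    "rho_c_intra": 2,    # intra-domain ρ_c
    "eta": 0.1,          # trade-off parameter η
    "lr": 1.5e-3,        # Adam learning rate
    "epochs": 30,
}


def dynamic_threshold(confidences):
    """T_c for the current epoch.

    Assumed reading: a value exceeded by half of the target-domain
    confidence scores, i.e. their median.
    """
    return statistics.median(confidences)
```

In a training loop, `dynamic_threshold` would be recomputed each epoch from the current target-domain confidence scores before filtering pseudo labels.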