ST-ReP: Learning Predictive Representations Efficiently for Spatial-Temporal Forecasting
Authors: Qi Zheng, Zihao Yao, Yaying Zhang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results across diverse domains demonstrate that the proposed model surpasses pre-training-based baselines, showcasing its ability to learn compact and semantically enriched representations while exhibiting superior scalability. Experiments are conducted on six spatial-temporal datasets from various domains. The results demonstrate that our model achieves superior downstream prediction accuracy compared to advanced self-supervised learning baselines. |
| Researcher Affiliation | Academia | The Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai 200092, China |
| Pseudocode | No | The paper describes the methodology using textual explanations and architectural diagrams (Figure 2, 3, 4) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code https://github.com/zhuoshu/ST-ReP |
| Open Datasets | Yes | Six datasets from three different domains are used for the experiments. (1) Transportation domain: PEMS04, PEMS08 (Song et al. 2020a,b), and CA (Liu et al. 2023). (2) Climate domain: Temperature, Humidity (Rasp et al. 2020). (3) Energy domain: SDWPF (Zhou et al. 2022). |
| Dataset Splits | No | The paper mentions using 'training set', 'validation set', and 'test set' but does not specify the exact percentages or absolute sample counts for these splits for the main datasets. It mentions 'a small fraction of the representation samples (0.93% to 5.5%) as training data for downstream tasks' but this is for downstream training, not the primary dataset splits. |
| Hardware Specification | Yes | All experiments are conducted on a Linux server with one Intel(R) Xeon(R) Gold 5220 CPU @ 2.20 GHz and one 32GB NVIDIA Tesla V100-SXM2 GPU card. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as Python, PyTorch, or other libraries used for implementation, beyond mentioning a 'Linux server'. |
| Experiment Setup | Yes | We use Huber loss (Huber 1992) in this paper. Furthermore, the total loss is a linear combination of these three components: Ltotal = αLrecon + βLpred + γLMS, where α, β, and γ = 1 − α − β are the weights of the three parts. The batch size is fixed to 32. |
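The reported training objective combines a reconstruction loss, a prediction loss, and a multi-scale loss, each computed with the Huber loss, with the third weight tied to the first two via γ = 1 − α − β. A minimal sketch of that combination in plain Python follows; the function names, the `delta` threshold, and the example weight values are illustrative assumptions, not taken from the paper or its code release.

```python
def huber_loss(pred, target, delta=1.0):
    """Elementwise Huber loss: quadratic for small errors, linear beyond delta."""
    err = abs(pred - target)
    if err <= delta:
        return 0.5 * err ** 2
    return delta * (err - 0.5 * delta)

def total_loss(l_recon, l_pred, l_ms, alpha=0.4, beta=0.4):
    """Linear combination L_total = alpha*L_recon + beta*L_pred + gamma*L_MS,
    with gamma = 1 - alpha - beta so the three weights sum to 1."""
    gamma = 1.0 - alpha - beta
    return alpha * l_recon + beta * l_pred + gamma * l_ms
```

Tying γ to α and β removes one hyperparameter from the search and keeps the overall loss scale fixed regardless of how the weights are split.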