Efficient Traffic Prediction Through Spatio-Temporal Distillation
Authors: Qianru Zhang, Xinyi Gao, Haixin Wang, Siu Ming Yiu, Hongzhi Yin
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments verify that LightST significantly speeds up traffic flow predictions by 5X to 40X compared to state-of-the-art spatio-temporal GNNs, all while maintaining superior accuracy. |
| Researcher Affiliation | Academia | Qianru Zhang 1, Xinyi Gao 2, Haixin Wang 3, Siu-Ming Yiu 1*, Hongzhi Yin 2 — 1 The University of Hong Kong, 2 The University of Queensland, 3 University of California, Los Angeles |
| Pseudocode | Yes | The training process of our LightST is elaborated in Algorithm 1 in Appendix A.1. |
| Open Source Code | Yes | Our codes are available at: https://github.com/lizzyhku/TP/tree/main. |
| Open Datasets | Yes | In this study, we conduct a series of experiments using real-life traffic flow datasets from California, specifically the PEMS3, PEMS4, PEMS7, PEMS8 and PeMS-Bay datasets released by (Song et al. 2020). |
| Dataset Splits | No | The paper names the datasets used but does not explicitly provide training/validation/test splits. It states "The traffic data is aggregated into 5-minute time intervals, resulting in 12 points of data per hour," but gives no splitting methodology. |
| Hardware Specification | Yes | We conduct the experiments on a server with 10 cores of Intel(R) Core(TM) i9-9820X CPU @ 3.30GHz, 64.0GB RAM, and 4 Nvidia GeForce RTX 3090 GPUs. |
| Software Dependencies | No | The paper describes the models and architectures (GNNs, TCNs, MLPs) but does not list specific software dependencies or version numbers (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | Yes | We present our results on PeMSD8 and PeMSD3 datasets in terms of MAE and RMSE in Figure 4. We summarize our observations as follows: 1) Figure 4 shows the effect of the number of MLP layers (ranging over {1, 2, 3, 4, 5}) and varying batch size (ranging over {2^3, 2^4, 2^5, 2^6, 2^7}) on performance. Our framework, LightST, achieves the best performance on PeMSD8 and PeMSD3 when the number of layers is 3 and the batch size is 32. ... 2) λ1, λ2 serve as loss weights to control how strongly our prediction-level and embedding-level distillation terms restrict the joint model training. |
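The loss weights λ1 and λ2 quoted above can be illustrated with a minimal sketch of a weighted distillation objective. This is an assumption-laden illustration, not the paper's implementation: the function name `distillation_loss` and the use of MSE for all three terms are hypothetical choices; the paper's exact loss forms are given in the original work.

```python
import numpy as np

def distillation_loss(student_pred, teacher_pred, student_emb, teacher_emb,
                      target, lam1=0.5, lam2=0.5):
    """Combine a supervised task loss with two distillation terms:
    a prediction-level term (student mimics teacher outputs) and an
    embedding-level term (student mimics teacher representations).
    lam1 and lam2 play the role of the loss weights described in the paper;
    MSE is used here purely for illustration."""
    task_loss = np.mean((student_pred - target) ** 2)       # supervised term
    pred_kd = np.mean((student_pred - teacher_pred) ** 2)   # prediction-level
    emb_kd = np.mean((student_emb - teacher_emb) ** 2)      # embedding-level
    return task_loss + lam1 * pred_kd + lam2 * emb_kd
```

Larger λ1/λ2 values pull the student (MLP) closer to the teacher (spatio-temporal GNN) at the output and representation levels respectively, while smaller values let the supervised term dominate.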