Spatio-temporal Partial Sensing Forecast of Long-term Traffic
Authors: Zibo Liu, Zhe Jiang, Zelin Xu, Tingsong Xiao, Zhengkun Xiao, Yupu Zhang, Haibo Wang, Shigang Chen
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on several real-world traffic datasets demonstrate its superior performance. |
| Researcher Affiliation | Academia | Zibo Liu, Zhe Jiang, Zelin Xu, Tingsong Xiao, Zhengkun Xiao, Yupu Zhang, and Shigang Chen: Department of Computer & Information Science & Engineering, University of Florida. Haibo Wang: Department of Computer Science, University of Kentucky. |
| Pseudocode | No | The paper describes the model components and training process using mathematical equations and textual descriptions, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Our source code is at https://github.com/zbliu98/SLPF |
| Open Datasets | Yes | We use five widely used public traffic flow datasets: METR-LA, PEMS-BAY, PEMS03, PEMS04, and PEMS08. The datasets are provided in the STSGCN GitHub repository at https://github.com/Davidham3/STSGCN/ and the DCRNN GitHub repository at https://github.com/liyaguang/DCRNN. |
| Dataset Splits | Yes | We split each dataset into three subsets in 3:1:1 ratio for training, validation, and testing. |
| Hardware Specification | Yes | Experiments were conducted on a server with AMD EPYC 7742 64-Core Processor @ 2.25 GHz, 500 GB of RAM, and NVIDIA A100 GPU with 80 GB memory. |
| Software Dependencies | No | The paper mentions using AdamW as the optimizer, CNN layers, an MLP, and the ReLU activation function, but it does not specify version numbers for general software dependencies such as Python, PyTorch, TensorFlow, or CUDA. |
| Experiment Setup | Yes | For the embedding parameters, N_dow = 7, N_tod = 288, and the dimension is d = 64. We use two layers of CNN with a residual connection as the MLP structure in each of the three steps in Fig. 3. The input passes through one CNN layer, the ReLU activation function (Agarap, 2018), a dropout layer (Srivastava et al., 2014) with a 0.15 dropout rate, and then the second CNN layer. A residual connection (Szegedy et al., 2017) from the original input is then added to produce the final output. α in the aggregation step is 0.5. During training, we set the batch size to 64, the learning rate to 10^-3, and the weight decay to 10^-3 for all datasets. The optimizer is AdamW (Loshchilov & Hutter, 2019). We use Mean Absolute Error (MAE) as the loss function. |
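The 3:1:1 train/validation/test partitioning reported above can be sketched as a simple chronological split. This is a minimal illustration of the ratio only; the exact windowing in the released SLPF code may differ.

```python
def split_311(n):
    """Chronological 3:1:1 split of n samples into train/val/test index
    ranges, matching the paper's reported dataset partitioning ratio."""
    n_train = round(n * 3 / 5)   # 60% for training
    n_val = round(n / 5)         # 20% for validation
    return (range(0, n_train),
            range(n_train, n_train + n_val),
            range(n_train + n_val, n))   # remaining 20% for testing

train_idx, val_idx, test_idx = split_311(1000)
print(len(train_idx), len(val_idx), len(test_idx))  # 600 200 200
```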
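The residual MLP block described in the experiment setup (CNN layer → ReLU → dropout 0.15 → CNN layer → residual add) can be sketched as follows. This is a NumPy sketch for illustration, assuming the CNN layers act as pointwise (1×1) convolutions over a d = 64 feature dimension; the kernel size and the function/variable names here are assumptions, not taken from the released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_mlp_block(x, w1, w2, drop_rate=0.15, training=False):
    """Two pointwise CNN layers with ReLU, dropout, and a residual
    connection, following the block description in the experiment setup.
    x: (num_nodes, d) features; w1, w2: (d, d) layer weights."""
    h = x @ w1                        # first CNN layer (pointwise)
    h = np.maximum(h, 0.0)            # ReLU activation
    if training:                      # inverted dropout, rate 0.15
        mask = rng.random(h.shape) >= drop_rate
        h = h * mask / (1.0 - drop_rate)
    h = h @ w2                        # second CNN layer
    return x + h                      # residual connection to the input

d = 64                                # embedding dimension from the paper
x = rng.standard_normal((10, d))
w1 = rng.standard_normal((d, d)) * 0.01
w2 = rng.standard_normal((d, d)) * 0.01
out = residual_mlp_block(x, w1, w2)
print(out.shape)  # (10, 64)
```

With zero weights the block reduces to the identity, which is the usual sanity check that the residual path is wired correctly.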