Time-Frequency Disentanglement Boosted Pre-Training: A Universal Spatio-Temporal Modeling Framework
Authors: Yudong Zhang, Zhaoyang Sun, Xu Wang, Xuan Yu, Kai Wang, Yang Wang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on real-world datasets demonstrate that USTC significantly outperforms the advanced baselines in forecasting, imputation, and extrapolation across cities. We conduct extensive experiments on four real-world datasets, evaluating USTC on spatio-temporal forecasting, imputation, and extrapolation tasks. |
| Researcher Affiliation | Academia | 1 University of Science and Technology of China (USTC), Hefei, China 2 Suzhou Institute of Advanced Research, USTC, Suzhou, China 3 State Key Laboratory of Precision and Intelligent Chemistry, USTC, Hefei, China {zyd2020@mail., sunzhaoyang@mail., wx309@, yx2024@mail., zaizwk@mail., angyan@}ustc.edu.cn |
| Pseudocode | No | The paper describes the methodology in text and through architectural diagrams (Figure 1 and Figure 2), but it does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide a link to a code repository or mention code in supplementary materials. |
| Open Datasets | Yes | Four real-world widely used datasets are employed to evaluate our proposed framework, including PEMS-BAY, METR-LA [Li et al., 2018], Chengdu, and Shenzhen. These datasets comprise several months of traffic flow information, with the statistics listed in Table 1. |
| Dataset Splits | Yes | The dataset is divided into three parts: pre-training data from three cities, few-shot fine-tuning data, and testing data from the other city. We use the comprehensive data from three cities for pre-training and select one city's data for both few-shot fine-tuning and testing. For instance, if Shenzhen is the city chosen for fine-tuning, the complete datasets from PEMS-BAY, METR-LA, and Chengdu are used for pre-training. A three-day dataset from Shenzhen is allocated for few-shot fine-tuning, while the rest of the data in Shenzhen is reserved for testing. We use 1-day historical data to predict future 1-hour data. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or memory) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | No | The paper specifies task details like prediction horizons and missing data ratios (e.g., 'predicting the future 1-hour data based on 1-day historical data', 'randomly masking observed data with a ratio of 30%'), and metrics (MAE, RMSE). However, it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) in the main text. |
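The cross-city split quoted under "Dataset Splits" (pre-train on three cities' full data, few-shot fine-tune on three days of the fourth city, test on its remainder, with a 1-day history and 1-hour horizon) can be sketched as below. This is an illustrative reconstruction, not the paper's code: the dictionary-of-arrays layout, the `cross_city_split` and `make_windows` helpers, and the 5-minute sampling interval (standard for PEMS-BAY and METR-LA) are assumptions.

```python
import numpy as np

# Assumed 5-minute sampling: 288 steps per day, 12 steps per hour.
STEPS_PER_DAY, STEPS_PER_HOUR = 288, 12

def cross_city_split(datasets, target_city, finetune_days=3):
    """Split as described in the paper: pre-train on the other cities'
    complete data, few-shot fine-tune on the first `finetune_days` of the
    target city, test on the target city's remaining data."""
    pretrain = {c: x for c, x in datasets.items() if c != target_city}
    target = datasets[target_city]
    cut = finetune_days * STEPS_PER_DAY
    return pretrain, target[:cut], target[cut:]

def make_windows(series, history=STEPS_PER_DAY, horizon=STEPS_PER_HOUR):
    """Slide 1-day-history / 1-hour-horizon windows over a (T, N) array."""
    xs, ys = [], []
    for t in range(len(series) - history - horizon + 1):
        xs.append(series[t:t + history])
        ys.append(series[t + history:t + history + horizon])
    return np.stack(xs), np.stack(ys)

# Toy stand-in data: 10 days of readings from 5 sensors per city (synthetic).
rng = np.random.default_rng(0)
cities = {c: rng.random((10 * STEPS_PER_DAY, 5))
          for c in ["PEMS-BAY", "METR-LA", "Chengdu", "Shenzhen"]}
pretrain, finetune, test = cross_city_split(cities, "Shenzhen")
x, y = make_windows(finetune)
print(sorted(pretrain), finetune.shape, test.shape, x.shape, y.shape)
```

With Shenzhen as the target, the other three cities go to pre-training, the fine-tuning slice covers 3 x 288 = 864 steps, and each training window pairs 288 history steps with 12 target steps.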