TimeBase: The Power of Minimalism in Efficient Long-term Time Series Forecasting

Authors: Qihe Huang, Zhengyang Zhou, Kuo Yang, Zhongchao Yi, Xu Wang, Yang Wang

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on diverse real-world datasets show that TimeBase achieves remarkable efficiency and secures competitive forecasting performance. Additionally, TimeBase can also serve as a very effective plug-and-play complexity reducer for any patch-based forecasting models. Code is available at https://github.com/hqh0728/TimeBase. ... In this section, we demonstrate the advantages of TimeBase in competitive forecasting performance, extremely light efficiency and very effective plug-and-play function.
Researcher Affiliation Academia 1. University of Science and Technology of China (USTC), Hefei, China; 2. Suzhou Institute for Advanced Research, USTC, Suzhou, China; 3. State Key Laboratory of Resources and Environmental Information System, Beijing, China. Qihe Huang <EMAIL>. Kuo Yang <EMAIL>. Zhongchao Yi <EMAIL>. Xu Wang <EMAIL>. Correspondence to: Zhengyang Zhou <EMAIL>, Yang Wang <EMAIL>.
Pseudocode No The paper describes the methodology using textual descriptions and mathematical equations (e.g., Eq. 1, 2, 3, 4, 5, 6, 7), but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block or figure.
Open Source Code Yes Code is available at https://github.com/hqh0728/TimeBase.
Open Datasets Yes We conduct experiments on 21 widely-used and publicly available real-world datasets, including 17 normal-scale benchmarks: ETTh1, ETTh2, ETTm1, ETTm2, Weather, Electricity, Traffic, Solar-Energy (Lai et al., 2018), Wind (Li et al., 2022), METR-LA (Li et al., 2017), Exchange Rate (Lai et al., 2018), ZafNoo (Poyatos et al., 2020), CzeLan (Poyatos et al., 2020), AQShunyi (Zhang et al., 2017), AQWan (Zhang et al., 2017), and 4 very large datasets: CA (4.52B), GLA (2.02B), GBA (1.24B), SD (0.38B) (Liu et al., 2024c).
Dataset Splits Yes Adhering to the established protocol in (Wu et al., 2021; Qiu et al., 2024; Liu et al., 2024c), we partition the datasets into training, validation, and test sets with a ratio of 6:2:2 for four ETT datasets, CA, GLA, GBA, SD, and 7:1:2 for the remaining datasets.
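The quoted split protocol (6:2:2 for the ETT and very large datasets, 7:1:2 for the rest, following Wu et al., 2021) can be sketched as below; the helper name and the integer-truncation rounding are our assumptions, not from the paper.

```python
def split_indices(n, ratios):
    """Chronological train/val/test split for a series of length n.

    ratios: (train, val, test) fractions, e.g. (0.6, 0.2, 0.2) for ETT-style
    datasets or (0.7, 0.1, 0.2) for the remaining benchmarks.
    """
    train_end = int(n * ratios[0])
    val_end = train_end + int(n * ratios[1])
    # Test set takes everything after the validation cut, preserving time order.
    return range(0, train_end), range(train_end, val_end), range(val_end, n)

# 6:2:2 split (ETT datasets, CA, GLA, GBA, SD)
train, val, test = split_indices(1000, (0.6, 0.2, 0.2))
# 7:1:2 split (remaining datasets)
train2, val2, test2 = split_indices(1000, (0.7, 0.1, 0.2))
```

Keeping the split chronological (rather than shuffling) matches the standard long-term forecasting evaluation protocol cited in the paper.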
Hardware Specification No The paper mentions that "The AI-driven experiments, simulations and model training are performed on the robotic AI-Scientist platform of Chinese Academy of Science." and refers to "GPU memory usage" and "CPU inference speed", but it does not specify the GPU or CPU models used.
Software Dependencies Yes We build TimeBase using PyTorch 1.13.0 (Paszke et al., 2019). The model is trained with the Adam optimizer (Kingma, 2014) with L2 loss over 30 epochs.
Experiment Setup Yes We build TimeBase using PyTorch 1.13.0 (Paszke et al., 2019). The model is trained with the Adam optimizer (Kingma, 2014) with L2 loss over 30 epochs. After the first three epochs, a learning rate decay of 0.8 is applied, and early stopping is employed with a patience threshold of five epochs. ... The segment length P is set to the natural period of the dataset (e.g., P = 24 for ETTh1), or respectively shorter when dealing with datasets that exhibit extremely long periods (e.g., P = 4 for Weather). We perform a grid search for TimeBase to find the optimal hyperparameters, specifically for the regularization parameter λorth = [0.00, 0.04, 0.08, 0.12, 0.16, 0.20] to accommodate variances between datasets, as well as the learning rate between 0.01 and 0.5. The loss function is MSE. ... max memory is recorded with a constant batch size of 12.
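The training schedule quoted above (constant learning rate for the first three epochs, then multiplied by 0.8 per epoch, with early stopping at patience 5) can be sketched as follows. This is one plausible reading of "a learning rate decay of 0.8 is applied" after epoch three; the exact schedule in the released code may differ, and both helper names are hypothetical.

```python
def lr_at_epoch(base_lr, epoch, decay=0.8, warmup_epochs=3):
    """Assumed schedule: base_lr for epochs 0..warmup_epochs-1,
    then multiplied by `decay` once per subsequent epoch."""
    if epoch < warmup_epochs:
        return base_lr
    return base_lr * decay ** (epoch - warmup_epochs + 1)

class EarlyStopping:
    """Stop when validation loss fails to improve for `patience` epochs."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training
```

In a training loop, the schedule and stopper would be queried once per epoch, e.g. `if stopper.step(val_loss): break`, with `lr_at_epoch(...)` written into the optimizer's parameter groups.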