LightGTS: A Lightweight General Time Series Forecasting Model
Authors: Yihang Wang, Yuying Qiu, Peng Chen, Yang Shu, Zhongwen Rao, Lujia Pan, Bin Yang, Chenjuan Guo
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | It achieves state-of-the-art forecasting performance on 9 real-world benchmarks in both zero-shot and full-shot settings with much better efficiency compared with existing time series foundation models. |
| Researcher Affiliation | Collaboration | East China Normal University, Shanghai, China; Huawei Noah's Ark Lab, Shenzhen, China. Correspondence to: Chenjuan Guo <EMAIL>. |
| Pseudocode | No | The paper describes the methodology using mathematical equations and block diagrams (Figure 3) but does not include a structured pseudocode or algorithm block. |
| Open Source Code | Yes | Moreover, our code and pre-trained model checkpoints are available at https://github.com/decisionintelligence/LightGTS. |
| Open Datasets | Yes | We incorporate a diverse range of multi-source datasets for pre-training, which include portions from the Monash (Godahewa et al., 2021b), UEA (Bagnall et al., 2018), and UCR (Dau et al., 2019) time series datasets, as well as additional classic datasets (Zhang et al., 2017; Wang et al., 2024b; Liu et al., 2022; McCracken & Ng, 2016; Taieb et al., 2012). The complete list of pre-training datasets is shown in Table 7. It's important to note that there is no overlap between these pre-training datasets and the target datasets. We use the following 9 multivariate time series datasets for downstream forecasting tasks: the ETT datasets contain 7 variates collected from two different electric transformers from July 2016 to July 2018, consisting of four subsets, of which ETTh1/ETTh2 are recorded hourly and ETTm1/ETTm2 are recorded every 15 minutes. Electricity contains the electricity consumption of 321 customers from July 2016 to July 2019, recorded hourly. Solar collects production from 137 PV plants in Alabama, recorded every 10 minutes. Traffic contains road occupancy rates measured by 862 sensors on freeways in the San Francisco Bay Area from 2015 to 2016, recorded hourly. Weather collects 21 meteorological indicators, such as temperature and barometric pressure, for Germany in 2020, recorded every 10 minutes. Exchange Rate collects the daily exchange rates of 8 countries. |
| Dataset Splits | Yes | We split each evaluation dataset into train-validation-test sets and detailed statistics of evaluation datasets are shown in Table 8. ... Table 8. The statistics of evaluation datasets. ... # Split 6:2:2 ... # Split 7:1:2 |
| Hardware Specification | Yes | We implemented LightGTS using PyTorch (Paszke et al., 2019), and all experiments were conducted on an NVIDIA A8000 80GB GPU. |
| Software Dependencies | No | We implemented LightGTS using PyTorch (Paszke et al., 2019), and all experiments were conducted on an NVIDIA A8000 80GB GPU. The optimization was performed using the ADAM optimizer (Kingma & Ba, 2014) with an initial learning rate of 5×10⁻⁴. The paper mentions key software components like PyTorch and the ADAM optimizer but does not specify their version numbers. |
| Experiment Setup | Yes | The optimization was performed using the ADAM optimizer (Kingma & Ba, 2014) with an initial learning rate of 5×10⁻⁴. A learning rate decay strategy was applied using the StepLR scheduler to facilitate gradual reduction during pre-training. During pre-training, we use N = 10 as the number of historical tokens, K = 4 as the number of prediction tokens, P = 48 as the reference patch size, and the batch size is set to 8192. |
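The evaluation datasets above are split chronologically into train/validation/test sets with 6:2:2 or 7:1:2 ratios (Table 8). A minimal sketch of such a chronological split, with the function name and ratios chosen for illustration (only the split ratios come from the paper):

```python
import numpy as np

def chronological_split(series: np.ndarray, ratios=(0.6, 0.2, 0.2)):
    """Split a time series chronologically into train/val/test segments.

    Chronological (rather than random) splitting preserves temporal order,
    which is required for fair forecasting evaluation.
    """
    n = len(series)
    train_end = int(n * ratios[0])
    val_end = train_end + int(n * ratios[1])
    return series[:train_end], series[train_end:val_end], series[val_end:]

# Example: a toy series of 100 points split 6:2:2
data = np.arange(100)
train, val, test = chronological_split(data)
```

For the ETT datasets the paper uses the 6:2:2 ratio; the larger datasets use 7:1:2, which this sketch supports via `ratios=(0.7, 0.1, 0.2)`.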
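The experiment setup combines ADAM with an initial learning rate of 5×10⁻⁴ and StepLR decay. StepLR multiplies the learning rate by a factor `gamma` every `step_size` epochs; a plain-Python sketch of that schedule is below. Note that `step_size=10` and `gamma=0.5` are assumptions for illustration — the paper names the scheduler but not its hyperparameters.

```python
def step_lr(initial_lr: float, step_size: int, gamma: float, epoch: int) -> float:
    """Learning rate after `epoch` epochs under a StepLR schedule:
    lr = initial_lr * gamma ** (epoch // step_size)."""
    return initial_lr * (gamma ** (epoch // step_size))

initial_lr = 5e-4            # from the paper
step_size, gamma = 10, 0.5   # assumed values, not specified in the paper

schedule = [step_lr(initial_lr, step_size, gamma, e) for e in range(0, 30, 10)]
```

In PyTorch this corresponds to pairing `torch.optim.Adam(model.parameters(), lr=5e-4)` with `torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma)` and calling `scheduler.step()` once per epoch.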