Breaking Silos: Adaptive Model Fusion Unlocks Better Time Series Forecasting
Authors: Zhining Liu, Ze Yang, Xiao Lin, Ruizhong Qiu, Tianxin Wei, Yada Zhu, Hendrik Hamann, Jingrui He, Hanghang Tong
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of TIMEFUSE in various long-/short-term forecasting tasks, achieving near-universal improvement over the state-of-the-art individual models. ... We conduct extensive experiments to evaluate the effectiveness of TIMEFUSE, covering long-term and short-term forecasting, including 16 real-world benchmarks and 13 base forecasting models. |
| Researcher Affiliation | Collaboration | 1University of Illinois Urbana-Champaign 2IBM Research 3Stony Brook University. Correspondence to: Zhining Liu <EMAIL>, Hanghang Tong <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 summarizes the main procedure of TIMEFUSE. ... Algorithm 1 TIMEFUSE |
| Open Source Code | Yes | Code is available at https://github.com/ZhiningLiu1998/TimeFuse. |
| Open Datasets | Yes | For long-term forecasting, we evaluate our method on seven widely-used benchmarks, including the ETT datasets (with 4 subsets: ETTh1, ETTh2, ETTm1, ETTm2), Weather, Electricity, and Traffic, following prior studies (Wang et al., 2024a; Wu et al., 2023; 2021). For short-term forecasting, we use PeMS (Chen et al., 2001), which encompasses four public traffic network datasets (PEMS03/04/07/08), along with the EPF (Lago et al., 2021a) datasets for electricity price forecasting on five major power markets (NP, PJM, BE, FR, DE) spanning six years each. ... UCI Electricity Load Time Series Dataset. https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014. Traffic Dataset. http://pems.dot.ca.gov/. |
| Dataset Splits | Yes | (1) ETT (Zhou et al., 2021) ... The train/val/test is 12/4/4 months. ... (3) Electricity (ecl) ... The train/val/test is 15/3/4 months. ... Table 7. Dataset detailed descriptions. The dataset size is organized in (Train, Validation, Test). Example for ETTm1: (34465, 11521, 11521). |
| Hardware Specification | Yes | All experiments are conducted on a single NVIDIA A100 80GB GPU. ... All runtime results are collected from a Linux server with NVIDIA V100-32GB GPU. |
| Software Dependencies | No | We use Pytorch (Paszke et al., 2019) to implement the fusor... All models are trained for 10 epochs using an ADAM optimizer (Kingma, 2014) with L2 loss... The paper mentions Pytorch and ADAM optimizer but does not specify their version numbers. |
| Experiment Setup | Yes | all models are trained for 10 epochs using an ADAM optimizer (Kingma, 2014) with L2 loss; we also perform early stopping with a patience of 3 based on validation set loss to prevent overfitting. ... The fusor is optimized using the ADAM (Kingma, 2014) optimizer and Huber loss, with a batch size of 32 and a learning rate of 1e-3. |
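The experiment setup above specifies early stopping with a patience of 3 on the validation loss over at most 10 epochs. A minimal sketch of that stopping rule is given below; the `EarlyStopper` class name and its interface are my own, not taken from the TIMEFUSE codebase.

```python
class EarlyStopper:
    """Early-stopping rule as described in the paper's setup:
    halt training once validation loss has failed to improve
    for `patience` consecutive epochs (patience=3 in the paper)."""

    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")   # best validation loss seen so far
        self.bad_epochs = 0        # epochs since last improvement

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Example: improvement in epoch 2, then three non-improving epochs
# trigger the stop on epoch 5.
stopper = EarlyStopper(patience=3)
for epoch, val_loss in enumerate([1.0, 0.8, 0.9, 0.85, 0.95], start=1):
    if stopper.step(val_loss):
        print(f"stopping at epoch {epoch}")
        break
```

In the paper's configuration, this check would run after each of the (at most) 10 training epochs, with `val_loss` computed on the held-out validation split listed in the dataset-splits row above.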