TimeMixer++: A General Time Series Pattern Machine for Universal Predictive Analysis
Authors: Shiyu Wang, Jiawei Li, Xiaoming Shi, Zhou Ye, Baichuan Mo, Wenze Lin, Shengtong Ju, Zhixuan Chu, Ming Jin
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To verify the effectiveness of the proposed TIMEMIXER++ as a general time series pattern machine, we perform extensive experiments across 8 well-established analytical tasks, including (1) long-term forecasting, (2) univariate and (3) multivariate short-term forecasting, (4) imputation, (5) classification, (6) anomaly detection, as well as (7) few-shot and (8) zero-shot forecasting. Overall, as summarized in Figure 1, TIMEMIXER++ consistently surpasses contemporary state-of-the-art models in a range of critical time series analysis tasks, which is demonstrated by its superior performance across 30 well-known benchmarks and against 27 advanced baselines. |
| Researcher Affiliation | Academia | ¹Griffith University, ²The Hong Kong University of Science and Technology (Guangzhou), ³Massachusetts Institute of Technology, ⁴Zhejiang University, ⁵The State Key Laboratory of Blockchain and Data Security, ⁶Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security |
| Pseudocode | No | The paper describes the methodology using textual explanations and figures (Figure 2, 5, 6, 7, 8, 9) but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or figures. |
| Open Source Code | Yes | The source code and pretrained model will be provided on GitHub (https://github.com/kwuking/TimeMixer). |
| Open Datasets | Yes | We evaluate the performance of different models for long-term forecasting on 8 well-established datasets, including Weather, Traffic, Electricity, Exchange, Solar-Energy, and the ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2). Furthermore, we adopt the PeMS and M4 datasets for short-term forecasting. Specifically, we used 10 multivariate datasets from the UEA Time Series Classification Archive (2018) for the evaluation of classification tasks. For anomaly detection, we selected datasets such as SMD (2019), SWaT (2016), PSM (2021), MSL, and SMAP (2018). |
| Dataset Splits | Yes | Table 8: Dataset detailed descriptions. The dataset size is organized as (Train, Validation, Test). For example, ETTm1: 7 dimensions; prediction lengths {96, 192, 336, 720}; dataset size (34465, 11521, 11521); frequency 15 min; 0.46; Temperature. |
| Hardware Specification | Yes | All experiments were run three times, implemented in PyTorch (Paszke et al., 2019), and conducted on multiple NVIDIA A100 80GB GPUs. |
| Software Dependencies | No | The paper mentions 'implemented in PyTorch (Paszke et al., 2019)' but does not specify a concrete version number for PyTorch or any other software component. |
| Experiment Setup | Yes | We set the initial learning rate in the range 10⁻³ to 10⁻¹ and used the ADAM optimizer (Kingma & Ba, 2015) with L2 loss for model optimization. The batch size was set to 512. We set the number of resolutions K to range from 1 to 5, and the number of MixerBlocks L to range from 1 to 3. |
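The hyperparameter ranges reported in the experiment-setup row can be enumerated as a simple grid. The sketch below is illustrative only: the paper does not publish its tuning script, the `hyperparameter_grid` helper is hypothetical, and the three log-spaced learning-rate values are an assumption about how the stated 10⁻³ to 10⁻¹ range was sampled.

```python
import itertools

# Ranges quoted from the paper's experiment setup:
# initial learning rate in [1e-3, 1e-1], batch size 512,
# number of resolutions K in 1..5, number of MixerBlocks L in 1..3.
learning_rates = [1e-3, 1e-2, 1e-1]  # assumed log-spaced grid over the stated range
batch_size = 512
resolutions_K = range(1, 6)
mixer_blocks_L = range(1, 4)

def hyperparameter_grid():
    """Yield one config dict per combination (hypothetical helper)."""
    for lr, k, l in itertools.product(learning_rates, resolutions_K, mixer_blocks_L):
        yield {"lr": lr, "batch_size": batch_size, "K": k, "L": l}

configs = list(hyperparameter_grid())
print(len(configs))  # 3 lr values x 5 K values x 3 L values = 45 combinations
```

Enumerating the grid this way makes the reported search space concrete: 45 (lr, K, L) combinations, each trained with batch size 512 under ADAM with L2 loss.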