Auto-Regressive Moving Diffusion Models for Time Series Forecasting

Authors: Jiaxin Gao, Qinglong Cao, Yuntian Chen

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments conducted on seven widely used datasets demonstrate that our model achieves state-of-the-art performance, significantly outperforming existing diffusion-based TSF models.
Researcher Affiliation Academia 1Shanghai Jiao Tong University, Shanghai, China 2Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, Zhejiang, China EMAIL; EMAIL; EMAIL
Pseudocode Yes
Algorithm 1: Training.
Require: Maximum number of diffusion steps T, which also represents the length of the historical/future series; predefined coefficients α_{0:T}.
1: repeat
2: Sample X^0_{1:T} from the training set;
3: Sample t ~ Uniform({1, 2, ..., T});
4: Generate the diffused sample X^t_{1-t:T-t} using Equation (2), and calculate the evolution trend z^t using Equation (3);
5: Use the devolution network R(·) to generate the predicted sample X̂^0(X^t, t, θ) using Equation (5), and obtain the predicted evolution trend ẑ(t, θ) using Equation (6);
6: Calculate the loss L_θ using Equation (7);
7: Update the devolution network R(·) of ARMD by taking a gradient descent step on ∇_θ L;
8: until converged.
Algorithm 2: Sampling/Forecasting.
Require: Historical series X^T_{-T+1:0}; trained devolution network R(·); sampling interval Δt; predefined coefficients α_{0:T}.
1: for t = T to 0 by Δt do
2: Obtain X̂^0(X^t, t, θ) using X^t_{1-t:T-t} and t with the devolution network R(·), and calculate the corresponding evolution trend ẑ(t, θ) using Equation (6);
3: Update X^t_{1-t:T-t} using Equation (10);
4: end for
5: Output the prediction of X^0_{1:T}.
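The two algorithms can be sketched as a pair of Python loops. This is a minimal illustration only: `devolution_network` stands in for the paper's learned network R(·) (here a single scalar parameter θ), the shift-based `diffuse` mimics ARMD's treatment of the series' own evolution as the diffusion process, and the sampling update is a simplified stand-in for the paper's Equation (10).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 96  # number of diffusion steps = historical/future series length

# Hypothetical stand-in for the devolution network R(.): a single scalar theta.
def devolution_network(x_t, t, theta):
    # Predicts X^0 from the diffused sample X^t and the step index t.
    return theta * x_t

def diffuse(series, t):
    # In ARMD the "diffused" sample X^t is the window slid t steps into the
    # past, so diffusion follows the series' own evolution (sketch only).
    return series[T - t : 2 * T - t]

# Training sketch (Algorithm 1): gradient descent on theta with an MSE loss.
series = rng.standard_normal(2 * T)   # toy series covering history + future
x0 = series[T : 2 * T]                # future window X^0_{1:T}
theta, lr = 1.0, 0.1
for _ in range(200):
    t = int(rng.integers(1, T + 1))   # t ~ Uniform({1, ..., T})
    x_t = diffuse(series, t)          # diffused sample X^t
    x0_hat = devolution_network(x_t, t, theta)
    grad = np.mean(2.0 * (x0_hat - x0) * x_t)  # d(MSE)/d(theta)
    theta -= lr * grad

# Sampling/forecasting sketch (Algorithm 2): start from the historical
# window X^T and step t from T toward 0 with sampling interval dt.
dt = 8
x = series[0:T]                       # historical series X^T_{-T+1:0}
for t in range(T, 0, -dt):
    x = devolution_network(x, t, theta)  # simplified devolution update
```

After the loop, `x` is the length-96 forecast of X^0_{1:T}; the real model replaces the scalar map with a trained network and the DDIM-style update of Equation (10).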
Open Source Code Yes Code https://github.com/daxin007/ARMD
Open Datasets Yes ARMD is evaluated on seven widely used benchmark datasets, including Solar Energy (Lai et al. 2018), Exchange (Lai et al. 2018), Stock (Yoon, Jarrett, and Van der Schaar 2019), and four ETT datasets (Zhou et al. 2021).
Dataset Splits No For all datasets, the historical length and prediction length are both set to 96. Following the evaluation methodology employed in a previous study (Zhou et al. 2021), we calculate the mean squared error (MSE) and mean absolute error (MAE) on z-score normalized data, enabling a consistent assessment of various variables.
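The evaluation protocol described above (MSE and MAE on z-score normalized data, following Zhou et al. 2021) can be illustrated as below. This is a sketch with synthetic arrays; in the actual protocol the normalization statistics come from the training split, not from the evaluated series.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.standard_normal(96) * 5.0 + 10.0         # raw future values
y_pred = y_true + rng.standard_normal(96) * 0.5       # toy forecast

# z-score normalization (statistics here taken from y_true for illustration;
# the cited protocol uses training-set statistics).
mu, sigma = y_true.mean(), y_true.std()
y_true_n = (y_true - mu) / sigma
y_pred_n = (y_pred - mu) / sigma

mse = np.mean((y_pred_n - y_true_n) ** 2)             # mean squared error
mae = np.mean(np.abs(y_pred_n - y_true_n))            # mean absolute error
```

Normalizing before scoring puts variables with different scales on equal footing, which is what makes the MSE/MAE values comparable across the seven datasets.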
Hardware Specification No The paper does not specify the hardware (GPU/CPU models, memory, or cloud instance types) used for its experiments; the acknowledgements mention only 'High Performance Computing Centers' in general terms.
Software Dependencies No The paper does not provide specific software dependency details with version numbers (e.g., library names like PyTorch, TensorFlow, or specific Python versions) needed to replicate the experiment.
Experiment Setup Yes For all datasets, the historical length and prediction length are both set to 96.