Non-stationary Diffusion For Probabilistic Time Series Forecasting

Authors: Weiwei Ye, Zhuopeng Xu, Ning Gui

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on nine real-world and synthetic datasets demonstrate the superior performance of NsDiff compared to existing approaches.
Researcher Affiliation | Academia | School of Computer Science and Engineering, Central South University, Changsha, China.
Pseudocode | Yes | Algorithm 1 (Training) and Algorithm 2 (Inference).
Open Source Code | Yes | Code is available at https://github.com/wwy155/NsDiff.
Open Datasets | Yes | Nine popular real-world datasets with diverse characteristics are selected, including Electricity (ECL), ILI, ETT{h1, h2, m1, m2}, Exchange Rate (EXG), Traffic, and Solar Energy (Solar). Dataset sources: https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014, https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html, http://pems.dot.ca.gov/, http://www.nrel.gov/grid/solar-power-data.html
Dataset Splits | Yes | For dataset splits, we follow previous time series prediction works (Wu et al., 2022; Li et al., 2024b): the ETT datasets are split 12/4/4 months for train/val/test, while others are split 7:1:2.
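The 7:1:2 ratio split quoted above can be sketched in a few lines (a minimal chronological-split illustration; `split_series` is a hypothetical helper, not the paper's code, and the ETT datasets instead use fixed 12/4/4-month windows):

```python
def split_series(n_points, ratios=(0.7, 0.1, 0.2)):
    """Chronological 7:1:2 train/val/test split used for most datasets.

    Hypothetical helper illustrating the split described above; time
    order is preserved, so validation and test always follow training.
    """
    n_train = int(n_points * ratios[0])
    n_val = int(n_points * ratios[1])
    return (
        list(range(0, n_train)),                 # train indices
        list(range(n_train, n_train + n_val)),   # validation indices
        list(range(n_train + n_val, n_points)),  # test indices
    )

# Example: a series of 1000 time steps yields 700/100/200 points.
train, val, test = split_series(1000)
```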
Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments.
Software Dependencies | No | The paper mentions using PyTorch and NumPy but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | All experiments are run with seeds {1, 2, 3} for 10 epochs. We use the best result on the validation set to evaluate the model on the test set. The learning rate is set to 0.001, with a batch size of 32 and the number of diffusion timesteps T = 20, consistent with prior work (Rasul et al., 2021). We employ a linear noise schedule with β1 = 10⁻⁴ and βT = 0.02, in line with the setup used in conventional DDPM (Ho et al., 2020).
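The quoted noise schedule (T = 20, β1 = 10⁻⁴, βT = 0.02) can be reproduced with a short NumPy sketch of the standard linear DDPM schedule from Ho et al. (2020); this is an illustration of the hyperparameters above, not the paper's exact implementation:

```python
import numpy as np

def linear_beta_schedule(T=20, beta_1=1e-4, beta_T=0.02):
    """Linear DDPM noise schedule with the hyperparameters quoted above."""
    betas = np.linspace(beta_1, beta_T, T)  # beta_1, ..., beta_T
    alphas = 1.0 - betas
    # Cumulative products give the bar-alpha_t terms used in the
    # closed-form forward process q(x_t | x_0).
    alpha_bars = np.cumprod(alphas)
    return betas, alpha_bars

betas, alpha_bars = linear_beta_schedule()
```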