Heavy-Tailed Diffusion Models

Authors: Kushagra Pandey, Jaideep Pathak, Yilun Xu, Stephan Mandt, Michael Pritchard, Arash Vahdat, Morteza Mardani

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we show that our t-EDM and t-Flow outperform standard diffusion models in heavy-tail estimation on high-resolution weather datasets in which generating rare and extreme events is crucial. Through extensive experiments on the HRRR dataset (Dowell et al., 2022), we train both unconditional and conditional versions of these models. The results show that standard EDM struggles to capture tails and extreme events, whereas t-EDM performs significantly better in modeling such phenomena.
Researcher Affiliation | Collaboration | Kushagra Pandey1,2, Jaideep Pathak1, Yilun Xu1, Stephan Mandt2, Michael Pritchard1,2, Arash Vahdat1, Morteza Mardani1; 1NVIDIA, 2University of California, Irvine; EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Training (t-EDM) ... Algorithm 2: Sampling (t-EDM) (on page 7), and Algorithm 3: Training (t-Flow) ... Algorithm 4: Sampling (t-Flow) (on page 30).
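A minimal sketch of what a t-EDM training step could look like, assuming Algorithm 1 follows the EDM training recipe of Karras et al. (2022) with the Gaussian perturbation replaced by Student-t noise with ν degrees of freedom (the paper's core idea); the `denoiser` callable, the log-normal noise-level sampling, and the loss weighting are placeholders, not the paper's exact choices.

```python
import numpy as np

def t_edm_training_step(x0, denoiser, nu=5.0, p_mean=-1.2, p_std=1.2, rng=None):
    """One hedged t-EDM training step: perturb clean data x0 with
    heavy-tailed Student-t noise at a randomly sampled noise level,
    then regress the denoiser back onto x0."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = float(np.exp(rng.normal(p_mean, p_std)))  # EDM-style log-normal noise level
    eps = rng.standard_t(df=nu, size=x0.shape)        # Student-t noise (Gaussian in standard EDM)
    x_noisy = x0 + sigma * eps
    pred = denoiser(x_noisy, sigma)
    weight = (sigma**2 + 1.0) / sigma**2              # placeholder weighting
    return weight * float(np.mean((pred - x0) ** 2))
```

Swapping `rng.standard_t` for `rng.standard_normal` recovers an ordinary EDM-style step, which is the comparison the paper's experiments draw.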
Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code, nor does it include a link to a code repository.
Open Datasets | Yes | We adopt the High-Resolution Rapid Refresh (HRRR) (Dowell et al., 2022) dataset, which is an operational archive of the US km-scale forecasting model.
Dataset Splits | Yes | We only use data for the years 2019–2020 for training (17.4k samples) and the data for 2021 (8.7k samples) for testing; data before 2019 are avoided owing to non-stationarities associated with periodic version changes of the HRRR.
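The quoted counts are consistent with one sample per hour (2019 + 2020 = 8760 + 8784 = 17544 ≈ 17.4k hours; 2021 = 8760 ≈ 8.7k hours). A minimal sketch of the described temporal split, assuming timestamped samples; the function name is illustrative, not from the paper.

```python
from datetime import datetime, timedelta

def split_by_year(timestamps):
    """Split timestamped samples into train (2019-2020) and test (2021),
    mirroring the paper's split; pre-2019 data is dropped entirely
    owing to HRRR version changes."""
    train = [t for t in timestamps if t.year in (2019, 2020)]
    test = [t for t in timestamps if t.year == 2021]
    return train, test
```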
Hardware Specification | Yes | Model training is distributed across 4 DGX nodes, each with 8 A100 GPUs, with a total batch size of 512.
Software Dependencies | No | The paper mentions several techniques and architectures (e.g., DDPM++, Heun's method) and implies the use of common deep learning frameworks, but it does not specify any software dependencies with version numbers.
Experiment Setup | Yes | We adopt the same training hyperparameters from Karras et al. (2022) for training all models. Model training is distributed across 4 DGX nodes, each with 8 A100 GPUs, with a total batch size of 512. We train all models for a maximum budget of 60 Mimg... We summarize our experimental setup in more detail for unconditional modeling in Table 5. Table 5 also details specific hyperparameters: σ_data = 1.0, σ_max = 80, σ_min = 0.002, NFE = 18, ρ = 7, along with ν, π_mean, and π_std.
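Since the setup quotes the standard EDM schedule parameters (σ_max = 80, σ_min = 0.002, ρ = 7), the implied sequence of sampling noise levels can be sketched with the Karras et al. (2022) schedule; the step count of 18 here is only an assumption tying it to the quoted NFE, since the exact steps-to-NFE mapping depends on the sampler (e.g. Heun's method) and is not reconstructed here.

```python
import numpy as np

def karras_sigma_schedule(n_steps=18, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Karras et al. (2022) noise-level schedule implied by the Table 5
    hyperparameters: rho-warped interpolation between sigma_max and sigma_min."""
    i = np.arange(n_steps)
    inv_rho = 1.0 / rho
    return (sigma_max**inv_rho
            + i / (n_steps - 1) * (sigma_min**inv_rho - sigma_max**inv_rho)) ** rho
```

Larger ρ concentrates steps near σ_min, spending more of the sampling budget on the low-noise regime.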