Is Noise Conditioning Necessary for Denoising Generative Models?
Authors: Qiao Sun, Zhicheng Jiang, Hanhong Zhao, Kaiming He
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide a theoretical analysis of the error caused by removing noise conditioning and demonstrate that our analysis aligns with empirical observations. We further introduce a noise-unconditional model that achieves a competitive FID of 2.23 on CIFAR-10, significantly narrowing the gap to leading noise-conditional models. We hope our findings will inspire the community to revisit the foundations and formulations of denoising generative models. |
| Researcher Affiliation | Academia | Qiao Sun * 1 Zhicheng Jiang * 1 Hanhong Zhao * 1 Kaiming He 1 1MIT. Correspondence to: Qiao Sun <EMAIL>, Zhicheng Jiang <jzc EMAIL>, Hanhong Zhao <EMAIL>, Kaiming He <EMAIL>. |
| Pseudocode | No | The paper describes algorithms and mathematical formulations using equations (e.g., Eq. 1, Eq. 4, Eq. 5, etc.) and descriptive text, but it does not include any explicitly labeled pseudocode blocks or algorithm boxes. |
| Open Source Code | No | The paper does not contain an explicit statement or a direct link indicating that the source code for their methodology is publicly released or available in supplementary materials. It mentions re-implementations but not code release: "For a fair comparison, all methods are based on our re-implementation as faithful as possible (see Appendix B.3)". |
| Open Datasets | Yes | Our main experiments are on class-unconditional generation on CIFAR-10 (Krizhevsky et al., 2009), with extra results on ImageNet 32×32 (Deng et al., 2009) and FFHQ 64×64 (Karras et al., 2019). We also use the AFHQ-v2 dataset in Figure 3. |
| Dataset Splits | Yes | Our main experiments are on class-unconditional generation on CIFAR-10 (Krizhevsky et al., 2009), with extra results on ImageNet 32×32 (Deng et al., 2009) and FFHQ 64×64 (Karras et al., 2019). For evaluation of the generative models, we calculate FID (Heusel et al., 2017) between 50,000 generated images and all available real images without any augmentation. For the CIFAR-10 dataset, we have N = 50,000 and d = 3 × 32² = 3072. |
| Hardware Specification | Yes | We implement our main code base using Google TPU and the JAX (Bradbury et al., 2018) platform, and run most of our experiments on TPU v2 and v3 cores. For FFHQ 64×64 experiments, we directly use the code provided by Karras et al. (2022) and run it on 8 H100 GPUs. |
| Software Dependencies | No | The paper mentions using the JAX (Bradbury et al., 2018) platform and specific optimizers like Adam (Kingma & Ba, 2015) and RAdam (Liu et al., 2020), but it does not provide specific version numbers for JAX or any other software libraries or packages. |
| Experiment Setup | Yes | Hyperparameters. A table of selected important hyperparameters can be found in Table 5. For ICM and ECM we use the RAdam (Liu et al., 2020) optimizer, while for all other models we use the Adam (Kingma & Ba, 2015) optimizer. Also, we set the parameter β2 to 0.95 to stabilize the training process. Table 5 (Selected important hyperparameters in our main experiments) columns: Experiment, Duration, Warmup Epochs, Batch Size, Learning Rate, EMA Schedule, EMA Half-life Images, Dropout. |
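The one nonstandard optimizer setting the paper reports is β2 = 0.95 (versus the usual Adam default of 0.999), chosen to stabilize training. A minimal, self-contained sketch of a single Adam update (Kingma & Ba, 2015) makes the role of β2 concrete; this is an illustrative pure-Python implementation, not the paper's JAX code, and all names and default values other than β2 = 0.95 are assumptions.

```python
def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.95, eps=1e-8):
    """One scalar Adam update. A smaller beta2 (0.95 here, as in the
    paper, vs. the common default 0.999) shortens the memory of the
    second-moment estimate, which can stabilize noisy training."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# Example: one step from param=0.0 with gradient 1.0.
p, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
```

After bias correction, the first step moves the parameter by roughly the learning rate regardless of gradient scale, which is the usual Adam behavior; β2 only changes how quickly the denominator adapts on later steps.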