Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Free Hunch: Denoiser Covariance Estimation for Diffusion Models Without Extra Costs

Authors: Severi Rissanen, Markus Heinonen, Arno Solin

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We validate our method using synthetic Gaussian mixture model data and compare it against baselines on linear imaging inverse problems. Our experiments demonstrate that our more sophisticated covariance approximations reduce bias and improve results, particularly at lower diffusion step counts. We experiment on ImageNet 256×256 (Deng et al., 2009) with an unconditional denoiser from Dhariwal & Nichol (2021). We evaluate the models on four linear inverse problems: Gaussian deblurring, motion deblurring, random inpainting, and super-resolution. We evaluate our models with peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM; Wang et al., 2004) and learned perceptual image patch similarity (LPIPS; Zhang et al., 2018) on the ImageNet test set. We use the same set of 1000 randomly selected images for all models.
Researcher Affiliation Academia Severi Rissanen (Aalto University), Markus Heinonen (Aalto University), Arno Solin (Aalto University)
Pseudocode Yes Algorithm 1: Time update. Input: Σ_{0|t}(x), σ(t+Δt), σ(t), µ_{0|t}(x). Algorithm 2: Space update. Input: Σ_{0|t}(x_t), µ_{0|t}(x+Δx), µ_{0|t}(x), σ(t), Δx. Algorithm 3: Free Hunch for Linear Inverse Problems (Euler solver, with diffusion parameters from Karras et al., 2022). Algorithm 4: Free Hunch Guidance Class, applicable with any solver.
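The "space update" row above can be illustrated with a minimal sketch. It assumes the second-order Tweedie identity that the denoiser Jacobian equals Σ_{0|t}(x)/σ², which is what makes a local linearization of the posterior mean possible; the function name and interface are hypothetical, not the authors' implementation.

```python
import numpy as np

def space_update(mu, Sigma, sigma, dx):
    """Sketch of a space update in the spirit of Algorithm 2.

    Under the second-order Tweedie identity, d mu_{0|t}/dx = Sigma_{0|t}(x) / sigma^2,
    so a first-order expansion gives:
        mu_{0|t}(x + dx) ~= mu_{0|t}(x) + (Sigma_{0|t}(x) / sigma^2) @ dx
    All names here are illustrative assumptions, not the paper's code.
    """
    return mu + (Sigma / sigma**2) @ dx
```

For example, with Σ = σ²·I the update simply shifts the posterior mean by dx, matching the intuition that at an isotropic covariance the denoiser moves one-to-one with its input.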
Open Source Code Yes Code for our approach is available at https://github.com/AaltoML/free-hunch.
Open Datasets Yes We experiment on ImageNet 256×256 (Deng et al., 2009) with an unconditional denoiser from Dhariwal & Nichol (2021).
Dataset Splits Yes We use the same set of 1000 randomly selected images for all models. We used 100 samples from the ImageNet validation set for tuning, and used these parameters for all experiments.
Hardware Specification Yes We acknowledge CSC IT Center for Science, Finland, for awarding this project access to the LUMI supercomputer, owned by the EuroHPC Joint Undertaking, hosted by CSC (Finland) and the LUMI consortium through CSC. The sweep to obtain the results in Table 1 was done with multiple NVIDIA V100 GPUs in a few hours, and can be reproduced with a single V100 in less than a day of compute.
Software Dependencies No Our custom PyTorch implementation uses GPU acceleration and adjusts solver tolerance based on noise levels.
Experiment Setup Yes We use a linear schedule σ(t) = t, as advocated by Karras et al. (2022), and otherwise follow their settings for our image diffusion models as well. We used σmax = 80 for the image experiments and σmax = 20 for the synthetic data. We use a simple Euler sampler for the synthetic data experiments and a 2nd-order Heun method (Karras et al., 2022) for the image experiments. We solve the inverse in Eq. (23) using conjugate gradient, following Peng et al. (2024). We use a noise level σy = 0.1 for all measurement models (data scaled to [-1, 1]).
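The simple Euler sampler with the linear schedule σ(t) = t mentioned in the setup row can be sketched as below. This is a minimal NumPy illustration assuming a Karras-style σ grid (ρ = 7) and a denoiser interface D(x, σ) ≈ E[x₀ | x_t]; function names and defaults are assumptions, not the released code.

```python
import numpy as np

def euler_sampler(denoiser, x_init, sigma_max=80.0, sigma_min=0.002, num_steps=40):
    """Plain Euler sampler for a diffusion model with linear schedule sigma(t) = t.

    `denoiser(x, sigma)` is assumed to return the posterior mean E[x0 | x_t]
    (Karras et al., 2022 conventions). With sigma(t) = t, the probability-flow
    ODE reduces to dx/dsigma = (x - D(x, sigma)) / sigma.
    """
    # Karras-style noise-level grid interpolated in sigma^(1/rho) space.
    rho = 7.0
    steps = np.arange(num_steps)
    sigmas = (sigma_max ** (1 / rho)
              + steps / (num_steps - 1)
              * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
    sigmas = np.append(sigmas, 0.0)  # final step lands exactly at sigma = 0

    x = x_init * sigmas[0]  # start from noise at the largest noise level
    for i in range(num_steps):
        sigma = sigmas[i]
        d = (x - denoiser(x, sigma)) / sigma      # ODE drift at this noise level
        x = x + d * (sigmas[i + 1] - sigma)       # Euler step to the next level
    return x
```

With an identity denoiser the drift vanishes and the sample is unchanged; with a zero denoiser the trajectory telescopes to zero, which is a quick sanity check on the step arithmetic.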