Lossy Compression with Pretrained Diffusion Models

Authors: Jeremy Vonderfecht, Feng Liu

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We apply the DiffC algorithm (Theis et al., 2022) to Stable Diffusion 1.5, 2.1, XL, and Flux-dev, and demonstrate that these pretrained models are remarkably capable lossy image compressors. ... we obtain competitive rate-distortion curves against other purpose-built state-of-the-art generative compression methods... Figure 2 shows our primary results, evaluating the performance of our methods on Kodak and DIV2K images..."
Researcher Affiliation | Academia | "Jeremy Vonderfecht & Feng Liu, Department of Computer Science, Portland State University" (email addresses redacted)
Pseudocode | Yes | Algorithm 1: "Sending x0" (Ho et al., 2020); Algorithm 2: "Receiving"; Algorithm 3: "Optimal DiffC Timestep Schedule"; Algorithm 4: "PFR" (Theis & Ahmed, 2022)
Open Source Code | Yes | "We offer the first publicly available implementation of a DiffC compression protocol on GitHub": https://github.com/jeremyiv/diffc
Open Datasets | Yes | "Figure 1: Kodak images compressed using our method..."; "Figure 2: Rate-distortion curves for generative compression methods across three sets of images..."; "...evaluate the performance of our methods on Kodak and DIV2K images..."
Dataset Splits | No | "Our algorithm works zero-shot, requiring no additional training... For the Kodak dataset, X consists of all 24 images. For Div2K we choose a random sample of 30 images." The paper evaluates on the full Kodak dataset and a 30-image random sample of DIV2K, but reports no train/validation/test splits, since the zero-shot method requires no model training.
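The evaluation-set selection described above (all 24 Kodak images plus a random sample of 30 DIV2K images) can be sketched as follows. File names, the choice of DIV2K split, and the random seed are illustrative assumptions; the paper specifies none of them.

```python
import random

# All 24 Kodak images (standard file naming; assumed for illustration).
kodak = [f"kodim{i:02d}.png" for i in range(1, 25)]

# One DIV2K split -- the validation set has 100 images (file names assumed).
div2k_all = [f"{i:04d}.png" for i in range(1, 101)]

# Random sample of 30 DIV2K images; the seed is hypothetical, since the
# paper does not state one.
rng = random.Random(0)
div2k_sample = rng.sample(div2k_all, 30)

print(len(kodak), len(div2k_sample))  # 24 30
```

Fixing a seed is the usual way to make such a random sample reproducible; without a published seed or image list, a re-run would evaluate a different 30-image subset.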
Hardware Specification | Yes | "Table 1: Parameter count and average encoding and decoding times (in seconds) on an A40 GPU for Kodak/Div2k-1024 images."
Software Dependencies | No | "The TensorFlow implementation of reverse-channel coding from MIRACLE (Havasi et al., 2019) takes about 140 ms per 16-bit chunk of our data with an A40 GPU. Our custom CUDA kernel can avoid these memory requirements." The paper mentions PyTorch, TensorFlow, and CUDA but does not specify version numbers for these software components.
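A back-of-envelope calculation shows why the 140 ms per 16-bit chunk quoted above makes reverse-channel coding speed the practical bottleneck, and thus motivates a faster custom CUDA kernel. The bitrate below is purely illustrative (not a figure from the paper); only the per-chunk timing comes from the quoted text.

```python
# Quoted figure from the review above: 140 ms per 16-bit chunk (TensorFlow RCC).
ms_per_chunk = 140
bits_per_chunk = 16

# Hypothetical example: a 768x512 Kodak image at an assumed 0.25 bits/pixel.
example_bits = 768 * 512 * 0.25      # 98,304 bits total
chunks = example_bits / bits_per_chunk  # 6,144 chunks of 16 bits each
seconds = chunks * ms_per_chunk / 1000

print(round(seconds))  # 860 -- over 14 minutes for a single image
```

At that rate, even a modest bitrate implies minutes of reverse-channel coding per image, which is why per-chunk latency dominates end-to-end encoding time.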
Experiment Setup | Yes | "We found that a prompt guidance scale near 1 was optimal for communicating the noisy latent, and denoising with a guidance scale around 5 was optimal for maximizing CLIP scores. For simplicity we follow the probability flow with a standard 50-step DDIM scheduler (Song et al., 2020)."
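For readers unfamiliar with the setup quoted above, a "standard 50-step DDIM scheduler" subsamples the diffusion model's 1000 training timesteps into 50 evenly spaced inference steps. A minimal sketch, assuming the common "leading" spacing from Song et al. (2020); the function name is illustrative, not from the paper's code:

```python
def ddim_timesteps(num_train_steps: int = 1000, num_inference_steps: int = 50):
    """Evenly spaced DDIM inference timesteps, in descending order."""
    step = num_train_steps // num_inference_steps  # 1000 // 50 = 20
    # Evenly spaced timesteps [0, 20, ..., 980], traversed from noisiest to cleanest.
    return list(range(0, num_train_steps, step))[::-1]

schedule = ddim_timesteps()
print(len(schedule), schedule[0], schedule[-1])  # 50 980 0
```

Each denoising step of the probability-flow ODE is then evaluated at these timesteps, with guidance scale ~1 while communicating the noisy latent and ~5 during the final denoising pass, per the quoted setup.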