Faster Diffusion Sampling with Randomized Midpoints: Sequential and Parallel

Authors: Shivam Gupta, Linda Cai, Sitan Chen

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Section A, we provide empirical evaluation of the randomized midpoint algorithm. ... Experimental setup. For all of our experiments, we use one NVIDIA A100 GPU. We evaluate our (sequential) randomized midpoint scheduler (predictor step only), a deterministic midpoint scheduler (where the midpoint is the average of the start and end times of a step), the default DDIM scheduler, and the default DDPM scheduler on the following datasets: CIFAR-10 (generated image dimension: 32 × 32) and CelebA-HQ (generated image dimension: 256 × 256). ... Evaluation. The performance of our scheduler and the default DDIM scheduler is evaluated by comparing the Fréchet Inception Distance (FID) scores (Heusel et al., 2017), which measure the quality of generated samples relative to the target distribution. ... Our results can be found in Figure 1.
Researcher Affiliation | Academia | Shivam Gupta (UT Austin), Linda Cai (UC Berkeley), Sitan Chen (Harvard SEAS)
Pseudocode | Yes | Algorithm 1 PREDICTORSTEP (SEQUENTIAL), Algorithm 2 CORRECTORSTEP (SEQUENTIAL), Algorithm 3 SEQUENTIALALGORITHM, Algorithm 4 PREDICTORSTEP (SEQUENTIAL), Algorithm 5 CORRECTORSTEP (SEQUENTIAL), Algorithm 6 SEQUENTIALALGORITHM, Algorithm 7 PREDICTORSTEP (PARALLEL), Algorithm 8 CORRECTORSTEP (PARALLEL), Algorithm 9 PARALLELALGORITHM, Algorithm 10 RANDOMIZEDMIDPOINTMETHOD (Shen & Lee, 2019), Algorithm 11 LOGCONCAVESAMPLING (Shen & Lee, 2019)
Open Source Code | No | The paper refers to using "public pretrained DDPM models released by (Ho et al., 2020)" and provides links for those third-party models. It does not contain an explicit statement by the authors about releasing their own source code for the described methodology.
Open Datasets | Yes | We evaluate our (sequential) randomized midpoint scheduler (predictor step only), a deterministic midpoint scheduler (where the midpoint is the average of the start and end times of a step), the default DDIM scheduler, and the default DDPM scheduler on the following datasets: CIFAR-10 (generated image dimension: 32 × 32) and CelebA-HQ (generated image dimension: 256 × 256).
Dataset Splits | No | The paper mentions using the CIFAR-10 and CelebA-HQ datasets for evaluation with public pretrained models, but does not provide specific details on how these datasets were split for training, validation, or testing in its experimental setup.
Hardware Specification | Yes | Experimental setup. For all of our experiments, we use one NVIDIA A100 GPU.
Software Dependencies | No | The paper mentions using pytorch-fid for evaluation ("Specifically, we use pytorch-fid") but does not provide a specific version number or other software dependencies with version details.
Experiment Setup | No | The paper discusses the number of function evaluations (NFE) for different schedulers and mentions using public pretrained DDPM models, but it does not provide specific hyperparameter values, training configurations, or system-level settings for its own methods.
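The "randomized midpoint" idea named throughout the table can be illustrated on a toy ODE. The sketch below is not the paper's diffusion scheduler; the drift function, step size, and step count are illustrative assumptions. The key contrast is the one the paper's experiments draw: the deterministic baseline fixes the midpoint at the center of each step, while the randomized method draws the midpoint time uniformly at random, which makes the local discretization error unbiased.

```python
import math
import random

def randomized_midpoint_step(x, t, h, f, rng=None):
    """One predictor-style step of a randomized midpoint method
    (in the spirit of Shen & Lee, 2019): draw alpha ~ Uniform[0, 1]
    and evaluate the drift f at the random intermediate time t + alpha*h."""
    alpha = (rng or random).random()
    x_mid = x + alpha * h * f(x, t)          # Euler estimate of x(t + alpha*h)
    return x + h * f(x_mid, t + alpha * h)   # full step using the midpoint drift

def deterministic_midpoint_step(x, t, h, f):
    """Classical midpoint rule: the midpoint is fixed at t + h/2
    (the 'average of the start and end times of a step' baseline)."""
    x_mid = x + 0.5 * h * f(x, t)
    return x + h * f(x_mid, t + 0.5 * h)

# Toy drift: dx/dt = -x, with exact solution x(t) = x(0) * exp(-t).
f = lambda x, t: -x
rng = random.Random(0)
x_rand = x_det = 1.0
n, h = 20, 0.05
for i in range(n):
    x_rand = randomized_midpoint_step(x_rand, i * h, h, f, rng)
    x_det = deterministic_midpoint_step(x_det, i * h, h, f)
exact = math.exp(-n * h)
```

Both integrators land close to the exact solution on this toy problem; the paper's contribution concerns how the randomized variant's error scales for diffusion sampling, sequentially and in parallel.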
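For reference, the FID score used in the Evaluation quote is the Fréchet distance between two Gaussians fitted to Inception features of real and generated images (Heusel et al., 2017). The helper below is a simplified sketch for the diagonal-covariance case only; real FID computations (e.g. via pytorch-fid, as the paper uses) work with full covariance matrices and a matrix square root.

```python
def frechet_distance_diag(mu1, sig1, mu2, sig2):
    """Fréchet distance between two Gaussians with diagonal covariances.

    In general the distance is ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*(S1 S2)^(1/2)).
    When S1 and S2 are diagonal with per-dimension standard deviations
    sig1 and sig2, the trace term reduces to sum_i (sig1_i - sig2_i)^2.
    """
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum((a - b) ** 2 for a, b in zip(sig1, sig2))
    return mean_term + cov_term

# Identical distributions have distance 0; otherwise the mean and
# spread mismatches add up.
# frechet_distance_diag([0, 0], [1, 1], [0, 0], [1, 1]) -> 0
```

A lower FID means the generated-sample feature distribution is closer to that of the real data, which is why the report compares schedulers by their FID at matched NFE budgets.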