Tweedie Moment Projected Diffusions for Inverse Problems
Authors: Benjamin Boys, Mark Girolami, Jakiw Pidstrigach, Sebastian Reich, Alan Mosca, Omer Deniz Akyildiz
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, Section 6 will present experiments on Gaussian mixtures, image inpainting and super-resolution, demonstrating quantitative and qualitative improvements provided by TMPD. |
| Researcher Affiliation | Collaboration | Benjamin Boys EMAIL Department of Engineering, University of Cambridge, Cambridge, United Kingdom; Alan Mosca EMAIL n Plan |
| Pseudocode | Yes | Algorithm 1 TMPD-D (Ancestral sampling, VP) input y, σy |
| Open Source Code | Yes | The code for all of the experiments and instructions to run them are available at github.com/bb515/tmpdjax and github.com/bb515/tmpdtorch. |
| Open Datasets | Yes | We consider inpainting and super-resolution problems on the FFHQ 256 × 256 (Karras et al., 2019) and CIFAR-10 32 × 32 (Krizhevsky et al., 2009) datasets. |
| Dataset Splits | Yes | We use a DDPM sampler, on FFHQ 256 × 256 using 1k validation images... We next compare performance to TMPD across VP and VE-SDE samplers and a range of noise levels on CIFAR-10 64 × 64 using 1k validation images... We chose the DPS scale hyperparameter by optimising LPIPS, MSE, PSNR and SSIM on a validation set of 128 images (see Fig. 9 for an example). |
| Hardware Specification | Yes | BB gratefully acknowledges the EPSRC for funding this research through the EPSRC Centre for Doctoral Training in Future Infrastructure and Built Environment: Resilience in a Changing World (EPSRC grant reference number EP/S02302X/1); and the support of n Plan, and in particular Damian Borowiec and Peter A. Zachares, for the invaluable facilitation of work that was completed whilst on internship with n Plan and access to A100 GPUs. |
| Software Dependencies | No | The paper mentions 'github.com/bb515/tmpdjax and github.com/bb515/tmpdtorch' which implies the use of JAX and PyTorch, but no specific version numbers for these or any other software dependencies are provided in the text. |
| Experiment Setup | Yes | We use 1000 timesteps for the time-discretization. For the Markov chain methods we use DDPM and for the SDE methods we use an Euler-Maruyama discretization... For super-resolution, we use a downsampling ratio of 4 (256 × 256 → 64 × 64) and bicubic interpolation; for box mask inpainting we mask out a 128 × 128 region and for random mask inpainting we choose a random mask for each image masking between 30% and 70% of the pixels. Images are normalized to the range [0, 1] and it is on this scale that we add Gaussian measurement noise with standard deviation σy ∈ {0.01, 0.05, 0.1, 0.2}... we set the DDIM hyperparameter η = 1.0... We chose the DPS scale hyperparameter by optimising LPIPS, MSE, PSNR and SSIM on a validation set of 128 images (see Fig. 9 for an example). We found that static thresholding (clipping the denoised image estimate to a range [0, 1] at each sampling step) is critical for the stability and performance of both DPS-D and ΠGDM-D. |
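The quoted setup describes two reproducible pieces: the super-resolution forward model (downsample by 4, add Gaussian noise with standard deviation σy on the [0, 1] scale) and the static-thresholding step. A minimal NumPy sketch of both is below; it is not the authors' code (that lives at github.com/bb515/tmpdjax and github.com/bb515/tmpdtorch), and it uses block averaging as a simple stand-in for the bicubic interpolation the paper specifies.

```python
import numpy as np

def degrade(image, ratio=4, sigma_y=0.05, rng=None):
    """Hypothetical super-resolution forward model from the quoted setup:
    downsample by `ratio` (block averaging here, in place of bicubic) and
    add Gaussian measurement noise with std `sigma_y` on the [0, 1] scale."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape
    low = image.reshape(h // ratio, ratio, w // ratio, ratio, c).mean(axis=(1, 3))
    return low + sigma_y * rng.normal(size=low.shape)

def static_threshold(x0_hat):
    """Static thresholding: clip the denoised estimate to [0, 1] at each
    sampling step (reported as critical for DPS-D and ΠGDM-D stability)."""
    return np.clip(x0_hat, 0.0, 1.0)

# A 256 × 256 image maps to a 64 × 64 noisy measurement, matching the paper.
img = np.random.default_rng(0).uniform(size=(256, 256, 3))
y = degrade(img)
print(y.shape)  # → (64, 64, 3)
```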