Removing Structured Noise using Diffusion Models

Authors: Tristan Stevens, Hans van Gorp, Faik C. Meral, Junseob Shin, Jason Yu, Jean-Luc Robert, Ruud van Sloun

TMLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental We demonstrate strong performance gains across various inverse problems with structured noise, outperforming competitive baselines using normalizing flows, adversarial networks and various posterior sampling methods for diffusion models. This opens up new opportunities and relevant practical applications of diffusion modeling for inverse problems in the context of non-Gaussian measurement models.
Researcher Affiliation Collaboration 1 Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands 2 Philips Research North America, Cambridge MA, USA
Pseudocode Yes Algorithm 1: Joint posterior sampling with ΠGDM for score-based diffusion models
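The paper's Algorithm 1 is only named here, not reproduced. As a rough illustration of the idea behind joint posterior sampling for structured noise, the sketch below runs a generic annealed, guidance-weighted Langevin sampler (not the authors' ΠGDM update) jointly over a signal x and a structured noise term n under a linear measurement model y = Ax + n. The analytic Gaussian score functions, the schedule, and the guidance weight are all toy assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                        # toy signal dimension
A = np.eye(d)                 # toy forward operator (identity for simplicity)
x_true = rng.normal(size=d)
n_true = 0.1 * rng.normal(size=d)
y = A @ x_true + n_true       # measurement corrupted by structured noise

# Toy analytic scores: the signal prior is N(0, I) and the noise prior is
# N(0, 0.1^2 I), so the sigma-perturbed scores have closed forms.
def score_signal(x, sigma):
    return -x / (1.0 + sigma**2)

def score_noise(n, sigma):
    return -n / (0.01 + sigma**2)

sigmas = np.geomspace(1.0, 0.01, 30)  # annealing schedule (assumed)
guid = 1.0                            # guidance weight (assumed)

x = rng.normal(size=d)
n = rng.normal(size=d)
err0 = np.linalg.norm(x - x_true)     # error of the random initialization
for sigma in sigmas:
    step = 0.5 * sigma**2
    for _ in range(5):
        # data-consistency gradients for the joint residual y - (A x + n)
        resid = y - (A @ x + n)
        grad_x = A.T @ resid
        grad_n = resid
        # Langevin updates: prior score + measurement guidance + noise
        x = x + step * (score_signal(x, sigma) + guid * grad_x) \
              + np.sqrt(2 * step) * rng.normal(size=d)
        n = n + step * (score_noise(n, sigma) + guid * grad_n) \
              + np.sqrt(2 * step) * rng.normal(size=d)

err = np.linalg.norm(x - x_true)      # error of the joint posterior sample
```

In the paper, the two analytic scores are replaced by the trained networks sθ (signal) and sϕ (noise), and the guidance term follows the ΠGDM pseudoinverse weighting rather than the plain residual gradient used here.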
Open Source Code Yes Code: https://github.com/tristan-deep/joint-diffusion
Open Datasets Yes In the experiment, the signal score network sθ is trained on the CelebA dataset (Liu et al., 2015) and the noise score network sϕ on the MNIST dataset, with 10000 and 27000 training samples, respectively. Images are resized to 64×64 pixels. We test on a randomly selected subset of 100 images.
Dataset Splits Yes In the experiment, the signal score network sθ is trained on the CelebA dataset (Liu et al., 2015) and the noise score network sϕ on the MNIST dataset, with 10000 and 27000 training samples, respectively. Images are resized to 64×64 pixels. We test on a randomly selected subset of 100 images.
Hardware Specification Yes benchmarks are performed on a single 12 GB NVIDIA GeForce RTX 3080 Ti, see Table 5 in Appendix B.
Software Dependencies No The paper mentions using specific architectures (NCSNv2, Glow, DCGAN) but does not provide version numbers for general software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup Yes Automatic hyperparameter tuning for optimal inference was performed for the proposed and all baseline methods on a small validation set of only 5 images (depending on the experiment as detailed in section 6). All parameters used for training and inference can be found in the provided code repository linked in the paper. A summary of the most important hyperparameters for each method can be found in Appendix C.
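The tuning protocol quoted above (pick inference hyperparameters by score on a tiny 5-image validation set) can be sketched as a simple grid search. Everything below is hypothetical scaffolding: the one-parameter denoiser, the synthetic "images", and the PSNR objective stand in for the paper's actual samplers and metrics (their real settings live in the linked repository and Appendix C).

```python
import numpy as np

rng = np.random.default_rng(1)

def psnr(ref, est, peak=1.0):
    # peak signal-to-noise ratio in dB
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

# Hypothetical one-parameter "inference method": shrink each noisy image
# toward its mean by a tunable factor lam (stands in for a real sampler).
def denoise(noisy, lam):
    return (1.0 - lam) * noisy + lam * noisy.mean()

# Tiny validation set of 5 synthetic 8x8 "images", as in the paper's
# 5-image tuning setup (random data stands in for real images).
clean = rng.uniform(0.0, 1.0, size=(5, 8, 8))
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# Grid search: average validation PSNR for each candidate hyperparameter.
grid = np.linspace(0.0, 1.0, 11)
scores = [np.mean([psnr(c, denoise(m, lam)) for c, m in zip(clean, noisy)])
          for lam in grid]
best_lam = grid[int(np.argmax(scores))]
```

The selected `best_lam` would then be fixed and used unchanged on the held-out test subset, so the 5 validation images never overlap the 100 test images.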