UNSURE: self-supervised learning with Unknown Noise level and Stein's Unbiased Risk Estimate

Authors: Julián Tachella, Mike Davies, Laurent Jacques

ICLR 2025

Reproducibility assessment. Each variable below is followed by the assessed result and the supporting LLM response, quoting the paper where applicable.
Research Type: Experimental
Evidence: "Throughout a series of experiments, we show that the proposed estimator outperforms other existing self-supervised methods on various imaging inverse problems." (Abstract) ... "We show the performance of the proposed loss in various inverse problems and compare it with state-of-the-art self-supervised methods. All our experiments are performed using the deep inverse library (Tachella et al., 2023b)." (Section 5, Experiments)
Researcher Affiliation: Academia
Evidence: Julián Tachella, CNRS & ENS Lyon, Lyon, France; Mike Davies, University of Edinburgh, Edinburgh, UK; Laurent Jacques, UCLouvain, Louvain-la-Neuve, Belgium.
Pseudocode: Yes
Evidence: "Algorithm 1: UNSURE loss."
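Algorithm 1 itself is not reproduced in this report. As a rough, hedged sketch of the idea behind a SURE-style loss with an unknown noise level treated as a Lagrange multiplier updated by gradient ascent with momentum (using the hyperparameter names α, µ, τ quoted from the paper; the toy linear denoiser, the finite-difference divergence estimator, and the standalone update loop below are illustrative assumptions, not the paper's Algorithm 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(y):
    # Toy linear "denoiser" (shrinkage toward the mean), for illustration only;
    # the paper trains a U-Net backbone instead.
    return 0.8 * y + 0.2 * y.mean()

def mc_divergence(f, y, tau=0.01):
    # Hutchinson-style Monte Carlo estimate of div f(y) via a finite difference
    # with step tau (the tau = 0.01 reported in the experiments).
    b = rng.standard_normal(y.shape)
    return float(b.ravel() @ (f(y + tau * b) - f(y)).ravel()) / tau

def sure_like_loss(f, y, sigma2):
    # SURE-style objective: mean residual plus a divergence term weighted by
    # the (here unknown) noise variance sigma2.
    n = y.size
    residual = np.sum((f(y) - y) ** 2) / n
    return residual + 2.0 * sigma2 * mc_divergence(f, y) / n

# Treat the unknown noise variance eta as a multiplier updated by gradient
# *ascent* with momentum (alpha = 0.01, mu = 0.9 as reported); in training,
# the network weights would simultaneously be updated by descent.
y = rng.standard_normal((32, 32)) * 0.3  # synthetic noisy image
eta, vel, alpha, mu = 0.0, 0.0, 0.01, 0.9
for _ in range(50):
    grad_eta = 2.0 * mc_divergence(denoiser, y) / y.size  # d loss / d eta
    vel = mu * vel + alpha * grad_eta
    eta = eta + vel
loss = sure_like_loss(denoiser, y, eta)
```

Because the divergence of the toy denoiser is positive, the ascent drives eta upward, mimicking how the multiplier absorbs the unknown noise level.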
Open Source Code: Yes
Evidence: "Code associated to this paper is available at github.com/tachella/unsure."
Open Datasets: Yes
Evidence: "Gaussian denoising on MNIST: We evaluate the proposed loss for different noise levels σ ∈ {0.05, 0.1, 0.2, 0.3, 0.4, 0.5} of Gaussian noise on the MNIST dataset." (Section 5, MNIST denoising) ... "Colored Gaussian noise on DIV2K: We evaluate the performance of the proposed method on correlated noise on the DIV2K dataset (Zhang et al., 2017)..." (Section 5, Colored Gaussian noise on DIV2K) ... "Computed tomography with Poisson-Gaussian noise on LIDC: We evaluate a tomography problem where (resized) images of 128×128 pixels taken from the LIDC dataset..." (Section 5, Computed tomography with Poisson-Gaussian noise on LIDC) ... "Accelerated magnetic resonance imaging with FastMRI: We evaluate a single-coil 2× accelerated MRI problem using a subset of 900 images of the FastMRI dataset for training and 100 for testing (Chen et al., 2021)..." (Section 5, Accelerated magnetic resonance imaging with FastMRI)
Dataset Splits: Yes
Evidence: "We train all the models on 900 noisy patches of 128×128 pixels extracted from the training set and test on the full validation set which contains images of more than 512×512 pixels." (Section 5, Colored Gaussian noise on DIV2K) ... "using a subset of 900 images of the FastMRI dataset for training and 100 for testing (Chen et al., 2021)..." (Section 5, Accelerated magnetic resonance imaging with FastMRI)
Hardware Specification: No
Notes: No specific hardware details (such as GPU models, CPU types, or memory amounts) are mentioned in the paper, which covers software, datasets, and experimental parameters but not the underlying hardware.
Software Dependencies: No
Evidence: "All our experiments are performed using the deep inverse library (Tachella et al., 2023b). We use the AdamW optimizer for optimizing network weights θ with step size 5×10⁻⁴ and default momentum parameters and set α = 0.01, µ = 0.9 and τ = 0.01 for computing the UNSURE loss in Algorithm 1." (Section 5, Experiments) ... "We use the U-Net architecture of the deep inverse library (Tachella et al., 2023b) with no biases and an overall skip-connection as a backbone network in all our experiments, only varying the number of scales of the network across experiments." (Appendix F, Experimental Details)
Notes: Although software such as the deep inverse library and the AdamW optimizer is mentioned, the specific version numbers required for reproducibility are not provided.
Experiment Setup: Yes
Evidence: "We use the AdamW optimizer for optimizing network weights θ with step size 5×10⁻⁴ and default momentum parameters and set α = 0.01, µ = 0.9 and τ = 0.01 for computing the UNSURE loss in Algorithm 1." (Section 5, Experiments) ... "We use the U-Net architecture of the deep inverse library (Tachella et al., 2023b) with no biases and an overall skip-connection as a backbone network in all our experiments, only varying the number of scales of the network across experiments." (Appendix F, Experimental Details) ... "MNIST denoising: We use the U-Net architecture with 3 scales." (Appendix F, Experimental Details) ... "DIV2K denoising: We use the U-Net architecture with 4 scales." (Appendix F, Experimental Details) ... "Computed Tomography on LIDC: We use an unrolled proximal gradient algorithm with 4 iterations and no weight-tying across iterations. The denoiser is set as the U-Net architecture with 2 scales." (Appendix F, Experimental Details) ... "Accelerated MRI on FastMRI: We use an unrolled half-quadratic splitting algorithm with 7 iterations and no weight-tying across iterations. The denoiser is set as the U-Net architecture with 2 scales." (Appendix F, Experimental Details)
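The reported optimizer configuration can be sketched in PyTorch (the tiny convolutional stand-in network and mean-squared placeholder loss below are assumptions for illustration; the paper trains a deepinv U-Net backbone with the UNSURE loss, not this model):

```python
import torch

# Hypothetical stand-in network; the paper uses the deepinv U-Net backbone.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 3, padding=1),
)

# AdamW with the step size reported in the paper (5e-4) and default momentum
# parameters (betas=(0.9, 0.999) in PyTorch's defaults).
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

x = torch.randn(4, 1, 32, 32)
loss = torch.mean((model(x) - x) ** 2)  # placeholder loss, not UNSURE
loss.backward()
optimizer.step()
optimizer.zero_grad()
```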