AGEM: Solving Linear Inverse Problems via Deep Priors and Sampling

Authors: Bichuan Guo, Yuxing Han, Jiangtao Wen

NeurIPS 2019

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate competitive results for signal denoising, image deblurring and image devignetting. Our approach outperforms the state-of-the-art DAE-based methods on all three tasks." (Section 5, Experimental results) "We compare our approach with state-of-the-art DAE-based methods, including DMSP, DAEP, and ADMM, on various noise-blind tasks: signal denoising, image deblurring and image devignetting. For each task, we train a single DAE and use it to evaluate all methods, so that they compete fairly." Table 1: signal denoising, average RMSE on the test set. Table 3: average PSNR for image deblurring. Table 4: average PSNR for image devignetting. Ablation study: "We study the behavior of AGEM in detail under the settings of the previous experiment."
Researcher Affiliation | Academia | Bichuan Guo, Tsinghua University; Yuxing Han, South China Agricultural University; Jiangtao Wen, Tsinghua University
Pseudocode | Yes | Algorithm 1: Estimate latent signal x and noise level Σ with the proposed methods AGEM and AGEM-ADMM. τ is the EM iteration number, initialized as 0; Σ^(1) is initialized as σ_tr^2 I.
  1: Train a DAE with quadratic loss and noise η ~ N(0, σ_tr^2 I)
  2: repeat τ ← τ + 1
  3:   Initialization: if τ = 1, x_τ^(1) ← 0; otherwise x_τ^(1) ← x_{τ-1}^(n_MH)
  4:   E-step: draw n_MH samples {x_τ^(i)}_{i=1}^{n_MH} with MALA; discard the first 1/5 of the samples as burn-in
  5:   M-step: use {x_τ^(i)}_{i=n_MH/5}^{n_MH} to compute Σ^(τ+1)
  6: until τ = n_EM
  7: [AGEM] compute x̂ ← average of {x_τ^(i)}_{i=n_MH/5}^{n_MH}; return (x̂, Σ^(n_EM))
  8: [AGEM-ADMM] use ADMM with noise level Σ^(n_EM) to compute x̂; return (x̂, Σ^(n_EM))
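The EM loop of Algorithm 1 can be sketched on a toy problem. This is a minimal illustration, not the paper's implementation: the forward operator is the identity, a known Gaussian prior stands in for the DAE-induced prior, and all sizes, step sizes, and iteration counts are illustrative. The E-step draws samples with MALA (Metropolis-adjusted Langevin algorithm), the M-step re-estimates the noise level from the retained samples, and the final [AGEM] estimate is the posterior-mean average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: y = x_true + noise, identity forward operator, isotropic
# Gaussian prior standing in for the DAE-derived prior of the paper.
n, sigma_true, sigma_prior = 200, 0.7, 1.0
x_true = rng.normal(0.0, sigma_prior, n)
y = x_true + rng.normal(0.0, sigma_true, n)       # observed noisy signal

def log_post(x, s2):
    """Unnormalized log posterior log p(x | y, Sigma = s2 * I)."""
    return -np.sum((y - x) ** 2) / (2 * s2) - np.sum(x ** 2) / (2 * sigma_prior**2)

def grad_log_post(x, s2):
    return (y - x) / s2 - x / sigma_prior**2

def mala(x0, s2, n_mh, step):
    """E-step: draw n_mh samples with the Metropolis-adjusted Langevin algorithm."""
    x, chain = x0.copy(), []
    for _ in range(n_mh):
        fwd = x + 0.5 * step**2 * grad_log_post(x, s2)       # Langevin drift
        prop = fwd + step * rng.normal(size=n)
        bwd = prop + 0.5 * step**2 * grad_log_post(prop, s2)
        log_a = (log_post(prop, s2) - log_post(x, s2)
                 + (np.sum((prop - fwd) ** 2) - np.sum((x - bwd) ** 2)) / (2 * step**2))
        if np.log(rng.uniform()) < log_a:                    # MH accept/reject
            x = prop
        chain.append(x.copy())
    return chain

n_em, n_mh, step = 8, 500, 0.35
s2, x_init = 1.0, np.zeros(n)                # Sigma^(1) init; x_1^(1) = 0
for tau in range(n_em):
    chain = mala(x_init, s2, n_mh, step)
    kept = chain[n_mh // 5:]                 # discard first 1/5 as burn-in
    s2 = np.mean([np.mean((y - s) ** 2) for s in kept])   # M-step noise update
    x_init = chain[-1]                       # warm-start the next chain

x_hat = np.mean(kept, axis=0)                # [AGEM] posterior-mean estimate
print(round(float(np.sqrt(s2)), 2))
```

On this toy model the estimated noise level approaches the true σ and the posterior-mean estimate x̂ improves on the raw observation y, which is the behavior the algorithm's M-step and sample average are designed to produce.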
Open Source Code | No | "Our code and all simulated datasets will be made available online."
Open Datasets | Yes | "We perform image deblurring with the STL-10 unlabeled dataset [10], which contains 10^5 colored 96×96 images. [...] We perform image devignetting with the CelebA dataset [42], which contains 0.2 million 218×178 colored face images, and a predefined train/val/test split."
Dataset Splits | Yes | Signal denoising: among 6000 samples, 1000 samples are selected as the validation set and another 1000 as the test set; the rest are used for DAE training. Image deblurring: the last 400 images are selected, with the first/second half used as the validation/test set. Image devignetting: the first 100 images from the predefined val/test split are used as the validation/test set.
Hardware Specification | Yes | "We implement and train DAEs using PyTorch [33]; all experiments were run on an Ubuntu server with two Titan X GPUs."
Software Dependencies | No | The paper names PyTorch [33] but specifies no version or other dependency: "We implement and train DAEs using PyTorch [33]; all experiments were run on an Ubuntu server with two Titan X GPUs."
Experiment Setup | Yes | All DAEs are trained by SGD with momentum 0.9 under the L2 reconstruction loss; early stopping is based on validation loss. For testing, n_EM and n_MH are set to sufficiently large values for stable convergence. Signal denoising: the DAE is a multilayer perceptron with ReLU activations and 3 hidden layers of 2000 neurons each; it is trained for 500 epochs with noise σ_tr = 0.01 and learning rate 0.1; n_EM = 10, n_MH = 1000, and σ_prop is chosen by a grid search on [0.001, 0.5]. Image deblurring: trained for 250 epochs with noise σ_tr = 0.02 and learning rate 0.01; n_EM = 10, n_MH = 300, σ_prop = 0.02. Image devignetting: trained for 125 epochs with noise σ_tr = 0.02 and learning rate 0.1; n_EM = 10, n_MH = 200, σ_prop = 0.02.
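The DAE training recipe above (ReLU MLP, SGD with momentum 0.9, L2 reconstruction loss on inputs corrupted with N(0, σ_tr² I) noise) can be sketched in NumPy. This is a deliberately shrunken stand-in, not the paper's setup: one hidden layer instead of three 2000-unit layers, synthetic low-dimensional signals, and illustrative sizes and learning rates throughout.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "clean" signals lying on a 2-D subspace, so denoising is learnable.
d, h, sigma_tr, lr, mom = 8, 32, 0.1, 1e-3, 0.9
X = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, d))

W1 = rng.normal(0, np.sqrt(2 / d), (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, np.sqrt(2 / h), (h, d)); b2 = np.zeros(d)
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]

def forward(Xn):
    H = np.maximum(Xn @ W1 + b1, 0.0)       # ReLU hidden layer
    return H, H @ W2 + b2

losses = []
for epoch in range(300):
    noisy = X + sigma_tr * rng.normal(size=X.shape)  # corrupt with N(0, sigma_tr^2 I)
    H, out = forward(noisy)
    losses.append(float(np.mean((out - X) ** 2)))    # L2 reconstruction loss
    err = 2 * (out - X) / X.size                     # gradient of the mean loss
    gW2, gb2 = H.T @ err, err.sum(0)
    dH = (err @ W2.T) * (H > 0)                      # backprop through ReLU
    gW1, gb1 = noisy.T @ dH, dH.sum(0)
    for p, g, v in zip([W1, b1, W2, b2], [gW1, gb1, gW2, gb2], vel):
        v *= mom; v -= lr * g                        # SGD-with-momentum update
        p += v

print(losses[0] > losses[-1])
```

The key detail this mirrors from the paper's setup is that the loss compares the reconstruction of the *noisy* input against the *clean* target, which is what makes the trained network a denoiser rather than an identity map.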