Stochastic Deep Restoration Priors for Imaging Inverse Problems

Authors: Yuyang Hu, Albert Peng, Weijie Gan, Peyman Milanfar, Mauricio Delbracio, Ulugbek S. Kamilov

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type Experimental We numerically validate ShaRP on two inverse problems of the form y = Ax + e: (a) Compressive Sensing MRI (CS-MRI) and (b) Single Image Super-Resolution (SISR). In both cases, e represents additive white Gaussian noise (AWGN). For the data-fidelity term in eq. (2), we use the ℓ2-norm loss for both problems. Quantitative performance is evaluated by Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM).
Researcher Affiliation Collaboration Yuyang Hu 1, Albert Peng 1, Weijie Gan 1, Peyman Milanfar 2, Mauricio Delbracio 2, Ulugbek S. Kamilov 1. 1Washington University in St. Louis, 2Google. Correspondence to: Ulugbek S. Kamilov <EMAIL>.
Pseudocode Yes Algorithm 1: Stochastic Deep Restoration Priors (ShaRP); Algorithm 2: Supervised Training of CS-MRI Restoration Network; Algorithm 3: Self-Supervised Training of CS-MRI Restoration Network; Algorithm 4: Gaussian Deblurring Restoration Network Training; Algorithm 5: MRI Super-Resolution Network Training
Open Source Code No The paper does not provide concrete access to source code for the methodology described. It references third-party codebases used for training or test data, such as the official implementation of DDS2 (https://github.com/HJ-harry/DDS) and I2SB (https://github.com/NVlabs/I2SB), and a test set from DiffPIR (https://github.com/yuanzhi-zhu/DiffPIR/tree/main/testsets), but provides no specific link or statement about open-sourcing ShaRP's own implementation.
Open Datasets Yes We utilized the open-access fastMRI dataset; further experimental details can be found in Section B.1 of the supplementary material. We randomly selected 100 images from the ImageNet test set, as provided in DiffPIR.
Dataset Splits Yes We simulated multi-coil subsampled measurements using T2-weighted human brain MRI data from the open-access fastMRI dataset, which comprises 4,912 fully sampled multi-coil slices for training and 470 slices for testing. Each slice has been cropped into a complex-valued image with dimensions 320 × 320. To ensure fairness, for each problem setting, each method, both proposed and baseline, is fine-tuned for optimal PSNR using 10 slices from a validation set separate from the test set. The same step size γ and regularization parameter τ are then applied consistently across the entire test set.
Hardware Specification No The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies No The paper mentions software components like "Adam optimizer" and "U-Net architecture" but does not specify any version numbers for programming languages, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup Yes The model is trained with the Adam optimizer with a learning rate of 5e-5. We select 1,000 different α values to train the model, following the α schedule outlined by I2SB (Liu et al., 2023).
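The report quotes PSNR as the paper's primary quantitative metric. For reference, the standard definition can be sketched in a few lines of NumPy; the function name `psnr` and the `data_range` parameter are our own choices for illustration, not taken from the paper's (unreleased) code:

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(data_range^2 / MSE)."""
    mse = np.mean((np.asarray(reference) - np.asarray(estimate)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Example: a uniform error of 0.1 on a [0, 1] image gives MSE = 0.01,
# hence PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = np.zeros((8, 8))
est = ref + 0.1
print(psnr(ref, est))  # → 20.0
```

SSIM, the paper's second metric, involves local luminance/contrast/structure statistics and is typically taken from an existing implementation such as scikit-image rather than written by hand.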
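The ℓ2-norm data-fidelity term for measurements y = Ax + e corresponds to a standard gradient step on f(x) = ½‖Ax − y‖², whose gradient is Aᵀ(Ax − y). A minimal sketch, assuming a dense matrix A for clarity (the helper name `data_fidelity_step` is hypothetical, not from the paper):

```python
import numpy as np

def data_fidelity_step(x, y, A, step_size):
    """One gradient-descent step on f(x) = 0.5 * ||A x - y||^2.

    The gradient of f is A.T @ (A x - y).
    """
    grad = A.T @ (A @ x - y)
    return x - step_size * grad

# With A = I and step_size = 1, one step lands exactly on y,
# since x - (x - y) = y.
A = np.eye(3)
x = np.zeros(3)
y = np.array([1.0, 2.0, 3.0])
print(data_fidelity_step(x, y, A, 1.0))  # → [1. 2. 3.]
```

In the paper's CS-MRI and SISR settings, A is a subsampled Fourier or downsampling operator applied as a function rather than stored as a dense matrix, but the gradient expression is the same.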