Single-Shot Plug-and-Play Methods for Inverse Problems

Authors: Yanqi Cheng, Lipei Zhang, Zhenda Shen, Shujun Wang, Lequan Yu, Raymond H. Chan, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate, through extensive numerical and visual experiments, that our method leads to better approximations. In this section, we describe the experiments undertaken to validate our proposed Single-Shot Plug-and-Play framework.
Researcher Affiliation | Academia | 1 Department of Applied Mathematics and Theoretical Physics, University of Cambridge; 2 Department of Mathematics, City University of Hong Kong; 3 Biomedical Engineering, Hong Kong Polytechnic University; 4 Department of Statistics and Actuarial Science, The University of Hong Kong; 5 School of Data Science, Lingnan University; 6 Yau Mathematical Sciences Center, Tsinghua University
Pseudocode | Yes | Algorithm 1: Single-Shot Plug-and-Play
Open Source Code | No | The paper does not provide a link to a code repository, nor does it state that the code for the methodology will be released.
Open Datasets | Yes | We sourced the images with Creative Commons Licenses and resized them to 512 × 384; the method was also tested on the selected data in Bevilacqua et al. (2012) and Zeyde et al. (2012), without resizing.
Dataset Splits | No | Unlike traditional methods that require extensive datasets, Single-Shot learning aims to make significant inferences from a single instance or, in some cases, a small set of instances. The experiments on Single-Shot Plug-and-Play (SS-PnP) used only a single image as input to the whole pipeline.
Hardware Specification | Yes | The empirical studies are trained and tested on an NVIDIA A10 GPU with 24GB RAM.
Software Dependencies | No | The Plug-and-Play phase used the ∇-Prox toolbox (Lai et al., 2023) with the default settings for all tasks. No version is given for the toolbox, and other software (e.g., deep learning frameworks) is not listed with specific versions.
Experiment Setup | Yes | During the initial implicit neural representation (INR) pre-training phase, Gaussian noise with a standard deviation in the range [0.001, 0.5] was explored, and 0.1 was used for all experiments. Training ran for 100 iterations with a network of 2 hidden layers and 64 features per layer, and a learning rate of 0.001. The image is then reconstructed over 5 ADMM iteration steps, with the dynamic noise strength and penalty parameter chosen by a logarithmic descent that gradually decreases from 35 to 30 over the steps.
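The reported setup can be sketched as follows. This is a minimal illustration, not the authors' code: all variable names are our own, and the use of `np.logspace` for the logarithmic descent of the noise strength from 35 to 30 over the 5 ADMM steps is an assumption about how such a schedule is typically implemented.

```python
import numpy as np

# Hyperparameters as reported in the Experiment Setup row
# (dictionary keys are hypothetical names, not from the paper's code).
INR_CONFIG = {
    "hidden_layers": 2,          # network depth during INR pre-training
    "hidden_features": 64,       # features per hidden layer
    "learning_rate": 1e-3,
    "pretrain_iterations": 100,
    "noise_std": 0.1,            # chosen from the explored range [0.001, 0.5]
}

N_ADMM_STEPS = 5
SIGMA_START, SIGMA_END = 35.0, 30.0  # dynamic noise strength / penalty range

# Logarithmic descent: values equally spaced in log-space,
# monotonically decreasing from 35 down to 30 across the ADMM steps.
sigmas = np.logspace(np.log10(SIGMA_START), np.log10(SIGMA_END), N_ADMM_STEPS)

print(np.round(sigmas, 2))
```

Each `sigmas[k]` would then set the denoiser strength (and penalty parameter) for ADMM step `k`, so the implicit prior is applied aggressively at first and more gently as the reconstruction converges.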