Unsupervised Training of Convex Regularizers using Maximum Likelihood Estimation

Authors: Hong Ye Tan, Ziruo Cai, Marcelo Pereyra, Subhadip Mukherjee, Junqi Tang, Carola-Bibiane Schönlieb

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that the proposed method produces image priors that are comparable in performance to the analogous supervised models for various image corruption operators, maintaining significantly better generalization properties when compared to end-to-end methods. Moreover, we provide a detailed theoretical analysis of the convergence properties of our proposed algorithm. [...] Section 4 evaluates the performance of our proposed method against various supervised and unsupervised baseline methods.
Researcher Affiliation | Academia | Hong Ye Tan (EMAIL), Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom; Ziruo Cai (EMAIL), School of Mathematical Sciences, Shanghai Jiao Tong University, China; Marcelo Pereyra (EMAIL), School of Mathematical & Computer Sciences, Heriot-Watt University, Edinburgh; Subhadip Mukherjee (EMAIL), Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, India; Junqi Tang (EMAIL), School of Mathematics, University of Birmingham, United Kingdom; Carola-Bibiane Schönlieb (EMAIL), Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
Pseudocode | Yes | Algorithm 1 (SAPG ULA); Algorithm 2 (Batched SAPG ULA)
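The SAPG-ULA scheme referenced above can be illustrated with a toy one-dimensional sketch (all model choices here are illustrative assumptions, not taken from the paper): two unadjusted Langevin chains, one targeting the posterior and one targeting the prior, feed a stochastic-approximation ascent step on the regularization parameter θ of a prior p_θ(x) ∝ exp(−θ R(x)).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D setup (hypothetical, for illustration only):
# data model y = x + Gaussian noise, prior p_theta(x) ∝ exp(-theta * R(x))
y = 2.0
sigma2 = 0.5          # noise variance

def R(x):             # quadratic stand-in for a learned regularizer
    return 0.5 * x ** 2

def grad_R(x):
    return x

gamma = 1e-2          # Langevin step size (toy value, not the paper's)
delta = 1e-2          # stochastic-approximation step for theta
theta = 1.0
x_post, x_pri = 0.0, 0.0

for _ in range(5000):
    # ULA step on the posterior chain, target ∝ exp(-(y-x)^2/(2σ²) - θ R(x))
    g_post = -(y - x_post) / sigma2 + theta * grad_R(x_post)
    x_post += -gamma * g_post + np.sqrt(2 * gamma) * rng.standard_normal()
    # ULA step on the prior chain, target ∝ exp(-θ R(x))
    g_pri = theta * grad_R(x_pri)
    x_pri += -gamma * g_pri + np.sqrt(2 * gamma) * rng.standard_normal()
    # SAPG update: the marginal-likelihood gradient in θ is
    # E_prior[R(x)] - E_posterior[R(x)], estimated from single chain states
    theta = max(theta + delta * (R(x_pri) - R(x_post)), 1e-3)
```

The θ update follows from differentiating the log marginal likelihood: the normalization constant of p_θ contributes +E_prior[R(x)] and the data term contributes −E_posterior[R(x)].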
Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | Inspired by traditional machine learning, we aim to train a neural network regularizing prior with more data, such as on a standard image dataset like STL-10.
Dataset Splits | No | The paper mentions training on the STL-10 dataset and using 50 test images for evaluation (Tables 1, 2, 3), but it does not provide specific details about the training, validation, or test splits (e.g., percentages, sample counts, or methodology for creating splits) for the main dataset used in training the regularizer.
Hardware Specification | No | The paper does not specify any particular hardware components, such as GPU models, CPU types, or memory, used for conducting the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with their version numbers required to replicate the experiments.
Experiment Setup | Yes | For SAPG, the step-sizes γ, γ′ for the likelihood and prior Markov chains R_{γ,θ}, R_{γ′,θ} respectively are given by γ = γ′ = 1e-4. ... To compute the MAP estimate, the negative log-posterior φ_θ is minimized using the Adam optimizer for up to 10^4 iterations due to being faster than gradient descent, with learning rate 10^-3 and other parameters as default. ... For SAPG, the step-sizes for the likelihood and prior Markov chains are given by γ = 5e-6 and γ′ = 1e-5, respectively.
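The MAP computation described in the setup can be sketched on a toy quadratic problem (a hypothetical model, not the paper's; plain gradient descent stands in for Adam to keep the example dependency-free, while the learning rate 1e-3 and the 10^4-iteration budget match the quoted setup):

```python
# Toy MAP estimate: minimize the negative log-posterior
#   phi_theta(x) = (y - x)^2 / (2 * sigma2) + theta * x^2 / 2
# by gradient descent (the paper uses Adam with the same lr and budget).
y, sigma2, theta = 2.0, 0.5, 1.0   # hypothetical data and prior weight
lr = 1e-3                          # learning rate from the quoted setup
x = 0.0
for _ in range(10_000):            # up to 10^4 iterations, as in the paper
    grad = -(y - x) / sigma2 + theta * x
    x -= lr * grad
# For this quadratic, the exact minimizer is y / (1 + sigma2 * theta).
```

For a quadratic objective the iteration converges linearly, so 10^4 steps at this rate reach the closed-form minimizer to high precision.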