Optimizing Intermediate Representations of Generative Models for Phase Retrieval
Authors: Tobias Uelwer, Sebastian Konietzny, Stefan Harmeling
TMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "With extensive experiments on the Fourier phase retrieval problem and thorough ablation studies, we can show the benefits of our modified ILO and the new initialization schemes. Additionally, we analyze the performance of our approach on the Gaussian phase retrieval problem." (Section 3: Experimental Evaluation) |
| Researcher Affiliation | Academia | Tobias Uelwer, Sebastian Konietzny, Stefan Harmeling (Department of Computer Science, Technical University of Dortmund) |
| Pseudocode | No | The paper describes the steps of the PRILO method (forward optimization, back-projection, refinement) in detail using equations (5)-(7) and prose, but does not present these steps in a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | "Our implementation is based on open-source projects" (footnotes 1-3). |
| Open Datasets | Yes | We evaluate our method on the following datasets: MNIST [20], EMNIST [7], FMNIST [40], and CelebA [22]. |
| Dataset Splits | Yes | Each reported number was calculated on the reconstructions of 1024 test samples. We allow four random restarts and select the generated sample resulting in the lowest magnitude error. ... PRILO-LI based on StyleGAN on CelebA data: We omit different steps of our method and report the PSNR on a validation set consisting of 64 images. |
| Hardware Specification | Yes | In total, our computations took two weeks on two NVIDIA A100 GPUs. |
| Software Dependencies | No | The paper mentions that the VAE was trained using 'Adam with learning rate 10^-3' and that the implementation is 'based on open-source projects', but it does not specify version numbers for any software components or libraries used in their own implementation. |
| Experiment Setup | Yes | Additionally, we also found it helpful to apply a small normally-distributed perturbation with mean 0 and standard deviation σ = 0.05 to the predicted latent representation. ... In our experiments we set η = 0.02 and γ = 0.55. ... In our experiments, we set λperc = 5·10^-5 and λadv = 0.1. ... Detailed hyperparameter settings can be found in Appendix G. ... Appendix A: The VAE was trained using a Bernoulli likelihood and Adam with learning rate 10^-3 for 100 epochs. |
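The latent-perturbation step quoted above (Gaussian noise with mean 0 and σ = 0.05 added to the predicted latent representation) can be sketched as follows. This is a minimal illustration of that one detail, not the authors' implementation; the function name and NumPy-array interface are assumptions.

```python
import numpy as np

def perturb_latent(z, sigma=0.05, rng=None):
    """Illustrative sketch: add a small normally-distributed perturbation
    (mean 0, standard deviation `sigma`, default 0.05 as in the paper)
    to a predicted latent representation `z`."""
    rng = np.random.default_rng() if rng is None else rng
    return z + rng.normal(loc=0.0, scale=sigma, size=z.shape)
```

Such a perturbation is a common way to avoid committing the optimizer to an exact initialization predicted by a learned network; the paper reports it as helpful but does not prescribe this particular code shape.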