Learning Conditional Generative Models for Phase Retrieval
Authors: Tobias Uelwer, Sebastian Konietzny, Alexander Oberstrass, Stefan Harmeling
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive ablation studies demonstrate that all components of our approach are essential and justify the choice of network architecture. We extensively evaluate all variants of the PRCGAN on the Fourier phase retrieval problem using openly available benchmark datasets. Section 7 presents the results of our experiments for the Fourier and Gaussian phase retrieval problem. Furthermore, we analyze the out-of-distribution generalization of our method and perform an extensive ablation study. |
| Researcher Affiliation | Academia | Tobias Uelwer EMAIL Department of Computer Science Technical University of Dortmund Otto-Hahn-Straße 12, 44227 Dortmund, Germany Sebastian Konietzny EMAIL Department of Computer Science Technical University of Dortmund Otto-Hahn-Straße 12, 44227 Dortmund, Germany Alexander Oberstrass EMAIL Department of Computer Science Heinrich Heine University Düsseldorf Universitätsstraße 1, 40225 Düsseldorf, Germany Stefan Harmeling EMAIL Department of Computer Science Technical University of Dortmund Otto-Hahn-Straße 12, 44227 Dortmund, Germany |
| Pseudocode | No | The paper describes the methods using mathematical formulations and textual explanations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | For our experiments, we consider six different datasets. Four of these datasets consist of 28×28 grayscale images, namely, MNIST (LeCun et al., 1998), FMNIST (Xiao et al., 2017), EMNIST (Cohen et al., 2017) and KMNIST (Clanuwat et al., 2018). The other two datasets consist of color images: the CelebA dataset (Liu et al., 2015) and the well-known CIFAR-10 dataset (Krizhevsky et al., 2009). |
| Dataset Splits | Yes | We determined all hyperparameters by using a separate validation dataset. We use 1024 images in each test set to limit the computational time. |
| Hardware Specification | Yes | The runtimes of the learning-based methods are measured on an NVIDIA A100 GPU. |
| Software Dependencies | No | The paper mentions using the Adam optimizer (Kingma and Ba, 2014) and various network architectures (MLP, CNN, VAE, DCGAN) but does not specify any software libraries or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | We trained all previously mentioned learning-based models with a batch size of 32 for the MNIST-like datasets and 64 for the color images, using the Adam optimizer (Kingma and Ba, 2014). We trained the PRCGAN for 100 epochs for all datasets except for CIFAR-10, where we increased the number of epochs to 250. We set λ = 100 for MNIST, EMNIST, and KMNIST and used λ = 1000 for FMNIST, CelebA, and CIFAR-10. Analogous to the DPR approach, we optimized the latent variable z using 10,000 steps with a learning rate of 0.1. In the weight optimization of PRCGAN-W, we also used 10,000 steps but with a decreased learning rate of 10⁻⁶. |
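The Experiment Setup row describes iterative optimization against a Fourier magnitude loss (in the paper, over a generator's latent variable z). As a hedged illustration of that objective, the sketch below minimizes the magnitude loss || |Fx| − y ||² by plain gradient descent directly on the image x, with an analytic gradient. This is a simplified stand-in, not the authors' method: it omits the generator network and Adam, and uses far fewer steps than the paper's 10,000.

```python
import numpy as np

def magnitude_loss(x, y):
    # || |F x| - y ||^2 with an orthonormal (unitary) 2D DFT
    X = np.fft.fft2(x, norm="ortho")
    return np.sum((np.abs(X) - y) ** 2)

def magnitude_grad(x, y):
    # Analytic gradient of the loss above for real-valued x:
    # 2 * Re( F^H ( (|X| - y) * X / |X| ) )
    X = np.fft.fft2(x, norm="ortho")
    residual = np.abs(X) - y
    phase = X / np.maximum(np.abs(X), 1e-12)  # guard against zero bins
    return 2.0 * np.real(np.fft.ifft2(residual * phase, norm="ortho"))

rng = np.random.default_rng(0)
target = rng.random((28, 28))                       # hypothetical 28x28 image
y = np.abs(np.fft.fft2(target, norm="ortho"))      # observed Fourier magnitudes

x = rng.random((28, 28))                           # random initialization
lr = 0.05                                          # illustrative step size
loss_before = magnitude_loss(x, y)
for _ in range(300):
    x -= lr * magnitude_grad(x, y)
loss_after = magnitude_loss(x, y)
```

Because of the well-known phase-retrieval ambiguities (global shift, flip), the recovered x need not match the target pixel-wise even when the magnitude loss is driven low; the paper's conditional generative prior is what resolves this ambiguity.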