Simultaneous Phase Retrieval and Blind Deconvolution via Convex Programming
Authors: Ali Ahmed, Alireza Aghasi, Paul Hand
JMLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Additionally, we provide an alternating direction method of multipliers (ADMM) implementation and provide numerical experiments that verify the theory. ... Numerical experiments show that, using this algorithm, one can successfully recover a blurred image from the magnitude only measurements of its Fourier spectrum. |
| Researcher Affiliation | Academia | Ali Ahmed EMAIL Department of Electrical Engineering Information Technology University Lahore, Pakistan Alireza Aghasi EMAIL Department of Business Analytics Georgia State University Atlanta, GA, USA Paul Hand EMAIL Department of Mathematics and Khoury College of Computer Sciences Northeastern University Boston, MA, USA |
| Pseudocode | No | The paper describes the ADMM scheme for optimization (Section 2) by outlining the variable updates: "each variable update at the k-th iteration is performed by minimizing L with respect to that variable while fixing the others." It then details "Performing the X-update" and "Performing the u-update" in prose, describing mathematical operations and conditions. However, it does not present these steps within a formal pseudocode or algorithm block (e.g., labeled "Algorithm 1"). |
| Open Source Code | Yes | "An implementation of our solver is publicly available at: https://github.com/branchhull/BDPR" |
| Open Datasets | No | The paper uses only synthetic data generated for its numerical experiments: "We consider the noiseless case with i.i.d. Gaussian matrices B and C." No standard, publicly available dataset is used. |
| Dataset Splits | No | The paper discusses generating "independently generated Gaussian matrices B and C" for its numerical experiments but does not mention standard dataset splitting methodologies like train/test/validation splits, cross-validation, or specific percentages/counts for data partitioning. |
| Hardware Specification | No | The paper describes numerical experiments and an ADMM implementation but does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run these experiments. |
| Software Dependencies | No | The paper mentions using "standard optimization toolboxes" and specifically notes, "we use quasi-Newton methods with cubic line search as implemented in Schmidt (2005)" with a link to "Software available at http://www.cs.ubc.ca/schmidtm/Software/minFunc.htm, 2005". While it references the source of a method, it does not provide specific version numbers for any libraries, frameworks, or programming languages used (e.g., Python 3.x, PyTorch 1.x, MATLAB R20xx). |
| Experiment Setup | Yes | In Figure 2 we present the phase portrait associated with the proposed convex framework. To obtain the diagram on the left panel, for each fixed value of m, we run the algorithm for 100 different combinations of n and k, each time using independently generated Gaussian matrices B and C. If the algorithm converges to a sufficiently close neighbourhood of the ground-truth solution (a relative error of less than 1% with respect to the ℓ2 norm), we label the experiment as successful. Figure 2 shows the collected success frequencies, where solid black corresponds to 100% success and solid white corresponds to 0% success. For an empirically selected constant c, the success region almost perfectly stands on the left side of the line n + k = cm/log² m. |
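The success criterion quoted in the Experiment Setup row (relative ℓ2 error below 1%) is simple to state in code. The sketch below is illustrative only; `x_hat` and `x_true` are hypothetical names for the recovered and ground-truth signals, not identifiers from the paper's released implementation:

```python
import numpy as np

def is_successful(x_hat, x_true, tol=0.01):
    """Label a trial successful if the relative l2 error is below tol (1%)."""
    rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    return rel_err < tol

# Success frequency over repeated trials, as aggregated in the phase portrait:
# frequency = np.mean([is_successful(xh, xt) for xh, xt in trials])
```

Averaging this boolean over the 100 trials per (m, n, k) cell yields the grayscale values of the phase portrait (black = 100% success, white = 0%).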
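As the Pseudocode row notes, the paper describes its ADMM scheme only in prose: "each variable update at the k-th iteration is performed by minimizing L with respect to that variable while fixing the others." A generic sketch of that cyclic update pattern is given below; the update functions are hypothetical placeholders, and the paper's concrete X- and u-updates are not reproduced here:

```python
# Generic ADMM-style loop: at each iteration, one block variable is updated
# by minimizing the augmented Lagrangian L with the other blocks held fixed,
# followed by a dual (multiplier) ascent step. The x_update, u_update, and
# dual_update callables are placeholders, not the paper's actual formulas.

def admm(x0, u0, lam0, x_update, u_update, dual_update, n_iter=100):
    x, u, lam = x0, u0, lam0
    for _ in range(n_iter):
        x = x_update(u, lam)          # minimize L over X with u, lam fixed
        u = u_update(x, lam)          # minimize L over u with X, lam fixed
        lam = dual_update(x, u, lam)  # dual ascent step on the multiplier
    return x, u
```

For example, plugging in the closed-form updates for a toy consensus problem (minimize (x−3)²/2 + (u−1)²/2 subject to x = u, penalty ρ = 1) drives both blocks to the consensus value 2.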