Variational Inference for Computational Imaging Inverse Problems

Authors: Francesco Tonolini, Jack Radford, Alex Turpin, Daniele Faccio, Roderick Murray-Smith

JMLR 2020

Reproducibility assessment. Each entry lists the variable, its result, and the supporting evidence (LLM response).
Research Type: Experimental.
Evidence: "Extensive simulated experiments show the advantages of the proposed framework. The approach is then applied to two real experimental optics settings: holographic image reconstruction and imaging through highly scattering media."
Researcher Affiliation: Academia.
Evidence: Francesco Tonolini, School of Computing Science, University of Glasgow; Jack Radford, School of Physics and Astronomy, University of Glasgow; Alex Turpin, School of Computing Science, University of Glasgow; Daniele Faccio, School of Physics and Astronomy, University of Glasgow; Roderick Murray-Smith, School of Computing Science, University of Glasgow.
Pseudocode: Yes.
Evidence: "Appendix B. Algorithms. In this supplementary section we detail the training procedure for the forward model pα(y|x) and the inverse model rθ(x|y). The following pseudo-code details the training of the two models: Algorithm 1, Training the Forward Model pα(y|x) ... Algorithm 2, Training the Inverse Model rθ(x|y)."
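The two-stage procedure named above (train a forward model on the few available pairs, then use it to simulate measurements for the large unpaired signal set and train the inverse model on those) can be illustrated with a toy sketch. This is not the paper's algorithm: linear least-squares maps stand in for the neural forward and inverse models, and all dimensions, names, and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in setup: a linear "true" measurement process
# y = A x + noise, a small paired set, and a large unpaired signal set.
d_x, d_y = 8, 4
A_true = rng.normal(size=(d_y, d_x))
X_paired = rng.normal(size=(3000, d_x))    # K = 3,000 paired examples
Y_paired = X_paired @ A_true.T + 0.01 * rng.normal(size=(3000, d_y))
X_large = rng.normal(size=(50000, d_x))    # large unpaired signal set

# Stage 1 (cf. Algorithm 1): fit the forward model p(y|x) on the paired
# data. A least-squares linear map replaces the neural forward model.
A_hat, *_ = np.linalg.lstsq(X_paired, Y_paired, rcond=None)

# Stage 2 (cf. Algorithm 2): simulate measurements for the unpaired set
# with the learned forward model, then fit the inverse model r(x|y) on
# the resulting simulated pairs.
Y_sim = X_large @ A_hat
B_hat, *_ = np.linalg.lstsq(Y_sim, X_large, rcond=None)

# Reconstruct a held-out signal from its (noiseless) measurement. With
# d_y < d_x the inverse is underdetermined, so this is a best estimate.
x_test = rng.normal(size=d_x)
x_rec = (x_test @ A_true.T) @ B_hat
```

The point of the sketch is the data flow, not the models: the forward model is learned only from the scarce pairs, while the inverse model benefits from the much larger set of unobserved ground-truth signals.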
Open Source Code: No.
Evidence: The paper neither states that code for the described methodology is released nor links to a code repository.
Open Datasets: Yes.
Evidence: "The proposed framework is first quantitatively evaluated in simulated image recovery experiments, making use of the benchmark data sets CelebA and CIFAR-10 (Liu et al., 2015; Krizhevsky, 2009). ... We display 9,600 MNIST digits on the DMD and record the corresponding camera observations. ... As target objects in these experiments are character-like shapes, the training images are taken from the NIST data set of hand-written characters (Johnson, 2010)."
Dataset Splits: Yes.
Evidence: "To simulate typical CI conditions, only a small subset of images degraded with the true transformation is made accessible ... with K = 3,000 paired examples generated with the true transformation to train upon. ... We display 9,600 MNIST digits on the DMD and record the corresponding camera observations. This data is used as high-fidelity paired ground truths X and measurements Y. The remaining 50,400 MNIST examples are used as the large set of unobserved ground-truth signals X. ... 86,400 NIST images are used as the large data set of unobserved target examples X. ... Only 1,000 examples of the 84,400 training targets were generated in this way and were taken as high-fidelity measurement estimates Y from corresponding ground-truth images X. ... Models are trained with K = 1,000 and K = 10,000 available training pairs. ... Reconstructions are performed with 2,000 test examples."
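The MNIST split described above (9,600 paired examples with recorded measurements, the remaining 50,400 digits as unpaired ground truths) can be sketched in a few lines. The shuffling and index names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# 60,000 MNIST training digits: 9,600 become the high-fidelity paired
# set, the remaining 50,400 the large unpaired ground-truth set.
n_total, n_paired = 60000, 9600
idx = rng.permutation(n_total)
paired_idx, unpaired_idx = idx[:n_paired], idx[n_paired:]
```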
Hardware Specification: Yes.
Evidence: "To simulate the experiments of interest here, they take on the order of a few minutes per example to run on a Titan X GPU. ... requiring less than 100 ms per sample to run on a Titan X GPU."
Software Dependencies: No.
Evidence: The paper does not provide version numbers for any of the software dependencies used in the experiments.
Experiment Setup: Yes.
Evidence: "Multi-fidelity forward models were built to have 300 hidden units in all deterministic layers, while the latent variable w was chosen to be 100-dimensional. The inverse models, both for the proposed framework and the comparative CVAE, were built with 2,500 hidden units in the deterministic layers and latent variables z of 800 dimensions."
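The layer widths and latent dimensions quoted above can be sanity-checked with a shape-only sketch. Only the hidden-unit counts (300 and 2,500) and latent sizes (100 and 400... rather, 100 and 800) come from the quote; the input size, number of deterministic layers, and random (untrained) weights are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda t: np.maximum(t, 0.0)

def mlp(sizes):
    """Random-weight MLP used only as a shape/parameter-count sketch."""
    return [(rng.normal(size=(m, n)) * 0.01, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:   # ReLU on all but the output layer
            x = relu(x)
    return x

d_in = 32 * 32  # assumed input size (32x32 images); not stated in the quote

# Forward-model encoder: 300-unit deterministic layers into the mean of
# the 100-dimensional latent w.
fwd_enc = mlp([d_in, 300, 300, 100])

# Inverse-model encoder: 2,500-unit deterministic layers into the mean of
# the 800-dimensional latent z.
inv_enc = mlp([d_in, 2500, 2500, 800])

x = rng.normal(size=(5, d_in))
w_mu = forward(fwd_enc, x)   # shape (5, 100)
z_mu = forward(inv_enc, x)   # shape (5, 800)

n_params = sum(W.size + b.size for W, b in fwd_enc + inv_enc)
```

The parameter count makes the asymmetry in the quoted setup concrete: the 2,500-unit inverse model dominates the total, which is consistent with inversion being the harder direction.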