Conditional Sampling of Variational Autoencoders via Iterated Approximate Ancestral Sampling

Authors: Vaidotas Simkus, Michael U. Gutmann

TMLR 2023

Each entry below lists the reproducibility variable, the result, and the LLM response (supporting evidence quoted from the paper).
Research Type: Experimental
"we systematically outline the pitfalls in the context of VAEs, propose two original methods that address these pitfalls, and demonstrate an improved performance of the proposed methods on a set of sampling tasks... Evaluate the samplers on a set of conditional sampling tasks: (semi-)synthetic, where sampling from the ground truth conditional distributions is computationally tractable, and real-world missing data imputation tasks, where the ground truth distribution is not available."
Researcher Affiliation: Academia
"Vaidotas Simkus EMAIL Michael U. Gutmann EMAIL School of Informatics, University of Edinburgh"
Pseudocode: Yes
"Algorithm 1 Adaptive collapsed-Metropolis-within-Gibbs... Algorithm 2 Latent-adaptive importance resampling"
Open Source Code: Yes
"Detailed evaluation of the proposed methods is provided in section 5 and the code to reproduce the experiments is available at https://github.com/vsimkus/vae-conditional-sampling."
Open Datasets: Yes
"5.1 Mixture-of-Gaussians MNIST... 5.2 Real-world UCI data sets... 5.3 Omniglot data set... C.2 Mixture-of-Gaussians MNIST... C.3 UCI data sets... C.4 Handwritten character Omniglot data set"
Dataset Splits: Yes
"We now evaluate the proposed methods on real-world data sets from the UCI repository... on incomplete test data with 50% missingness... We then evaluate the existing and proposed methods for conditional imputation of test set images that miss 1, 2, and 3 random quadrants."
Hardware Specification: No
The paper does not explicitly describe the hardware used for the experiments; it only mentions architectures such as "ConvResNet".
Software Dependencies: No
The paper mentions the "Adam optimiser" and implies deep learning frameworks through architectures such as "ResNet", but it does not provide version numbers for any software dependency.
Experiment Setup: Yes
"To optimise the VAE model we have used the sticking-the-landing gradients (Roeder et al., 2017) and fit the model using batch size of 200 for 6000 epochs using Adam optimiser (Kingma & Ba, 2014) with a learning rate of 10^-4... For all models, the variational and the generator (decoder) distributions were fitted to be in the diagonal Gaussian family... Adam optimiser (Kingma & Ba, 2014) with learning rate of 10^-3 for a total of 200k stochastic gradient ascent steps... using batch size of 512... while using 8 Monte Carlo samples in each iteration... Adam optimiser (Kingma & Ba, 2014) with a learning rate of 10^-4 and a cosine annealing schedule, for a total of 3k stochastic gradient ascent steps using a batch size of 200."
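The quoted setup mentions a learning rate of 10^-4 decayed with a cosine annealing schedule over 3k steps. As a minimal sketch of what such a schedule computes (the exact variant the authors used, including any minimum learning rate, is an assumption here; the standard form with lr_min = 0 is shown):

```python
import math

def cosine_annealed_lr(step, total_steps, lr_max=1e-4, lr_min=0.0):
    """Standard cosine annealing: decay from lr_max at step 0 to lr_min
    at total_steps, following half a cosine wave."""
    progress = min(step / total_steps, 1.0)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * progress))

# The schedule starts at the quoted peak rate and decays smoothly:
print(cosine_annealed_lr(0, 3000))     # lr_max = 1e-4
print(cosine_annealed_lr(1500, 3000))  # midpoint, half of lr_max
print(cosine_annealed_lr(3000, 3000))  # lr_min = 0.0
```

In a framework such as PyTorch this corresponds to pairing the Adam optimiser with a cosine annealing scheduler rather than computing the rate by hand.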