Cauchy–Schwarz Regularized Autoencoder

Authors: Linh Tran, Maja Pantic, Marc Peter Deisenroth

JMLR 2022

Reproducibility assessment. Each entry lists the variable, the result, and the supporting LLM response.
Research Type: Experimental
    "We provide empirical studies on a range of datasets and show that our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis. Keywords: Generative models, Cauchy Schwarz divergence, constrained optimization, auto-encoding models, face analysis"
Researcher Affiliation: Academia
    Linh Tran (EMAIL), Department of Computing, Imperial College London
    Maja Pantic (EMAIL), Department of Computing, Imperial College London
    Marc Peter Deisenroth (EMAIL), Centre for Artificial Intelligence, University College London
Pseudocode: No
    The paper does not contain any clearly labeled pseudocode or algorithm blocks. It provides mathematical derivations but no step-by-step algorithmic procedures.
Open Source Code: No
    The paper does not contain an explicit statement about the release of source code for the described methodology, nor does it provide a direct link to a code repository.
Open Datasets: Yes
    "For density estimation, k-NN clustering, and semi-supervised learning we carried out experiments using five image datasets: static MNIST (Larochelle and Murray, 2011), dynamic MNIST (Salakhutdinov and Murray, 2008), Omniglot (Lake et al., 2015), Caltech 101 Silhouettes (Marlin et al., 2010) and CIFAR10 (Krizhevsky and Hinton, 2009). For semi-supervised facial action unit recognition we used DISFA (Mavadati et al., 2013) and FERA2015 (Valstar et al., 2015)."
Dataset Splits: Yes
    "Table 5: Setups of all datasets used for evaluation. Binarization is only used for static MNIST, dynamic MNIST, Omniglot, Caltech 101 and CIFAR10. For DISFA and BP4D+ we have three different sample sizes for train, validation and test due to 3-fold cross-validation. For semi-supervised learning on DISFA and FERA2015, we perform optimization in two phases: first, we train in an unsupervised fashion without any labels; subsequently, we use the pre-trained model for semi-supervised training with labels. Further, we also use iterative balanced batches during training to counter the imbalance of both datasets' label distributions."
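The "iterative balanced batches" quoted above are not specified further in this report. A minimal sketch of one common way to balance class frequencies within each batch; the function name and the resample-with-replacement strategy are assumptions, not the paper's implementation:

```python
import random
from collections import defaultdict

def balanced_batches(labels, batch_size, seed=0):
    """Yield batches of dataset indices with equal samples per class.

    Minority classes are resampled with replacement so they appear as
    often as majority classes, countering label imbalance. This is an
    illustrative sketch, not the authors' code.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = sorted(by_class)
    per_class = max(1, batch_size // len(classes))
    while True:
        batch = []
        for c in classes:
            # sample with replacement so rare classes are never exhausted
            batch.extend(rng.choices(by_class[c], k=per_class))
        rng.shuffle(batch)
        yield batch
```

With a 90/10 class split and `batch_size=8`, each yielded batch contains four indices per class rather than the dataset's skewed proportions.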
Hardware Specification: No
    The paper mentions different model architectures ('MLP-based models' and 'convolutional models with residual blocks') for different datasets, but it does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies: No
    The paper reports training details but names no software libraries or versions: "For training all other models, we used the ADAM algorithm (Kingma and Ba, 2015), where we set the learning rate to 5e-4 and the batch size to 100. Additionally, we used a linear warm-up (Bowman et al., 2015) for 100 epochs to avoid early collapse of the latent variable due to the divergence regularization."
Experiment Setup: Yes
    "For training all other models, we used the ADAM algorithm (Kingma and Ba, 2015), where we set the learning rate to 5e-4 and the batch size to 100. Additionally, we used a linear warm-up (Bowman et al., 2015) for 100 epochs to avoid early collapse of the latent variable due to the divergence regularization. During training, we used early stopping with a look-ahead of 100 iterations to prevent over-fitting. Table 4: Fixed and variable hyperparameters for unsupervised and semi-supervised learning."
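The quoted setup combines a linear warm-up of the divergence weight over 100 epochs with early stopping on a 100-iteration look-ahead. A minimal sketch of both pieces; this is one plausible reading of the description, not the authors' code:

```python
def warmup_weight(epoch, warmup_epochs=100):
    """Linear warm-up coefficient in (0, 1] for the divergence
    regularizer, annealed over `warmup_epochs` epochs to avoid early
    collapse of the latent variable (Bowman et al., 2015 style)."""
    return min(1.0, (epoch + 1) / warmup_epochs)

class EarlyStopping:
    """Signal a stop when the validation loss has not improved for
    `patience` consecutive checks; the paper's 100-iteration
    look-ahead would correspond to patience=100."""
    def __init__(self, patience=100):
        self.patience = patience
        self.best = float("inf")
        self.bad_checks = 0

    def should_stop(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience
```

In the full training loop (not shown), each step would minimize the reconstruction loss plus `warmup_weight(epoch)` times the divergence term, optimized with ADAM at learning rate 5e-4 and batch size 100 as quoted above.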