Bayesian Regularization of Latent Representation

Authors: Chukwudi Paul Obite, Zhi Chang, Keyan Wu, Shiwei Lan

ICLR 2025

Reproducibility assessment (each variable with its result, followed by the LLM response):
Research Type: Experimental
"We establish connections between QEP-LVM and probabilistic PCA, demonstrating its superior performance through experiments on datasets such as the Swiss roll, oil flow, and handwritten digits."
Researcher Affiliation: Academia
"Chukwudi Paul Obite, Zhi Chang, Keyan Wu, Shiwei Lan. School of Mathematical & Statistical Sciences, Arizona State University, 901 S Palm Walk, Tempe, AZ 85287, USA"
Pseudocode: No
The paper describes methods and derivations using mathematical notation and text, but does not include any explicitly labeled pseudocode or algorithm blocks. It explains procedures in paragraph form.
Open Source Code: Yes
"All the numerical examples have been efficiently implemented in GPyTorch (Gardner et al., 2018) and the computer codes are publicly available at https://github.com/lanzithinking/Reg_Rep."
Open Datasets: Yes
"First we consider the Swiss roll dataset (Marsland, 2014) usually used in manifold learning. Next, we demonstrate the behavior of QEP-LVM and contrast it with GP-LVM using the canonical multi-phase oil-flow dataset (Bishop & James, 1993)... Lastly, we consider the MNIST database (Lecun et al., 1998) consisting of 60,000 training and 10,000 testing handwritten digits of size 28 x 28."
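As a quick illustration of the first open dataset cited above, a minimal sketch of generating a Swiss roll with scikit-learn's `make_swiss_roll` (the sample count and noise level here are assumptions for illustration; the paper's exact generation settings are not quoted in this report):

```python
# Hypothetical sketch: a Swiss roll like the manifold-learning dataset
# referenced by the paper (Marsland, 2014). Sample count and noise are
# illustrative assumptions, not taken from the paper.
from sklearn.datasets import make_swiss_roll

# X: 3-D points lying on a rolled 2-D manifold; t: position along the roll
X, t = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)
print(X.shape)  # (1000, 3)
```

Latent-variable models such as GP-LVM/QEP-LVM are typically evaluated on this dataset by checking whether the learned 2-D latent space "unrolls" the manifold.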
Dataset Splits: Yes
"Lastly, we consider the MNIST database (Lecun et al., 1998) consisting of 60,000 training and 10,000 testing handwritten digits of size 28 x 28."
Hardware Specification: No
The paper does not explicitly describe any specific hardware used for running the experiments. It mentions an implementation in GPyTorch, which often leverages GPUs, but no specific models or configurations are provided.
Software Dependencies: No
"All the numerical examples have been efficiently implemented in GPyTorch (Gardner et al., 2018)." The paper mentions a specific software library (GPyTorch) and its publication year, but does not provide a version number for the library itself or for any other key software component.
Experiment Setup: Yes
"Throughout this section, we use the kernel (6) and the Gamma prior Γ(4, 2) if varying q > 0 in the Bayesian framework. QEP-LVMs are trained for different qs with 25 inducing points. We set the latent dimension to be 10 and use t-SNE to project latent spaces... with 128 inducing points."
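Two pieces of the quoted setup can be sketched directly: the Γ(4, 2) prior on q and the t-SNE projection of a 10-dimensional latent space to 2-D. A minimal sketch (the random latent matrix below is a placeholder standing in for the trained QEP-LVM latents, which the paper learns from data; the perplexity value is an assumption):

```python
# Sketch only: random latents stand in for trained QEP-LVM latent variables.
import numpy as np
from scipy.stats import gamma
from sklearn.manifold import TSNE

# Gamma(4, 2) prior on q: shape alpha = 4, rate beta = 2 (scipy uses
# scale = 1/beta = 0.5), so the prior mean is alpha/beta = 2.
q_prior = gamma(a=4, scale=0.5)
print(q_prior.mean())  # 2.0

# Project a 10-dimensional latent space to 2-D with t-SNE, as the paper
# does for visualization (perplexity here is an illustrative choice).
Z = np.random.default_rng(0).standard_normal((200, 10))  # placeholder latents
Z2 = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(Z)
print(Z2.shape)  # (200, 2)
```

Note the shape/rate vs. shape/scale convention: Γ(4, 2) with rate 2 corresponds to `scale=0.5` in SciPy, a common source of off-by-a-factor errors when reproducing priors.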