Multi-view Learning as a Nonparametric Nonlinear Inter-Battery Factor Analysis

Authors: Andreas Damianou, Neil D. Lawrence, Carl Henrik Ek

JMLR 2021

Reproducibility assessment (each entry gives the variable, the assessed result, and the supporting LLM response):
Research Type: Experimental. "We further show experimental results on several different types of multi-view data sets and for different kinds of tasks, including exploratory data analysis, generation, ambiguity modelling through latent priors and classification."
Researcher Affiliation: Collaboration. Andreas Damianou (Amazon, Cambridge, United Kingdom); Neil D. Lawrence (University of Cambridge, United Kingdom); Carl Henrik Ek (University of Cambridge, United Kingdom).
Pseudocode: Yes. "Algorithm 1: Inference algorithm in MRD, assuming two sets of views YA and YB."
Open Source Code: No. The text contains neither a link to source code nor an explicit statement about its public release. The URL provided (http://git.io/vwLhH) points to online videos, not code.
Open Datasets: Yes. The paper uses several publicly available datasets, each with a citation: the Yale face database B (Georghiades et al., 2001), the data set of Agarwal and Triggs (2006), the oil flow database (Bishop and James, 1993), and the AVletters database (Matthews et al., 2002).
Dataset Splits: Yes. For the pose estimation experiment: "We used a subset of 5 sequences, totaling 649 frames... A separate walking sequence of 158 frames was used as a test set." For AVletters: "letters B, M and T were left out of the training set completely to be used at test time. For each modality, we thus had 69 rows (23 letters * 3 trials)... In the test set, each modality had only 9 rows (3 letters * 3 trials)." Table 2 further details the view/row split.
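The AVletters split arithmetic quoted above can be checked with a minimal sketch; the counts come from the paper, while the variable names are illustrative:

```python
# AVletters split sizes as described in the paper (sketch; names are illustrative).
letters_total = 26            # letters A-Z
held_out = ["B", "M", "T"]    # letters left out of training entirely
trials_per_letter = 3         # each letter recorded in 3 trials per modality

# Training rows per modality: 23 letters * 3 trials
train_rows = (letters_total - len(held_out)) * trials_per_letter
# Test rows per modality: 3 held-out letters * 3 trials
test_rows = len(held_out) * trials_per_letter

print(train_rows, test_rows)  # 69 9
```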
Hardware Specification: No. The paper does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running the experiments.
Software Dependencies: No. The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) used for implementation or experimentation.
Experiment Setup: Yes. The paper specifies setup details such as the number of latent dimensions used at initialization: "The model is initialized with q = 8 latent dimensions" for the toy data, "q = 14 latent dimensions" for the Yale faces experiment, and "q = 15 latent dimensions" for pose estimation. It also states, "In our experiments we use ε = 10^-3" for the threshold value used in latent space segmentation.
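The ε threshold mentioned above is used to segment the latent space into shared and private subspaces by comparing each view's relevance (ARD-style) weights against ε = 10^-3. A minimal sketch of that thresholding step, using hypothetical weight values (only the threshold ε = 1e-3 comes from the paper):

```python
# Sketch of latent-space segmentation by thresholding per-view relevance weights.
# The threshold eps = 1e-3 is from the paper; the weight values are hypothetical.
eps = 1e-3

# Hypothetical relevance weights for q = 8 latent dimensions, one list per view.
w_A = [0.9, 0.7, 1e-5, 0.4, 1e-6, 0.0, 0.2, 1e-4]
w_B = [0.8, 1e-6, 0.6, 0.5, 1e-5, 0.0, 1e-4, 0.3]

# A dimension is "switched on" for a view when its weight exceeds eps.
on_A = [w > eps for w in w_A]
on_B = [w > eps for w in w_B]

q = len(w_A)
shared = [j for j in range(q) if on_A[j] and on_B[j]]        # relevant to both views
private_A = [j for j in range(q) if on_A[j] and not on_B[j]] # relevant only to view A
private_B = [j for j in range(q) if on_B[j] and not on_A[j]] # relevant only to view B
unused = [j for j in range(q) if not on_A[j] and not on_B[j]]

print(shared, private_A, private_B, unused)  # [0, 3] [1, 6] [2, 7] [4, 5]
```

Dimensions whose weights fall below ε in every view are effectively pruned, which is how the model can be initialized with a generous q and still recover a compact segmentation.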