Multi-view Collaborative Gaussian Process Dynamical Systems

Authors: Shiliang Sun, Jingjing Fei, Jing Zhao, Liang Mao

JMLR 2023

Reproducibility assessment (variable: result, followed by the supporting excerpt from the paper):
Research Type: Experimental. "We evaluate our model on two-view data sets, and our model obtains better performance compared with the state-of-the-art multi-view GPDSs." Section 6 provides extensive experimental evaluations to validate the effectiveness of the model.
Researcher Affiliation: Academia. Shiliang Sun (School of Computer Science and Technology, East China Normal University, Shanghai 200062, P. R. China; Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, P. R. China); Jingjing Fei, Jing Zhao, and Liang Mao (School of Computer Science and Technology, East China Normal University, Shanghai 200062, P. R. China).
Pseudocode: Yes. "Algorithm 1: Prediction with the McGPDS."
Open Source Code: Yes. "For an implementation of McGPDS in Matlab, see https://github.com/mcgpds/mcgpds."
Open Datasets: Yes. "In this experiment, we use the human motion data which contain a set of 3D human poses and their corresponding silhouettes. The data are collected by Agarwal and Triggs (Agarwal and Triggs, 2006)." "In this experiment, we employ the CUAVE data, which are composed of videos showing a person speaking Arabic numerals and the corresponding Mel-frequency cepstral coefficient (MFCC) features of the audio signals." "In the final experiment, we examine McGPDS on a classification task. We use the Oil dataset, which contains 1000 12-dimensional examples from 3 classes, following the setting of Damianou et al. (2012)."
Dataset Splits: Yes. "We use 566 frames for training, which contain 5 sequences corresponding to walking motions in different directions. The test data is a separate walking sequence of 158 frames." "We use 194 frames of videos and MFCC features as training data and 51 frames of videos for testing."
Hardware Specification: No. The paper does not describe the hardware used for its experiments; no CPU or GPU models or other hardware specifications are mentioned.
Software Dependencies: No. The paper mentions Matlab for the implementation but provides no version numbers for Matlab or for any other software dependencies, libraries, or frameworks.
Experiment Setup: Yes. "For comparison, all models are trained with the same initializations, and we set J = 1 in the proposed model. For the toy data experiments, we use a linear kernel without inducing points, and the dimension of each view's private latent variable is set to 1. For the real-world data experiments, we use an RBF kernel with the variance initialized to 1. We use 100 inducing points, and the dimension of each view's private latent variable is set to 5 unless otherwise stated. For all the experiments, alpha is initialized to 0.5 for each view, and the mixture weights in the output layer are independently initialized from a Gaussian distribution with mean 0 and variance 0.01. For the K-nearest neighbor method, we set K = 1."
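The initialization scheme quoted above can be sketched in code. This is a minimal, hypothetical Python sketch (the authors' implementation is in Matlab and its API is not quoted here); the function name, the `output_dim` parameter, and the dictionary layout are illustrative assumptions, but the numeric choices (RBF variance 1, alpha 0.5 per view, mixture weights drawn i.i.d. from N(0, 0.01), 100 inducing points, private latent dimension 5) follow the paper's stated setup:

```python
import numpy as np

def init_mcgpds_hyperparams(num_views=2, num_inducing=100,
                            private_dim=5, output_dim=59, seed=0):
    """Illustrative sketch of the hyperparameter initialization described
    in the paper's experiment setup. All names here are assumptions, not
    the authors' Matlab API; `output_dim` is a hypothetical placeholder
    for the per-view output dimensionality."""
    rng = np.random.default_rng(seed)
    return {
        # RBF kernel variance initialized to 1 (real-world experiments).
        "rbf_variance": [1.0] * num_views,
        # alpha initialized to 0.5 for each view.
        "alpha": [0.5] * num_views,
        # Output-layer mixture weights: independent draws from a Gaussian
        # with mean 0 and variance 0.01 (i.e. standard deviation 0.1).
        "mixture_weights": rng.normal(0.0, np.sqrt(0.01),
                                      size=(num_views, output_dim)),
        # 100 inducing points; private latent dimension 5 per view.
        "num_inducing": num_inducing,
        "private_latent_dim": private_dim,
    }
```

For the toy-data experiments the paper instead uses a linear kernel with no inducing points and a private latent dimension of 1, which would correspond to calling such a routine with `num_inducing=0` and `private_dim=1`.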