Bayesian time-aligned factor analysis of paired multivariate time series

Authors: Arkaprava Roy, Jana Schaich Borg, David B Dunson

JMLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show excellent performance in simulations, and illustrate the method through application to a social mimicry experiment. Keywords: CIFA; Dynamic factor model; Hamiltonian Monte Carlo; JIVE; Monotonicity; Paired time series; Social mimicry; Time alignment; Warping." The paper runs two simulations to evaluate TACIFA on pairs of multivariate time series, assessing: (1) the ability to recover the appropriate number of shared and individual factors; (2) the accuracy of the estimated warping functions and the accompanying uncertainty quantification; (3) out-of-sample prediction error; and (4) performance relative to two-stage approaches for estimating shared and individual-specific dynamic factors. The first simulation generates data from the proposed model; the second analyzes two shapes changing over time, data with no inherent connection to the proposed model.
Researcher Affiliation | Academia | Arkaprava Roy (EMAIL), Department of Biostatistics, University of Florida, Gainesville, FL 32611, USA; Jana Schaich Borg (EMAIL), Social Science Research Institute, Duke University, Durham, NC 27708-0251, USA; David B. Dunson (EMAIL), Department of Statistical Science, Duke University, Durham, NC 27708-0251, USA.
Pseudocode | No | The paper describes methods such as Hamiltonian Monte Carlo (HMC) and Gibbs updates and discusses algorithmic details in the supplementary materials, but it does not present any explicitly labeled pseudocode or algorithm blocks in the main text.
Open Source Code | No | The paper mentions using "Open Face software (Baltrusaitis et al., 2018)" as a tool. However, there is no explicit statement about releasing the source code for the TACIFA methodology developed in this paper, nor any link to a code repository.
Open Datasets | No | The paper states: "We apply TACIFA to data from a simple social interaction in which one participant was instructed to imitate the head movements of another." and "We apply TACIFA to the time courses of 20 facial features from around the mouth and chin, along with three predictors of head position." While a social mimicry experiment is described, there is no concrete access information (link, DOI, repository, or formal citation with authors/year) for the dataset used in this application.
Dataset Splits | Yes | "To assess out of sample prediction error, we randomly assign 90% of the time-points to the training set and the remaining 10% to the test set." Thus, the training set contains a randomly selected 90% of the columns of the data, and the remaining 10% of the columns form the test set.
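The random 90/10 column split described above can be sketched as follows; the matrix dimensions are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired-series data: p = 23 features observed at T = 500
# time-points, stored as a p x T matrix (columns index time-points).
p, T = 23, 500
Y = rng.standard_normal((p, T))

# Randomly assign 10% of the time-points (columns) to the test set
# and the remaining 90% to the training set.
n_test = int(round(0.1 * T))
perm = rng.permutation(T)
test_idx = np.sort(perm[:n_test])
train_idx = np.sort(perm[n_test:])

Y_train = Y[:, train_idx]  # randomly selected 90% of the columns
Y_test = Y[:, test_idx]    # held-out 10% for out-of-sample prediction error
```

Sorting the index arrays preserves the temporal ordering within each subset, which matters when the columns are time-points of a series rather than exchangeable observations.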
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts, or other machine specifications) used for running its experiments. It mentions that the social mimicry experiment occurred over Skype, but this is unrelated to the computational hardware.
Software Dependencies | No | The paper mentions using "Open Face software (Baltrusaitis et al., 2018)" and the "R function poly". However, no version numbers are provided for these or any other software components, which would be necessary for reproducibility.
Experiment Setup | Yes | The choices of hyperparameters are ω = 100 and α_{i1} = α_{i2} = 5 for i = 1, 2. The authors set K1 = K2 = J = K and fit the model for 4 different choices of K = 6, 8, 10, 12. The hyperparameters of the inverse-gamma priors for the variance components are all 0.1, which is weakly informative. They collect 6000 MCMC samples and treat the last 3000 as post burn-in samples for inference. The MCMC chain is started with the number of shared latent factors set to r = p as a very conservative upper bound. The number of leapfrog steps is kept fixed at 30, and the step-size parameter is tuned to maintain an acceptance rate within the range of 0.6 to 0.8, with the adjustment made after every 100 iterations.
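The step-size tuning rule quoted above (adjust after every 100 iterations so the acceptance rate stays between 0.6 and 0.8) can be sketched as below. This is a minimal illustration: `run_hmc_block`, the multiplicative adjustment factor, and the starting step size are hypothetical placeholders, not details given in the paper.

```python
def adapt_step_size(run_hmc_block, eps0=0.01, n_blocks=60, block_len=100,
                    target=(0.6, 0.8), factor=1.1):
    """Sketch of blockwise HMC step-size tuning (details assumed, not from
    the paper): after each block of `block_len` iterations, shrink the step
    size if the block's acceptance rate fell below target[0], grow it if
    the rate exceeded target[1], and leave it unchanged inside the band."""
    eps = eps0
    for _ in range(n_blocks):
        # run_hmc_block runs `block_len` HMC iterations at step size `eps`
        # and returns the fraction of proposals accepted in that block.
        acc_rate = run_hmc_block(eps, block_len)
        if acc_rate < target[0]:
            eps /= factor   # too many rejections: take smaller steps
        elif acc_rate > target[1]:
            eps *= factor   # acceptance too high: take larger steps
    return eps
```

In practice the acceptance rate decreases as the step size grows, so this rule drives the step size toward the band where the acceptance rate lands in [0.6, 0.8] and then holds it there.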