Bayesian Tensor Regression

Authors: Rajarshi Guhaniyogi, Shaan Qamar, David B. Dunson

JMLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Simulation studies illustrate substantial gains over existing tensor regression methods in terms of estimation and parameter inference. Our approach is further illustrated in a neuroimaging application. Various simulation studies with 2D and 3D tensor predictors are presented in Sections 6 and 7, respectively, to study the effectiveness of Bayesian tensor regression under various degrees of sparsity and signal strength. Section 8 is devoted to a real brain connectome data analysis using the proposed Bayesian tensor regression model along with its competitors.
Researcher Affiliation | Collaboration | Rajarshi Guhaniyogi, Department of Applied Mathematics & Statistics, University of California, Santa Cruz, CA 95064, USA; Shaan Qamar, Google Inc., Mountain View, CA 94043, USA; David B. Dunson, Department of Statistical Science, Duke University, Durham, NC 27708-0251, USA.
Pseudocode | Yes | Section 5.1 (Posterior Computation): The proposed multiway prior (9) leads to a Gibbs sampling scheme for most parameters of the tensor regression model (12). Marginalization and blocking are used to reduce autocorrelation for {β_j^(r), w_jr; 1 ≤ j ≤ D, 1 ≤ r ≤ R}, (Φ, τ), and (γ, σ), drawing in sequence from [α, Φ, τ | B, W], [B, W | Φ, τ, γ, σ, y], and [γ, σ | B, y] as follows: (1) sample [α, Φ, τ | B, W] compositionally... (2) sample from (β_j^(r), w_jr, λ_jr)... (3) sample [γ, σ | B, y]...
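The Gibbs updates above operate on the PARAFAC margins β_j^(r). As a minimal sketch of how those margins determine the coefficient tensor, the rank-R CP sum of outer products B = Σ_r β_1^(r) ∘ ... ∘ β_D^(r) can be written in NumPy (the function name and the p_j × R layout of the margin matrices are illustrative assumptions, not the paper's code):

```python
import numpy as np

def cp_tensor(margins):
    """Assemble B = sum_{r=1}^{R} beta_1^(r) o ... o beta_D^(r) from
    PARAFAC margins. `margins` is a list of D arrays, the j-th of shape
    (p_j, R); column r holds beta_j^(r). (Hypothetical helper, not the
    authors' implementation.)"""
    R = margins[0].shape[1]
    B = np.zeros([m.shape[0] for m in margins])
    for r in range(R):
        outer = margins[0][:, r]
        for m in margins[1:]:
            # extend the rank-1 component by one mode at a time
            outer = np.multiply.outer(outer, m[:, r])
        B += outer
    return B

# Rank-1, 2D example: B reduces to the outer product of the two margins.
B = cp_tensor([np.array([[1.0], [2.0], [3.0]]),
               np.array([[4.0], [5.0]])])
```

This low-rank structure is what makes the blocked updates over (β_j^(r), w_jr, λ_jr) tractable: each margin is far smaller than the full tensor B.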
Open Source Code | No | The paper provides no explicit statement of, or link to, its own code. Runtimes are reported only for non-optimized R code: the total runtime on an x86_64 Intel(R) Core(TM) i7-3770 is between 6.2 and 7.5 hours. The TensorReg toolbox used for FTR (Zhou et al., 2013) requires calibrating tuning-parameter values specific to each setting.
Open Datasets | Yes | Section 8 is devoted to a real brain connectome data analysis using the proposed Bayesian tensor regression model along with its competitors. The authors would like to thank Joshua T. Vogelstein from Johns Hopkins University for graciously allowing us to utilize their brain connectome dataset in the real data analysis of our proposed BTR method. We analyze data containing 3D MRI images for 550 adolescents...
Dataset Splits | Yes | To assess predictive performance, the sample of n = 109 individuals is divided into 10 folds. Both vectorized lasso and BTR are fitted on 9 folds as training data, with the remaining fold as the hold-out sample. This is carried out for each of the 10 folds, and predictive inferences are obtained for both vectorized lasso and BTR.
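The hold-out scheme quoted above can be sketched as follows. The fold assignment (random permutation, seed) is an assumption for illustration; the review only records that the n = 109 individuals are split into 10 folds:

```python
import numpy as np

def make_folds(n, k=10, seed=0):
    """Partition indices 0..n-1 into k roughly equal folds.
    (Hypothetical helper; the paper's actual fold assignment is not stated.)"""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

folds = make_folds(109)

# Each fold serves once as the hold-out sample; the other nine folds
# form the training set for both vectorized lasso and BTR.
splits = [
    (np.concatenate([f for j, f in enumerate(folds) if j != i]), folds[i])
    for i in range(len(folds))
]
```

With n = 109 and k = 10 the folds cannot be exactly equal; `np.array_split` leaves fold sizes differing by at most one.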
Hardware Specification | Yes | The total runtime using non-optimized R code on an x86_64 Intel(R) Core(TM) i7-3770 is between 6.2 and 7.5 hours.
Software Dependencies | No | The paper mentions non-optimized R code and the TensorReg toolbox used for FTR (Zhou et al., 2013), but no specific version numbers are provided for these software components.
Experiment Setup | Yes | By default, BTR uses R = 10 as an upper bound on the tensor PARAFAC rank... MCMC for BTR was run for 1300 iterations, with a 300-iteration burn-in and the remaining samples thinned by 5. In simulation studies, the tuning parameter in FTR is selected over a grid of values to minimize RMSE for the tensor predictor. From Lemma 5, setting a_λ = 3 and b_λ = a_λ^{1/(2D)} avoids an overly narrow variance of the induced prior on the tensor elements B_{i_1,...,i_D}.
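Under the reported MCMC settings (1300 iterations, 300-iteration burn-in, thinning by 5), the number of retained posterior draws follows directly. The sketch below assumes the conventional keep-every-fifth-iteration-after-burn-in indexing, which the review does not spell out:

```python
def retained_draws(n_iter=1300, burn_in=300, thin=5):
    """Indices of MCMC iterations kept after discarding the burn-in and
    thinning by `thin`. (Indexing convention assumed, not the authors'.)"""
    return list(range(burn_in, n_iter, thin))

kept = retained_draws()
n_kept = len(kept)  # (1300 - 300) / 5 draws remain for posterior inference
```

So under these assumptions, 200 posterior draws are available for estimating the BTR posterior summaries.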