High-dimensional quantile tensor regression

Authors: Wenqi Lu, Zhongyi Zhu, Heng Lian

JMLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The numerical performances are demonstrated via simulations and an application to a crowd density estimation problem. In Section 4, we investigate the finite sample properties via simulation studies, and Section 5 presents an application of the proposed method in a crowd density estimation problem.
Researcher Affiliation | Academia | Wenqi Lu EMAIL School of Management, Fudan University, Shanghai, China and Department of Mathematics, City University of Hong Kong, Hong Kong, China; Zhongyi Zhu EMAIL School of Management, Fudan University, Shanghai, China; Heng Lian EMAIL Department of Mathematics, City University of Hong Kong, Hong Kong, China
Pseudocode | Yes | Algorithm 1: Alternating update algorithm for low-dimensional quantile regression; Algorithm 2: Alternating update algorithm for high-dimensional quantile regression; Algorithm 3: ADMM for sparse and orthogonal regression
Open Source Code | Yes | Our implementation of the algorithm based on R can be obtained from https://github.com/WenqiLu/QuantileTensorReg.
Open Datasets | Yes | In this section, we show the effectiveness of the proposed method by performing experiments on the PETS 2009 dataset (http://www.cvg.reading.ac.uk/PETS2009).
Dataset Splits | Yes | We fit a quantile regression model using 662 images from timestamps 13-57, 13-59 and the first 200 images with timestamp 14-03. The remaining 213 images of 14-03 are used as the test set.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (CPU, GPU models, etc.) used for running its experiments. It mentions that the TensorLy library allows models to be run on multiple CPU and GPU machines but does not state these were used for their experiments.
Software Dependencies | No | The paper mentions that its implementation is based on R but does not list specific package versions or other software dependencies.
Experiment Setup | Yes | To select the ranks in the proposed method, we use the following BIC: BIC = log( (1/n) Σ_{i=1}^n ρ_τ(y_i − ⟨Â, X_i⟩) ) + df · log(n)/n, where the degrees of freedom (df) is defined as df = r_1 r_2 ⋯ r_K + Σ_{k=1}^K r_k(p_k − r_k). For this experiment, we take ranks (r_1, r_2, r_3) = (2, 2, 2), and consider larger dimensions (p_1, p_2, p_3) = (10, 10, 10), (20, 20, 20) and (100, 100, 3) and smaller sample sizes n = 200, 400, 800. Ranks are chosen from all combinations of ranks up to 4 (except r_3 is of course bounded by 3). Since the dimension of mode-3 is small, we only impose penalty on factor matrices U_1 and U_2. The tuning parameter and ranks were set by 10-fold cross-validation, resulting in low-rank estimators with rank (3, 3, 3).
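The BIC criterion above can be sketched in a few lines. This is a minimal NumPy illustration of the formula only (the paper's implementation is in R; the function names, and evaluating the criterion from a vector of precomputed residuals y_i − ⟨Â, X_i⟩, are assumptions for illustration):

```python
import numpy as np

def quantile_loss(u, tau):
    # Check loss rho_tau(u) = u * (tau - 1{u < 0}).
    return u * (tau - (u < 0))

def bic(residuals, tau, ranks, dims):
    # BIC = log( (1/n) sum_i rho_tau(residual_i) ) + df * log(n) / n,
    # with df = r_1 * ... * r_K + sum_k r_k * (p_k - r_k)  (Tucker ranks, mode dims).
    n = len(residuals)
    df = np.prod(ranks) + sum(r * (p - r) for r, p in zip(ranks, dims))
    return np.log(np.mean(quantile_loss(residuals, tau))) + df * np.log(n) / n
```

For example, with ranks (2, 2, 2) and dimensions (10, 10, 10) as in the simulation setting, df = 2·2·2 + 3·2·(10 − 2) = 56; the rank combination minimizing this criterion over the candidate grid would be selected.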