Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Non-Convex Projected Gradient Descent for Generalized Low-Rank Tensor Regression
Authors: Han Chen, Garvesh Raskutti, Ming Yuan
JMLR 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We supplement our theoretical results with simulations which show that, under several common settings of generalized low rank tensor regression, the projected gradient descent approach is superior both in terms of statistical error and run-time provided the step-sizes of the projected descent algorithm are suitably chosen. |
| Researcher Affiliation | Academia | Han Chen (EMAIL), Garvesh Raskutti (EMAIL), Department of Statistics, University of Wisconsin–Madison, Madison, WI 53706, USA; Ming Yuan (EMAIL), Department of Statistics, Columbia University, New York, NY 10027, USA |
| Pseudocode | Yes | Algorithm 1 Projected Gradient Descent. 1: Input: data Y, X, parameter space Θ, iterations K, step size η. 2: Initialize: k = 0, T̂_0 ∈ Θ. 3: for k = 1, 2, . . . , K do 4: g_k = T̂_k − η∇f(T̂_k) (gradient step) 5: T̂_{k+1} = P_Θ(g_k) or T̂_{k+1} = P̂_Θ(g_k) ((approximate) projection step) 6: end for 7: Output: T̂_K |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code, a direct link to a code repository, or mention of code in supplementary materials. |
| Open Datasets | No | We first describe three different ways of generating random tensor coefficient T with different types of low tensor rank structure... Then we generate covariates {X_(i)}_{i=1}^n to be i.i.d. random matrices filled with i.i.d. N(0, 1) entries. Finally, we simulate three GLM models, the Gaussian linear model, logistic regression and Poisson regression as follows. |
| Dataset Splits | No | The paper describes generating synthetic data for simulations but does not specify any train/test/validation splits; rather, new data is generated for each simulation run for comparison of methods. |
| Hardware Specification | No | The paper does not provide any specific hardware details like GPU/CPU models, memory amounts, or detailed computer specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'a generic cvx solver' in Section 1 and Section 5.3, but does not provide any specific version numbers for this solver or any other software dependencies. |
| Experiment Setup | Yes | In all our simulations, the step-size η is set as a constant specified in each plot. In the first two cases (see cases below), PGD with approximate projection P̂_{Θ_3(r,r,r)} was applied with different choices of (r, η), while in the third case the PGD with exact projection P_{Θ_2(r,s)} was adopted with different choices of (r, s, η). |
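The projected gradient descent loop quoted in the Pseudocode row can be sketched in runnable form. This is a minimal matrix-case illustration, not the paper's general tensor implementation: it assumes a least-squares loss, uses truncated SVD as the exact rank-r projection P_Θ, and all function names and default values are illustrative.

```python
import numpy as np

def project_rank_r(T, r):
    """Exact projection onto matrices of rank <= r via truncated SVD
    (matrix analogue of the paper's projection step P_Theta)."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def pgd_low_rank(Y, X, r, eta, K):
    """Projected gradient descent for low-rank least-squares regression:
    minimize f(T) = (1/2n) * sum_i (y_i - <X_i, T>)^2 over rank-<=r T.
    Y: responses (length n); X: list of covariate matrices; eta: step size."""
    n = len(Y)
    T = np.zeros_like(X[0])
    for _ in range(K):
        # gradient of the squared loss: (1/n) * sum_i (<X_i, T> - y_i) X_i
        grad = sum((np.tensordot(Xi, T) - yi) * Xi for Xi, yi in zip(X, Y)) / n
        # gradient step followed by projection back onto the low-rank set
        T = project_rank_r(T - eta * grad, r)
    return T
```

On noiseless rank-1 data the iterates contract toward the true coefficient matrix, matching the linear-convergence behavior the paper reports in its simulations.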