Simultaneous Pursuit of Sparseness and Rank Structures for Matrix Decomposition

Authors: Qi Yan, Jieping Ye, Xiaotong Shen

JMLR 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical examples are given to illustrate these aspects, in addition to an application to facial image recognition and multiple time series analysis. Keywords: blockwise descent, nonconvex minimization, matrix decomposition, structure pursuit
Researcher Affiliation | Academia | Qi Yan, School of Statistics, University of Minnesota, Minneapolis, MN 55414, USA; Jieping Ye, Computer Science and Engineering, Arizona State University, Tempe, AZ 85287, USA; Xiaotong Shen, School of Statistics, University of Minnesota, Minneapolis, MN 55414, USA
Pseudocode | Yes | Our algorithm for computing (4) is summarized. Step 1 (Initialization): supply a good initial estimate (Θ̂_1^(0), Θ̂_2^(0)) in (4) and specify a precision δ > 0. Step 2 (Iteration): at iteration m, update Θ̂_2^(m) in (7) with Θ_1 = Θ̂_1^(m−1); then update Θ̂_1^(m) in (6) with Θ_2 = Θ̂_2^(m). Step 3 (Stopping rule): terminate when |f(Θ̂_1^(m), Θ̂_2^(m)) − f(Θ̂_1^(m−1), Θ̂_2^(m−1))| ≤ δ, where f(Θ_1, Θ_2) = ‖Θ_1 + Θ_2 − Z‖_F². Let m* be the index at termination; the estimate is then (Θ̂_1^(m*), Θ̂_2^(m*)).
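The alternating scheme above can be illustrated with a minimal sketch. The paper's exact updates (6)–(7) are not reproduced in this report, so the sketch assumes, as an illustration only, that one block update is a rank truncation (truncated SVD) and the other is entrywise hard thresholding to a fixed number of nonzeros; the stopping rule and the zero-matrix initialization follow the quoted text.

```python
import numpy as np

def decompose(Z, rank, card, delta=1e-6, max_iter=100):
    """Alternating blockwise descent sketch for Z ~ Theta1 + Theta2.

    Theta1: low-rank block (rank <= `rank`), updated by truncated SVD.
    Theta2: sparse block (<= `card` nonzeros), updated by keeping the
    largest-magnitude entries of the residual. These stand-in updates
    are assumptions, not the paper's actual steps (6)-(7).
    """
    Theta1 = np.zeros_like(Z)  # zero-matrix initialization, as in the report
    Theta2 = np.zeros_like(Z)
    f_prev = np.inf
    for _ in range(max_iter):
        # Update Theta2 (sparse block) with Theta1 held fixed.
        R = Z - Theta1
        thresh = np.sort(np.abs(R), axis=None)[-card]
        Theta2 = np.where(np.abs(R) >= thresh, R, 0.0)
        # Update Theta1 (low-rank block) with Theta2 held fixed.
        U, s, Vt = np.linalg.svd(Z - Theta2, full_matrices=False)
        Theta1 = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Stopping rule: change in f(Theta1, Theta2) = ||Theta1 + Theta2 - Z||_F^2.
        f = np.linalg.norm(Theta1 + Theta2 - Z) ** 2
        if abs(f_prev - f) <= delta:
            break
        f_prev = f
    return Theta1, Theta2
```

Each block update can only decrease the objective f, so the sequence f(Θ̂_1^(m), Θ̂_2^(m)) is monotone and the stopping rule is eventually met.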
Open Source Code | No | In simulations, codes for ALM and GoDec from the authors' websites are used, and the initial values for Algorithms 1 and 2 are set to the zero matrix.
Open Datasets | Yes | For face image reconstruction, a subset of the AR Face Data is used for this experiment. The original image is available at http://www-prima.inrialpes.fr/FGnet/data/05-ARFace/markup_large.png. The multiple time series data are described in (Stock & Watson, 2012).
Dataset Splits | Yes | The training, tuning and testing data sizes are n, 4n and 2n. A one-step-ahead K-fold cross-validation (CV) criterion is used for tuning in the time series analysis (Arlot & Celisse, 2010). For each method, 10-fold cross-validation is employed.
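The one-step-ahead criterion quoted above can be sketched as follows. The forecaster itself is a placeholder (`fit_predict` is a hypothetical callable, not a routine from the paper): the sketch only shows the evaluation loop, in which the model is fit on the series up to time t and scored on its prediction of observation t.

```python
import numpy as np

def one_step_ahead_cv(y, fit_predict, min_train):
    """One-step-ahead CV error for a time series: fit on y[:t],
    predict y[t], and average the squared prediction errors.
    `fit_predict` is an assumed interface taking the history and
    returning a one-step forecast."""
    errs = []
    for t in range(min_train, len(y)):
        yhat = fit_predict(y[:t])          # forecast the next observation
        errs.append((y[t] - yhat) ** 2)    # squared one-step error
    return float(np.mean(errs))
```

Unlike ordinary K-fold CV, this loop never trains on observations that come after the one being predicted, which is what makes it appropriate for serially dependent data.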
Hardware Specification | No | The paper does not specify any hardware used for the experiments, such as CPU, GPU, or memory details; it only discusses the models and methods.
Software Dependencies | No | The paper mentions using 'codes for ALM and GoDec' and implementing the 'FISTA algorithm', but provides no version numbers for any software libraries, tools, or programming languages used.
Experiment Setup | Yes | The proposed method is trained on a training set, and the optimal tuning parameters, minimizing the mean squared prediction error over an independent tuning set, are obtained through a bisection search over integer values. For tuning, grid search is employed for GoDec in (18), with 1 ≤ s1 ≤ (p + k) and 1 ≤ s2 ≤ min(p, k, 50); λ is fixed at 1/√p for (19). For each method, grid search is performed for tuning, with s1 = (10, 20, 30, 50), 1 ≤ s2 ≤ min(p, k) = 26 and τ = (0.05, 0.1, 0.2). For tuning, the CV is optimized over a set of grids for s1 = (10, 20, 50, 100, 200), 1 ≤ s2 ≤ min(p, k) and τ = (0.02, 0.05, 0.1, 0.2).
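The grid-search tuning described above amounts to an exhaustive sweep over parameter combinations, keeping the one with the smallest tuning error. A minimal generic sketch, where `fit` and `predict_error` are hypothetical stand-ins for the method's training and tuning-set evaluation routines:

```python
import itertools
import numpy as np

def grid_search(fit, predict_error, grids):
    """Exhaustive grid search over tuning parameters.

    `grids` maps parameter names (e.g. s1, s2, tau) to candidate
    values, mirroring the grids quoted in the report. Returns the
    parameter combination minimizing the tuning-set error.
    """
    best, best_err = None, np.inf
    for combo in itertools.product(*grids.values()):
        params = dict(zip(grids.keys(), combo))
        err = predict_error(fit(params))   # error on the tuning set
        if err < best_err:
            best, best_err = params, err
    return best, best_err
```

For example, `grids = {"s1": [10, 20, 30, 50], "s2": range(1, 27), "tau": [0.05, 0.1, 0.2]}` reproduces the shape of one of the quoted grids; the bisection search over integer values mentioned for the proposed method is a cheaper alternative to this full sweep.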