A Novel M-Estimator for Robust PCA
Authors: Teng Zhang, Gilad Lerman
JMLR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare our method with many other algorithms for robust PCA on synthetic and real data sets and demonstrate state-of-the-art speed and accuracy. Keywords: principal components analysis, robust statistics, M-estimator, iteratively re-weighted least squares, convex relaxation (...) 6. Numerical Experiments |
| Researcher Affiliation | Academia | Teng Zhang EMAIL Institute for Mathematics and its Applications University of Minnesota Minneapolis, MN 55455, USA Gilad Lerman EMAIL School of Mathematics University of Minnesota Minneapolis, MN 55455, USA |
| Pseudocode | Yes | Algorithm 1 The Geometric Median Subspace Algorithm (...) Algorithm 2 Practical and Regularized Minimization of (4) (...) Algorithm 3 The Extended Geometric Median Subspace Algorithm |
| Open Source Code | Yes | Supp. webpage: http://www.math.umn.edu/~lerman/gms. |
| Open Datasets | Yes | We used the images of the first two persons in the extended Yale face database B (Lee et al., 2005) (...) We consider the following two videos used by Candès et al. (2011): Lobby in an office building with switching on/off lights and Shopping center from http://perception.i2r.a-star.edu.sg/bk_model/bk_index.html. |
| Dataset Splits | No | The paper describes generating synthetic data with N1 inliers and N0 outliers, and using full real-world datasets (Yale Face database, video frames) for evaluation, but does not specify explicit training/test/validation splits or cross-validation setups for reproducing experiments. |
| Hardware Specification | Yes | We ran the experiments on a computer with Intel Core 2 CPU at 2.66GHz and 2 GB memory. (...) We used a computer with Intel Core 2 Quad Q6600 2.4GHz and 8 GB memory due to the large size of these data. |
| Software Dependencies | No | The paper lists specific codes and implementations of various algorithms (OP, HR-PCA, MKF, PCP, MDR, LLD) and states their own code will appear on a supplemental webpage, but it does not provide specific version numbers for programming languages, libraries, or other software dependencies. |
| Experiment Setup | Yes | We fix the regularization parameter to be smaller than the rounding error, that is, δ = 10^-20 (...) For LLD, OP and PCP we set the mixture parameter λ as 0.8/√max(D, N) respectively (following the suggestions of McCoy and Tropp (2011) for LLD/OP and Candès et al. (2011) for PCP). These choices of parameters are also used in experiments with real data sets in Sections 6.7 and 6.8. For the common M-estimator, we used u(x) = 2 max(ln(x)/x, 10^-30) and the algorithm discussed by Kent and Tyler (1991). |
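The pseudocode and setup rows above reference the paper's regularized IRLS minimization (Algorithm 2), which minimizes Σ_i ||Q x_i|| over symmetric Q with tr(Q) = 1 and recovers the robust subspace from the bottom eigenvectors of the minimizer. Below is a minimal sketch of that iteration, not the authors' released implementation; the function name, defaults (`delta`, `n_iter`), and synthetic-data usage are illustrative assumptions.

```python
import numpy as np

def gms_irls(X, delta=1e-10, n_iter=50):
    """IRLS sketch in the spirit of the paper's Algorithm 2.

    Approximately minimizes sum_i ||Q x_i|| over symmetric Q with tr(Q) = 1.
    X is an (N, D) data matrix; delta is a regularization floor on the
    residual norms (the paper uses 10^-20; this default is hypothetical).
    """
    N, D = X.shape
    Q = np.eye(D) / D  # feasible start: symmetric, tr(Q) = 1
    for _ in range(n_iter):
        # weights 1 / max(||Q x_i||, delta) down-weight points with large
        # residuals, giving the robustness of the M-estimator
        w = 1.0 / np.maximum(np.linalg.norm(X @ Q.T, axis=1), delta)
        C = (X * w[:, None]).T @ X      # reweighted second-moment matrix
        Q_new = np.linalg.inv(C)
        Q = Q_new / np.trace(Q_new)     # renormalize so tr(Q) = 1
    return Q

# Illustrative use: the d-dimensional robust subspace is spanned by the
# d eigenvectors of Q with the smallest eigenvalues.
def robust_subspace(X, d, **kw):
    Q = gms_irls(X, **kw)
    _, V = np.linalg.eigh(Q)  # eigenvalues ascending
    return V[:, :d]
```

The trace normalization keeps each iterate in the feasible set, and the floor `delta` prevents the weights from diverging when a residual `||Q x_i||` vanishes, mirroring the role of the regularization parameter quoted in the setup row.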