Learning Transformations for Clustering and Classification

Authors: Qiang Qiu, Guillermo Sapiro

JMLR 2015

Reproducibility Variable — Result — LLM Response

Research Type — Experimental
Extensive experiments using public data sets are presented, showing that the proposed approach significantly outperforms state-of-the-art methods for subspace clustering and classification.

Researcher Affiliation — Academia
Qiang Qiu EMAIL, Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA; Guillermo Sapiro EMAIL, Department of Electrical and Computer Engineering, Department of Computer Science, Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA.

Pseudocode — Yes
Algorithm 1: Learning a robust subspace clustering (LRSC) framework. Algorithm 2: The Concave-Convex Procedure (CCCP). Algorithm 3: An approach to evaluate a subgradient of matrix nuclear norm.

Open Source Code — No
The paper does not provide concrete access to source code for the methodology described. It mentions adopting third-party implementations for comparison (e.g., LSA, SSC, LBF) but does not state that its own code is released.

Open Datasets — Yes
This section first presents experimental evaluations on subspace clustering using three public data sets (standard benchmarks): the MNIST handwritten digit data set, the Extended Yale B face data set (Georghiades et al., 2001) and the Hopkins 155 database of motion segmentation. The Hopkins 155 database of motion segmentation, which is available at http://www.vision.jhu.edu/data/hopkins155, contains 155 video sequences along with extracted feature trajectories... This section then presents experimental evaluations on classification using two public face data sets: the CMU PIE data set (Sim et al., 2003) and the Extended Yale B data set.

Dataset Splits — Yes
We split the data set into two halves by randomly selecting 32 lighting conditions for training, and the other half for testing... In this experiment, we classify 68 subjects in three poses, frontal (c27), side (c05), and profile (c22), under lighting condition 12. We use the remaining poses as the training data... We use 68 subjects in 5 poses, c22, c37, c27, c11 and c34, under 21 illumination conditions for training; and classify 68 subjects in 4 poses, c02, c05, c29 and c14, under 21 illumination conditions.

Hardware Specification — No
The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used to run its experiments. It mentions using "a GPU" in Appendix C, but without further specification.

Software Dependencies — No
The paper mentions several algorithms and implementations adopted from other works (e.g., SSC (Elhamifar and Vidal, 2013), LSA (Yan and Pollefeys, 2006), LBF (Zhang et al., 2012)), but does not specify version numbers for these or for any other software dependencies used in its own experiments.

Experiment Setup — Yes
We set the sparsity value K = 6 for R-SSC, and perform 100 iterations for the subgradient updates while learning the transformation on subspaces. The subgradient update step was ν = 0.02... We set the sparsity value K = 10 for R-SSC, and perform 100 iterations for the subgradient descent updates while learning the transformation... A sparsity value 10 is used here for OMP.
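The Extended Yale B protocol quoted under Dataset Splits (a random half of the lighting conditions for training, the other half for testing) amounts to a simple random partition. A minimal sketch, assuming 64 lighting conditions per subject; the seed and variable names are illustrative, not from the paper:

```python
import numpy as np

# Extended Yale B: 64 lighting conditions per subject; the paper
# trains on a randomly chosen half. Seed is illustrative only.
rng = np.random.default_rng(0)
perm = rng.permutation(64)
train_conditions = np.sort(perm[:32])  # 32 conditions for training
test_conditions = np.sort(perm[32:])   # remaining 32 for testing
```

The two index sets are disjoint by construction and together cover all 64 conditions.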
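Algorithm 3 of the paper evaluates a subgradient of the matrix nuclear norm, which the reported setup then uses in 100 subgradient updates with step ν = 0.02. The standard SVD-based construction can be sketched as follows; this is the textbook subgradient, not necessarily a transcription of the paper's exact procedure, and the function name is ours:

```python
import numpy as np

def nuclear_norm_subgradient(X, tol=1e-10):
    """Return one subgradient of the nuclear norm ||X||_* at X.

    For X = U diag(s) V^T, the matrix U1 @ V1^T, restricted to the
    singular vectors whose singular values exceed tol, is a valid
    subgradient. Standard construction; the paper's Algorithm 3
    may differ in details.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > tol))      # numerical rank of X
    return U[:, :r] @ Vt[:r, :]
```

A quick sanity check: for any X, this subgradient G satisfies ⟨G, X⟩ = ||X||_* (the sum of the singular values), which follows directly from the SVD.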
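The setup row also mentions sparse coding with OMP at sparsity 10. A minimal greedy Orthogonal Matching Pursuit over a column-normalized dictionary can be sketched as below; the function and variable names are ours, and a production pipeline would more likely use an optimized implementation such as scikit-learn's OrthogonalMatchingPursuit:

```python
import numpy as np

def omp(D, y, K):
    """Greedy OMP: approximate y with at most K atoms (columns) of D.

    Assumes the columns of D are (roughly) unit-normalized, as is
    usual for OMP. Minimal sketch of the sparse-coding step; the
    paper uses sparsity K = 10 at this point.
    """
    residual = y.copy()
    support = []
    for _ in range(K):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients by least squares on the chosen atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

For example, with an identity dictionary and a 1-sparse signal, a single OMP iteration recovers the signal exactly.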