Complete Dictionary Learning via L4-Norm Maximization over the Orthogonal Group

Authors: Yuexiang Zhai, Zitong Yang, Zhenyu Liao, John Wright, Yi Ma

JMLR 2020

Reproducibility assessment: each variable is listed with its result and the supporting LLM response (quotes are from the paper).
Research Type: Experimental
"In addition to strong theoretical guarantees, experiments show that the new algorithm is significantly more efficient and effective than existing methods, including KSVD and ℓ1-based methods. Preliminary experimental results on mixed real imagery data clearly demonstrate advantages of so learned dictionary over classic PCA bases." ... "Extensive simulations suggest that the MSP algorithm converges globally to the correct solution under broad conditions." ... "In Section 5, we conduct extensive experiments to show effectiveness and efficiency of our method, by comparing with the state of the art."
Researcher Affiliation: Collaboration
Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720-1770; Kuaishou Technology; Department of Electrical Engineering, Columbia University, New York, NY 10027
Pseudocode: Yes
"Algorithm 1 MSP for ℓ4-Maximization over O(n; R)" ... "Algorithm 2 MSP for ℓ4-Maximization based Dictionary Learning"
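For context, the core MSP ("matching, stretching, projection") update of Algorithm 1 ascends the objective ||AY||_4^4 over the orthogonal group: form the (scaled) gradient (AY)^{∘3} Y^T, then project it back onto O(n) via the polar decomposition. Below is a minimal NumPy sketch of that iteration under the standard Bernoulli-Gaussian synthetic model; the dimensions and iteration count are illustrative choices, not the paper's.

```python
import numpy as np

def msp_step(A, Y):
    """One MSP iteration: matching/stretching builds the (scaled) gradient
    of ||A Y||_4^4; projection maps it to the nearest orthogonal matrix."""
    G = (A @ Y) ** 3 @ Y.T           # elementwise cube ("stretch"), then match with Y
    U, _, Vt = np.linalg.svd(G)      # polar projection onto O(n)
    return U @ Vt

rng = np.random.default_rng(0)
n, p, theta = 20, 5000, 0.3                                      # illustrative sizes
D = np.linalg.qr(rng.standard_normal((n, n)))[0]                 # ground-truth orthogonal dictionary
X = (rng.random((n, p)) < theta) * rng.standard_normal((n, p))   # Bernoulli-Gaussian sparse codes
Y = D @ X                                                        # observations
A = np.linalg.qr(rng.standard_normal((n, n)))[0]                 # random orthogonal init
for _ in range(30):
    A = msp_step(A, Y)
# On success, A @ D is close to a signed permutation matrix.
```

Each step stays exactly on the orthogonal group by construction, which is why the method needs no step size or retraction tuning.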
Open Source Code: No
The paper provides neither a link to a source-code repository nor an explicit statement that an implementation of the described method has been released. It only links to a license for the paper itself and discusses third-party tools.
Open Datasets: Yes
"we test our algorithm on the MNIST dataset of hand-written digits (Le Cun et al., 1998)" ... "can be extend[ed] to other large scale visual dataset[s] such as CIFAR10 (Krizhevsky et al., 2014; Zhai et al., 2019)."
Dataset Splits: No
The paper states that "The training set of MNIST contains 50,000 images of size 28x28" and varies the sample size p for synthetic data, but it gives no percentages or absolute counts for training/validation/test splits used to evaluate the dictionary learning method, nor does it reference predefined splits beyond using the MNIST training set as input.
Hardware Specification: Yes
"All experiments are averaged among 5 trials and conducted on a 2.7 GHz Intel Core i5 processor (CPU of a 13-inch MacBook Pro 2015)."
Software Dependencies: No
The paper does not give version numbers for any software, libraries, or programming languages used in its implementation. It mentions third-party algorithms and packages such as KSVD, SPAMS, and a subgradient method, but not its own dependencies.
Experiment Setup: Yes
"Figure 5(a) presents one trial of the proposed MSP Algorithm 2 for dictionary learning with θ = 0.3, n = 50, and p = 20,000." ... "Table 2: Comparison experiments with KSVD": (a) n = 25, p = 1×10⁴, θ = 0.3; (b) n = 50, p = 2×10⁴, θ = 0.3; (c) n = 100, p = 4×10⁴, θ = 0.3; (d) n = 200, p = 4×10⁴, θ = 0.3; (e) n = 400, p = 16×10⁴, θ = 0.3.
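To make the quoted setup concrete: synthetic data in this line of work is conventionally drawn from a Bernoulli-Gaussian model, i.e. an orthogonal dictionary applied to sparse codes whose entries are nonzero with probability θ. The sketch below generates data at the Figure 5(a) scale (θ = 0.3, n = 50, p = 20,000); this sampling procedure is the standard model for the problem, not code quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, theta = 50, 20_000, 0.3                      # Figure 5(a) parameters
D = np.linalg.qr(rng.standard_normal((n, n)))[0]   # orthogonal ground-truth dictionary
support = rng.random((n, p)) < theta               # Bernoulli(theta) support pattern
X = support * rng.standard_normal((n, p))          # Bernoulli-Gaussian sparse codes
Y = D @ X                                          # p observed samples in R^n
print(Y.shape, support.mean())                     # empirical sparsity should be near 0.3
```

With n·p = 10⁶ support entries, the empirical sparsity concentrates tightly around θ, matching the regime in which the paper's Table 2 comparisons are run.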