Local Identification of Overcomplete Dictionaries

Authors: Karin Schnass

JMLR 2015

Reproducibility assessment (variable, result, LLM response):
Research Type: Experimental. "After showing that the optimization criterion in (4) is locally suitable for dictionary identification, in this section we present an iterative thresholding and K-means type algorithm (ITKM) to actually find the local maxima of (4) and conduct some experiments to illustrate the theoretical results. ... We will now conduct four experiments to illustrate our theoretical findings."
Researcher Affiliation: Academia. Karin Schnass, Department of Mathematics, University of Innsbruck, Technikerstraße 13, 6020 Innsbruck, Austria.
Pseudocode: No. The paper describes the ITKM algorithm and its update rule (14) textually and mathematically: $\psi_k^{\text{new}} = \lambda_k \sum_{n :\, k \in I(\Psi^{\text{old}}, y_n)} \operatorname{sign}(\langle \psi_k^{\text{old}}, y_n \rangle)\, y_n$, where $\lambda_k$ is a scaling parameter ensuring that $\|\psi_k^{\text{new}}\|_2 = 1$. However, it does not provide a clearly labeled pseudocode or algorithm block with structured steps.
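The quoted update rule (14) can be sketched in Python roughly as follows. This is a hedged illustration, not the paper's Matlab implementation: the thresholding operator $I(\Psi, y)$ is assumed to select the $S$ atoms with the largest absolute inner products, and the function name and signature are my own.

```python
import numpy as np

def itkm_step(Psi, Y, S):
    """One ITKM iteration, sketching update rule (14) (illustrative only).

    Psi : (d, K) current dictionary with unit-norm columns
    Y   : (d, N) matrix of training signals
    S   : sparsity level used by the thresholding operator
    """
    inner = Psi.T @ Y                                  # <psi_k, y_n> for all k, n
    # I(Psi, y_n): indices of the S atoms with largest |<psi_k, y_n>| (assumed)
    idx = np.argsort(-np.abs(inner), axis=0)[:S, :]
    Psi_new = np.zeros_like(Psi)
    for n in range(Y.shape[1]):
        for k in idx[:, n]:
            # accumulate sign(<psi_k, y_n>) * y_n over signals whose
            # thresholded support contains atom k
            Psi_new[:, k] += np.sign(inner[k, n]) * Y[:, n]
    # lambda_k: rescale each updated atom to unit norm (zero atoms left as-is)
    norms = np.linalg.norm(Psi_new, axis=0)
    return Psi_new / np.where(norms > 0, norms, 1.0)
```

The double loop keeps the correspondence to the summation in (14) explicit; a production version would vectorize the accumulation.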
Open Source Code: Yes. "A Matlab penknife (mini-toolbox) for playing around with ITKM and reproducing the experiments can be found at http://homepage.uibk.ac.at/~c7021041/ITKM.zip."
Open Datasets: No. "Our signal model further depends on four coefficient parameters... Given these parameters we choose a decay factor $c_b$ uniformly at random... We then choose a permutation $p$ and a sign sequence $\sigma$ uniformly at random and set $y = \Phi c_{p,\sigma}$, respectively $y = (\Phi c_{p,\sigma} + r)/\sqrt{1 + \|r\|_2^2}$, where $r$ is a Gaussian noise vector with variance $\rho^2$ if $\rho > 0$." (Table 1: Signal Model) "In our first experiment we compare the local recovery error of ITKM and K-SVD for 3-dimensional bases with increasing condition numbers... We generate N = 4096 approximately 1-sparse noiseless signals from the signal model described in Table 1."
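The quoted signal-generation step can be sketched as follows. Only the parts actually quoted above are implemented: the permutation, the sign sequence, and the (noisy) synthesis step. The construction of the decaying coefficient sequence $c$ from the decay factor $c_b$ is not fully quoted, so it is left to the caller; the function name is hypothetical.

```python
import numpy as np

def assemble_signal(Phi, c, rho=0.0, rng=None):
    """Form y = Phi c_{p,sigma} from a decaying coefficient sequence c (sketch).

    Phi : (d, K) generating dictionary
    c   : (K,) nonnegative, decaying coefficient sequence (construction from
          the decay factor c_b is assumed to happen elsewhere)
    rho : noise level; for rho > 0, y = (Phi c_{p,sigma} + r) / sqrt(1 + ||r||^2)
    """
    rng = np.random.default_rng() if rng is None else rng
    d, K = Phi.shape
    p = rng.permutation(K)                    # permutation p, uniform at random
    sigma = rng.choice([-1.0, 1.0], size=K)   # sign sequence, uniform at random
    c_ps = sigma * c[p]                       # (c_{p,sigma})_k = sigma_k * c_{p(k)}
    y = Phi @ c_ps
    if rho > 0:
        r = rng.normal(0.0, rho, size=d)      # Gaussian noise, variance rho^2
        y = (y + r) / np.sqrt(1.0 + r @ r)    # normalization from Table 1
    return y
```

For an orthonormal $\Phi$ and unit-norm $c$, the noiseless output has unit norm, matching the model's scaling.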
Dataset Splits: No. The paper describes generating synthetic data for its experiments based on a specified signal model (Table 1), for example, "We generate N = 4096 approximately 1-sparse noiseless signals". It does not use or specify any training, validation, or test splits for a pre-existing dataset.
Hardware Specification: No. The paper does not provide specific details about the hardware used for running its experiments, such as GPU or CPU models. It focuses on the algorithmic complexity and experimental results.
Software Dependencies: No. The paper mentions a "Matlab penknife (mini-toolbox)" in footnote 1, implying the use of Matlab; however, no version number for Matlab or any other software dependency is provided.
Experiment Setup: Yes. "We generate N = 4096 approximately 1-sparse noiseless signals from the signal model described in Table 1 with S = 1, T = 2, ρ = 0 and b = 0.1/0.2 and run both ITKM and K-SVD with 1000 iterations, sparsity parameter S = 1 and the true dictionary (basis) as initialization." (Section 6.1) "For every set of parameters d, S(T), b we generate N noiseless signals with N varying from 2^7 = 128 to 2^14 = 16384 and run ITKM with 1000 iterations, sparsity parameter S equal to the coefficient parameter S and the true dictionary as initialization." (Section 6.2)
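The noiseless S = 1 setup of Section 6.1 can be re-created in miniature. This is a sketch under simplifying assumptions: a random orthonormal basis stands in for the paper's bases with controlled condition numbers, exactly 1-sparse signals replace the approximately 1-sparse model, and far fewer than 1000 iterations are run.

```python
import numpy as np

# Miniature re-creation of a Section 6.1-style run (assumptions: orthonormal
# basis, exactly 1-sparse noiseless signals, reduced iteration count).
rng = np.random.default_rng(0)
d, N, iters = 3, 4096, 50
Phi = np.linalg.qr(rng.standard_normal((d, d)))[0]   # random orthonormal basis
atoms = rng.integers(0, d, size=N)
signs = rng.choice([-1.0, 1.0], size=N)
Y = Phi[:, atoms] * signs                            # N 1-sparse noiseless signals

Psi = Phi.copy()                                     # true dictionary as initialization
for _ in range(iters):
    inner = Psi.T @ Y                                # <psi_k, y_n>
    best = np.argmax(np.abs(inner), axis=0)          # thresholding with S = 1
    updates = np.sign(inner[best, np.arange(N)]) * Y # sign(<psi_k, y_n>) * y_n
    Psi_new = np.zeros_like(Psi)
    np.add.at(Psi_new.T, best, updates.T)            # accumulate per selected atom
    Psi = Psi_new / np.linalg.norm(Psi_new, axis=0)  # lambda_k normalization

# maximal per-atom deviation from the generating basis
recovery_error = np.max(np.linalg.norm(Psi - Phi, axis=0))
```

In this idealized setting the fixed point is the generating basis itself, so `recovery_error` stays at machine precision; the paper's experiments instead probe how the local maxima drift as condition number, sparsity, and noise grow.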