Decentralized Dictionary Learning Over Time-Varying Digraphs

Authors: Amir Daneshmand, Ying Sun, Gesualdo Scutari, Francisco Facchinei, Brian M. Sadler

JMLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This paper introduces, analyzes, and numerically tests the first provably convergent distributed method for a fairly general class of Dictionary Learning (DL) problems.
Researcher Affiliation | Academia | Amir Daneshmand (EMAIL), Ying Sun (EMAIL), Gesualdo Scutari (EMAIL), School of Industrial Engineering, Purdue University, West Lafayette, IN, USA; Francisco Facchinei (EMAIL), Department of Computer, Control, and Management Engineering, University of Rome La Sapienza, Rome, Italy; Brian M. Sadler (EMAIL), U.S. Army Research Laboratory, Adelphi, MD, USA
Pseudocode | Yes | Algorithm 1: Decentralized Dictionary Learning over Dynamic Digraphs (D4L); Algorithm 2: Prox-PDA-IP algorithm (Zhao et al., 2016)
Open Source Code | No | The paper mentions: "All codes are written in MATLAB 2016b". It also mentions using third-party packages such as the "KSVD-Box v13 package" and "MATLAB code provided by the authors" for comparative algorithms. However, there is no explicit statement or link indicating that the authors' own implementation of the D4L method is publicly available.
Open Datasets | Yes | We consider denoising a 512 × 512 pixel image of a fishing boat (USC, 1997)... MIT-CBCL face database #1 (Sung, 1996): a pool of N = 2,429 vectorized face images... The VOC 2006 database (Everingham et al., 2010): a pool of N = 10,000 vectorized natural image patches... The genetic data is borrowed from (Lee et al., 2010).
Dataset Splits | No | The paper describes how the data is distributed among the nodes (agents) for the decentralized learning task (e.g., "The data matrix is equally distributed across the 150 nodes"). However, it does not provide train/test/validation splits in the traditional machine-learning sense for model evaluation or reproduction of results.
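The quoted setup, "the data matrix is equally distributed across the 150 nodes", can be illustrated with a minimal sketch. This is not the authors' code (their implementation is in MATLAB); the function name and matrix sizes below are illustrative assumptions, with one sample per column as is standard in dictionary learning.

```python
import numpy as np

# Hypothetical sketch: evenly partition the columns of a data matrix Y
# (one vectorized sample per column) across a number of agents, so each
# node holds a near-equal share of the samples.
def distribute_columns(Y, num_agents):
    """Return a list of column blocks, one per agent; sizes differ by at most 1."""
    return np.array_split(Y, num_agents, axis=1)

Y = np.random.randn(64, 300)         # e.g., 300 vectorized patches of dimension 64
blocks = distribute_columns(Y, 150)  # each of the 150 agents holds 2 columns
```

Stacking the blocks back together recovers the original matrix, so the partition loses no data.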
Hardware Specification | Yes | All codes are written in MATLAB 2016b, and implemented on a computer with an Intel Xeon (E5-1607 v3) quad-core 3.10 GHz processor and 16.0 GB of DDR4 main memory.
Software Dependencies | Yes | All codes are written in MATLAB 2016b...
Experiment Setup | Yes | The free parameters λ, µ and α in (3) are set to λ = 1/s, µ = λ, and α = 1, respectively. ... γ^ν = γ^{ν−1}(1 − ϵ γ^{ν−1}), with γ^0 = 0.5 and ϵ = 10^{−2}; τ^ν_{D,i} = 10; and τ^ν_{X,i} = max(L_{X_i}(U^ν_{(i)}), 1) [cf. (24)]. ... A diminishing step-size is used, set to γ^r = γ^{r−1}(1 − ϵ γ^{r−1}), where γ^0 = 0.9, ϵ = 10^{−3}, and r denotes the inner iteration index. A warm start is used for the subgradient algorithm: the initial points are set to X^ν_i, where ν is the iteration index of the outer loop.
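The diminishing step-size recursion quoted above, γ^k = γ^{k−1}(1 − ϵ γ^{k−1}), can be sketched as follows. This is a minimal Python illustration under the constants quoted from the paper (the original implementation is in MATLAB); the function name is an assumption for illustration only.

```python
# Hypothetical sketch of the diminishing step-size rule from the experiments:
# gamma^k = gamma^{k-1} * (1 - eps * gamma^{k-1}).
def step_sizes(gamma0, eps, num_iters):
    """Generate the first num_iters step sizes of the recursion."""
    gammas = [gamma0]
    for _ in range(num_iters - 1):
        g = gammas[-1]
        gammas.append(g * (1.0 - eps * g))
    return gammas

# Outer-loop schedule quoted from the paper: gamma^0 = 0.5, eps = 1e-2.
outer = step_sizes(0.5, 1e-2, 100)
```

Since 0 < ϵγ^k < 1 for these constants, the sequence stays positive and decreases monotonically, which is the behavior a diminishing step-size rule is meant to provide.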