Decentralized Sparse Linear Regression via Gradient-Tracking
Authors: Marie Maros, Gesualdo Scutari, Ying Sun, Guang Cheng
JMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section provides some numerical results that validate our theoretical findings. We consider the following problem setup. |
| Researcher Affiliation | Academia | Marie Maros (EMAIL), Wm Michael Barnes '64 Department of Industrial & Systems Engineering, Texas A&M University, College Station, TX 77843, USA; Gesualdo Scutari (EMAIL), School of Industrial Engineering and School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA; Ying Sun (EMAIL), School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA; Guang Cheng (EMAIL), Department of Statistics, University of California Los Angeles, Los Angeles, USA. |
| Pseudocode | No | The paper describes the algorithm steps using mathematical equations (3a) and (3b) within the text, but it does not present them in a clearly labeled 'Pseudocode' or 'Algorithm' block format. |
| Open Source Code | No | The paper does not explicitly state that source code for the methodology described is publicly available, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We test the DGT on the Communities and Crime data set (UC Irvine Machine Learning Repository) where we have removed data points with missing attributes (covariates), and removed attributes corresponding to community number/name or zip-code (attributes 1 to 4 in the data set) yielding a regression problem with d = 123 and total sample size of 123 which we have split into Ntrain = 82 and Ntest=41. |
| Dataset Splits | Yes | We test the DGT on the Communities and Crime data set (UC Irvine Machine Learning Repository) ... yielding a regression problem with d = 123 and total sample size of 123 which we have split into Ntrain = 82 and Ntest=41. |
| Hardware Specification | No | The paper discusses numerical simulations and experiments but does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run these experiments. |
| Software Dependencies | No | The paper describes algorithms and their performance but does not specify any software dependencies with version numbers (e.g., programming language, libraries, or frameworks). |
| Experiment Setup | Yes | The training error is measured as $\frac{1}{2N_{\text{train}}}\|X_{\text{train}}\theta^t - y_{\text{train}}\|_2^2$, where $\theta^t$ are iterates of PGD generated by minimizing the function $\frac{1}{2N_{\text{train}}}\|X_{\text{train}}\theta - y_{\text{train}}\|_2^2$ over the set $\{\theta : \|\theta\|_1 \le 0.85\}$ using step-size 0.09. ... The right panel reports the performance of DGT with step-size set to 0.05. |
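The PGD baseline quoted above (least-squares loss minimized over the $\ell_1$-ball of radius 0.85, step-size 0.09) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the projection routine is the standard sorting-based method of Duchi et al. (2008), and the data here is synthetic placeholder data rather than the Communities and Crime set used in the paper.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto {x : ||x||_1 <= radius}
    (sorting-based method of Duchi et al., 2008)."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
    cssv = np.cumsum(u) - radius          # shifted cumulative sums
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > cssv)[0][-1]
    tau = cssv[rho] / (rho + 1.0)         # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pgd_lasso(X, y, radius, step, iters):
    """Projected gradient descent on f(theta) = ||X theta - y||_2^2 / (2 n)
    subject to ||theta||_1 <= radius."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / n
        theta = project_l1_ball(theta - step * grad, radius)
    return theta

# Synthetic stand-in matching the quoted dimensions (Ntrain = 82, d = 123).
rng = np.random.default_rng(0)
X = rng.standard_normal((82, 123))
beta = np.zeros(123)
beta[:3] = [0.4, -0.3, 0.15]              # sparse ground truth, ||beta||_1 = 0.85
y = X @ beta + 0.01 * rng.standard_normal(82)
theta_hat = pgd_lasso(X, y, radius=0.85, step=0.09, iters=500)
```

Every iterate stays feasible by construction, so the quoted training-error curve is measured along a path of vectors with $\|\theta^t\|_1 \le 0.85$.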