Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

DGD^2: A Linearly Convergent Distributed Algorithm For High-dimensional Statistical Recovery

Authors: Marie Maros, Gesualdo Scutari

NeurIPS 2022 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To demonstrate the effectiveness of DGD^2, we conduct numerical experiments on various high-dimensional statistical recovery problems, including group Lasso, sparse logistic regression, and sparse covariance matrix estimation.
Researcher Affiliation | Academic | Marie Maros, Purdue University; Gesualdo Scutari, Purdue University
Pseudocode | Yes | Algorithm 1: DGD^2 for distributed statistical recovery
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | For sparse logistic regression, we downloaded the RCV1 dataset from LIBSVM Data [37]. For sparse covariance matrix estimation, we applied DGD^2 to the Million Song Dataset, which is obtained from the UCI Machine Learning Repository. (A loading sketch appears after the table.)
Dataset Splits | No | The paper does not provide specific details on training, validation, and test dataset splits (e.g., percentages, sample counts, or explicit splitting methodology) needed for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | Yes | All simulations are implemented in MATLAB R2018a.
Experiment Setup | Yes | Unless otherwise specified, for all experiments, the stepsize is set to γ_k = 0.001 and the regularization parameter to λ = 0.001. The maximum iteration number is 5000. (A generic iteration sketch using these values appears after the table.)
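
Both public datasets quoted in the table are obtainable programmatically. As a convenience, the sketch below pulls RCV1 through scikit-learn's built-in fetcher rather than the LIBSVM Data page the authors cite; it is the same underlying corpus, just a different host. The Million Song subset (YearPredictionMSD) must be downloaded separately from the UCI Machine Learning Repository.

```python
# Minimal sketch: fetch the RCV1 corpus via scikit-learn instead of the
# LIBSVM Data page used in the paper (same dataset, different host).
from sklearn.datasets import fetch_rcv1

rcv1 = fetch_rcv1()            # downloads and caches on first call
X, y = rcv1.data, rcv1.target  # sparse CSR: (804414, 47236) and (804414, 103)
print(X.shape, y.shape)
```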
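
Since the page quotes the hyperparameters and Algorithm 1's name but not the update rule itself, the following is a minimal Python sketch of a classical DGD-style decentralized proximal-gradient iteration for a Lasso objective, plugged with the reported stepsize γ = 0.001, regularization λ = 0.001, and 5000 iterations. It is not the paper's exact DGD^2 update (which also covers group Lasso, sparse logistic regression, and covariance estimation); the function names, the Lasso objective, and the ring mixing matrix are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise soft-thresholding: the proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def dgd_lasso(A_list, b_list, W, gamma=1e-3, lam=1e-3, max_iter=5000):
    # Generic decentralized proximal-gradient template (NOT the paper's exact
    # DGD^2 update): mix with neighbors, take a local gradient step, then prox.
    #   A_list[i], b_list[i] : local design matrix / responses held by agent i
    #   W                    : n x n doubly stochastic mixing (gossip) matrix
    n, d = len(A_list), A_list[0].shape[1]
    X = np.zeros((n, d))  # row i is agent i's current iterate
    for _ in range(max_iter):
        G = np.stack([A.T @ (A @ x - b)              # local LS gradients
                      for A, b, x in zip(A_list, b_list, X)])
        X = soft_threshold(W @ X - gamma * G, gamma * lam)
    return X.mean(axis=0)

# Toy usage on a 5-agent ring with a 5-sparse ground truth (illustrative only).
rng = np.random.default_rng(0)
n, m, d = 5, 40, 100
x_true = np.zeros(d)
x_true[:5] = 1.0
A_list = [rng.standard_normal((m, d)) for _ in range(n)]
b_list = [A @ x_true + 0.01 * rng.standard_normal(m) for A in A_list]
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):   # uniform averaging over each ring neighborhood
        W[i, j % n] = 1.0 / 3.0
x_hat = dgd_lasso(A_list, b_list, W)
print(np.linalg.norm(x_hat - x_true))
```

The ring matrix above is doubly stochastic by construction (each row and column sums to one), which is the standard requirement for consensus-based methods of this family; any connected-graph gossip matrix with that property could be substituted.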