Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline and validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Optimal Convergence Rates for Convex Distributed Optimization in Networks
Authors: Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié
JMLR 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This work proposes a theoretical analysis of distributed optimization of convex functions using a network of computing units. We investigate this problem under two communication schemes (centralized and decentralized) and four classical regularity assumptions: Lipschitz continuity, strong convexity, smoothness, and a combination of strong convexity and smoothness. Under the decentralized communication scheme, we provide matching upper and lower bounds of complexity along with algorithms achieving this rate up to logarithmic constants. |
| Researcher Affiliation | Collaboration | Kevin Scaman EMAIL Huawei Noah's Ark Lab, Paris, France; Francis Bach EMAIL INRIA, École Normale Supérieure, PSL Research University, Paris, France; Sébastien Bubeck EMAIL Microsoft Research, Redmond, United States; Yin Tat Lee EMAIL University of Washington, Seattle, United States; Laurent Massoulié EMAIL MSR-INRIA Joint Center, Paris, France |
| Pseudocode | Yes | Algorithm 1: distributed randomized smoothing (convex case); Algorithm 2: distributed randomized smoothing (strongly convex case); Algorithm 3: multi-step primal-dual algorithm |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of source code for the described methodology. |
| Open Datasets | No | The paper is a theoretical analysis of optimization algorithms and does not involve experiments on specific datasets. Therefore, no information about open datasets is provided. |
| Dataset Splits | No | The paper is a theoretical analysis and does not describe any experimental setup involving datasets or their splits. |
| Hardware Specification | No | The paper focuses on theoretical analysis and algorithm design for distributed optimization; it does not describe any specific hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and presents algorithms without discussing specific software implementations or their versioned dependencies. |
| Experiment Setup | No | The paper focuses on theoretical convergence rates and algorithm design. It does not include an experimental section with specific hyperparameters or system-level training settings. |
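To make the decentralized setting classified above concrete, the sketch below runs plain decentralized gradient descent with gossip averaging on a ring of five nodes, each holding a local quadratic. This is an illustrative toy only: it is not the paper's multi-step primal-dual algorithm or its randomized-smoothing schemes, and all names and parameters (`W`, `step`, the targets `b`) are assumptions chosen for the example.

```python
import numpy as np

# Toy decentralized gradient descent on a ring of 5 nodes.
# Node i holds f_i(x) = 0.5 * (x - b[i])^2, so the minimizer of the
# average objective (1/n) * sum_i f_i is mean(b) = 3.0.
# NOTE: illustrative sketch of the decentralized communication scheme
# only, NOT the algorithms analyzed in the paper.

n = 5
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # local targets; consensus optimum 3.0

# Doubly stochastic gossip matrix for a ring: each node averages
# half of its own value with a quarter from each neighbour.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)   # each node's local estimate of the minimizer
step = 0.01       # small constant step keeps the consensus bias small
for _ in range(2000):
    grad = x - b              # local gradients of the quadratics
    x = W @ x - step * grad   # gossip-average, then local gradient step

print(np.round(x, 3))  # all entries settle near the consensus optimum 3.0
```

With a constant step size the iterates converge to a fixed point slightly biased away from exact consensus; shrinking `step` (or using a diminishing schedule) trades speed for accuracy, which is the kind of trade-off the paper's optimal-rate analysis makes precise.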