The Error-Feedback Framework: SGD with Delayed Gradients

Authors: Sebastian U. Stich, Sai Praneeth Karimireddy

JMLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We analyze (stochastic) gradient descent (SGD) with delayed updates on smooth quasi-convex and non-convex functions and derive concise, non-asymptotic convergence rates. We show that the rate of convergence in all cases consists of two terms: (i) a stochastic term which is not affected by the delay, and (ii) a higher-order deterministic term which is only linearly slowed down by the delay. Thus, in the presence of noise, the effects of the delay become negligible after a few iterations and the algorithm converges at the same optimal rate as standard SGD. This result extends a line of research that showed similar results only in the asymptotic regime or for strongly convex quadratic functions. We further show similar results for SGD with a more intricate form of delayed gradients (compressed gradients under error compensation) and for local SGD, where multiple workers perform local steps before communicating with each other. In all of these settings we improve upon the best known rates. These results show that SGD is robust to compressed and/or delayed stochastic gradient updates.
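The delayed-update setting described in the abstract can be illustrated with a short sketch: the gradient applied at step t was evaluated at the iterate from `delay` steps earlier. The function name, the quadratic objective, and all constants below are illustrative assumptions, not taken from the paper.

```python
import random

def delayed_sgd(grad, x0, lr, delay, steps, noise=0.0, seed=0):
    """Sketch of SGD with delayed gradients (D-SGD in the paper's notation):
    x_{t+1} = x_t - lr * g_{t-delay}, with g_t = grad(x_t) + noise.
    Scalar toy version; names and setup are our own, not the authors' code."""
    rng = random.Random(seed)
    xs = [x0]  # keep the iterate history so x_{t-delay} can be looked up
    x = x0
    for t in range(steps):
        s = max(0, t - delay)                       # gradient is `delay` steps stale
        g = grad(xs[s]) + noise * rng.gauss(0.0, 1.0)
        x = x - lr * g
        xs.append(x)
    return x

# Usage: minimize f(x) = x^2 (gradient 2x). For a step size small relative to
# the delay, the deterministic term is only linearly slowed down, and the
# iterates still converge to the minimizer at 0.
x_star = delayed_sgd(lambda x: 2.0 * x, x0=1.0, lr=0.05, delay=5, steps=500)
```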
Researcher Affiliation | Academia | Sebastian U. Stich (EMAIL), EPFL, Lausanne, Switzerland; Sai Praneeth Karimireddy (EMAIL), EPFL, Lausanne, Switzerland
Pseudocode | No | The paper defines algorithms such as (SGD), (D-SGD), and (EC-SGD) through mathematical equations in the text, e.g., 'x_{t+1} = x_t − γ_t g_t, where g_t = ∇f(x_t) + ξ_t (SGD)', but does not present them in structured pseudocode blocks or algorithm environments.
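The EC-SGD scheme mentioned above (compressed gradients with error compensation) can be sketched in a few lines: the compression residual is stored and added back into the next update. The function names, the scalar test problem, and the compressor are illustrative assumptions, not the authors' code.

```python
def ec_sgd(grad, x0, lr, steps, compress):
    """Error-compensated SGD sketch (EC-SGD): compress the error-corrected
    update and carry the compression residual e as memory to the next step.
    Scalar toy version; names and setup are our own, not from the paper."""
    x, e = x0, 0.0
    for _ in range(steps):
        p = lr * grad(x) + e   # error-corrected update
        d = compress(p)        # lossy compression of the update
        e = p - d              # residual fed back at the next step
        x = x - d
    return x

# Usage: minimize f(x) = x^2 with a crude compressor that transmits only half
# of each update; the error memory recovers the discarded half over time, so
# the iterates still converge to the minimizer at 0.
x_star = ec_sgd(lambda x: 2.0 * x, x0=1.0, lr=0.05, steps=400,
                compress=lambda p: 0.5 * p)
```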
Open Source Code | No | The paper includes a license statement: 'License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v21/19-748.html.' That link leads to the journal's web page for the paper, not to a code repository, and the paper contains no explicit statement that source code for the methodology is publicly available.
Open Datasets | No | The paper is theoretical, focusing on the analysis of stochastic gradient descent algorithms for various function classes (quasi-convex, non-convex), and conducts no empirical experiments on specific datasets. While it mentions 'Finite-sum optimization' and 'Least-squares' as settings covered by its assumptions, it provides no concrete access information and uses no specific datasets for evaluation.
Dataset Splits | No | The paper is theoretical and runs no experiments on specific datasets, so it mentions no dataset splits (training, validation, test) or splitting methodology.
Hardware Specification | No | The paper focuses on mathematical analysis and convergence rates of optimization algorithms and describes no experiments that would require specific hardware.
Software Dependencies | No | The paper describes no experiments that would require software dependencies with version numbers.
Experiment Setup | No | The paper is purely theoretical, deriving convergence rates and complexity estimates for optimization algorithms; it reports no experimental setups, hyperparameters, or training configurations.