SGD with memory: fundamental properties and stochastic acceleration

Authors: Dmitry Yarotsky, Maksim Velikanov

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We propose a memory-1 algorithm with a time-dependent schedule that we show heuristically and experimentally to improve the exponent ξ of plain SGD. This conjecture was confirmed by experiments with MNIST and synthetic problems.
Researcher Affiliation | Collaboration | Dmitry Yarotsky: Skoltech, Steklov Mathematical Institute; Maksim Velikanov: Technology Innovation Institute, CMAP, Ecole Polytechnique
Pseudocode | No | The paper describes algorithms using mathematical equations (e.g., Eq. 3 for the general form of the algorithm) but does not include a clearly labeled pseudocode block or algorithm box.
Open Source Code | No | The paper does not provide an unambiguous statement of code release or a direct link to a source-code repository for the methodology described. It mentions 'pytorch/SGD' in the experiments section, but this refers to a third-party tool.
Open Datasets | Yes | For both synthetic Gaussian data (left) and MNIST classification with a shallow ReLU network (right) (Figure 1). We design the MNIST experiment to be the opposite of the Gaussian data setting to test our results in a more general scenario beyond quadratic problems with asymptotically power-law spectrum.
Dataset Splits | No | The paper describes the generation of 'Gaussian data' and the use of 'MNIST' but does not explicitly provide training/test/validation splits, percentages, or absolute sample counts for these datasets.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types) used to run its experiments.
Software Dependencies | No | The paper mentions 'Sympy (Meurer et al., 2017)' and 'pytorch/SGD' but does not provide specific version numbers for these or other software dependencies, which are necessary for reproducible descriptions.
Experiment Setup | Yes | For Gaussian data experiments, we generate inputs x ~ N(0, Λ) from a Gaussian distribution with diagonal covariance Λ = diag(λ_1, λ_2, ..., λ_M), leading to a diagonal Hessian H = Λ. For the optimal parameters, we simply take the vector of target coefficients w* = (c_1, c_2, ..., c_M). Then, we set ideal power laws λ_k = k^{-ν} and c_k^2 = k^{-κ-1}, which satisfy our asymptotic power-law conditions (1) with ζ = κ/ν. For the experiments in fig. 1 and fig. 4, we pick spectral exponents ζ = 0.5, ν = 3. For the AM1 schedule exponents we use δ = 0.95, α = δ(1 - 1/ν). (Section M.1) Model width taken as n = 1000. (Section M.2)
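The synthetic Gaussian setup quoted above can be sketched in a few lines of NumPy. This is a hedged reconstruction, not the authors' code: the problem dimension M and batch size are illustrative choices, and the power laws λ_k = k^{-ν}, c_k^2 = k^{-κ-1} with κ = ζν are taken from the reconstructed equations in the row above.

```python
import numpy as np

# Illustrative sketch of the paper's synthetic Gaussian data (Section M.1).
# Exponents from the quoted setup: zeta = 0.5, nu = 3, so kappa = zeta * nu.
M = 1000                      # problem dimension (assumed, not stated for the data)
nu, zeta = 3.0, 0.5
kappa = zeta * nu

k = np.arange(1, M + 1)
lam = k ** (-nu)              # eigenvalues lambda_k = k^{-nu}; diagonal Hessian H = diag(lam)
w_star = np.sqrt(k ** (-kappa - 1.0))  # target coefficients with c_k^2 = k^{-kappa-1}

def sample_batch(n, rng=np.random.default_rng(0)):
    """Draw n inputs x ~ N(0, Lambda) with diagonal covariance Lambda."""
    return rng.standard_normal((n, M)) * np.sqrt(lam)

X = sample_batch(64)
y = X @ w_star                # noiseless targets y = <x, w*>
```

With a diagonal covariance, sampling reduces to scaling i.i.d. standard normals by sqrt(λ_k) per coordinate, which keeps the experiment cheap even for large M.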