Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline and validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On the Influence of Momentum Acceleration on Online Learning
Authors: Kun Yuan, Bicheng Ying, Ali H. Sayed
JMLR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | From simulations, the equivalence between momentum and standard stochastic gradient methods is also observed for non-differentiable and non-convex problems. ... In this section we illustrate the main conclusions by means of computer simulations for both cases of mean-square-error designs and logistic regression designs. |
| Researcher Affiliation | Academia | Kun Yuan, Bicheng Ying, Ali H. Sayed; Department of Electrical Engineering, University of California, Los Angeles, CA 90095, USA |
| Pseudocode | No | The paper describes algorithms using mathematical recursions like (2) and (22)-(23) but does not present them in structured pseudocode blocks or figures, nor are there any sections explicitly labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | In Section 7.4 'Visual Recognition', the paper states: 'We employ the CIFAR-10 database'. In Section 7.2 'Regularized Logistic Regression', it mentions 'a benchmark data set the Adult Data Set' and provides URLs: 'https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ or http://archive.ics.uci.edu/ml/datasets/Adult'. |
| Dataset Splits | Yes | In Section 7.2, for the Adult Data Set, it states: 'The set is divided into 6414 training data and 26147 test data'. In Section 7.4, for CIFAR-10, it states: 'There are 50000 training images and 10000 test images'. |
| Hardware Specification | No | The paper mentions 'computer simulations' and 'mini-batch stochastic-gradient learning' but does not provide specific details about the hardware (e.g., GPU models, CPU types, or memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify the software used for the experiments (e.g., library or solver names with version numbers) to the level needed to replicate the environment. |
| Experiment Setup | Yes | Section 7.1 details for LMS: 'µ = µm = 0.003. The momentum parameter β is set as 0.9. ... µm = µ(1 − β) = 0.0003.' Section 7.2 for Logistic Regression: 'µ = µm = 0.005. The momentum parameter β is set to 0.9. ... µm = µ(1 − β) = 0.0005.' Section 7.4 for Neural Networks provides: 'ℓ2 regularization term is set to 0.001, initial value w₋₁ is generated by a Gaussian distribution with 0.05 standard deviation... batch size equal to 100... momentum parameter is set to β = 0.9, and the initial step-size µm is set to 0.01... reduce µm to 0.95µm after every epoch.' Similar details are given for the convolutional neural network, including batch size, step-size, and momentum parameters. |
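The reported step-size matching µm = µ(1 − β) reflects the paper's central claim: heavy-ball momentum SGD with step size µm behaves like standard SGD with the larger effective step size µ = µm / (1 − β). The sketch below illustrates this on a synthetic LMS (mean-square-error) problem using the Section 7.1 values µ = 0.003 and β = 0.9; the data model and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic LMS problem: stream of samples d = u @ w_true + noise
M = 10
w_true = rng.standard_normal(M)

def sample():
    u = rng.standard_normal(M)
    d = u @ w_true + 0.1 * rng.standard_normal()
    return u, d

def grad(w, u, d):
    # Instantaneous gradient of the loss 0.5 * (d - u @ w)^2
    return -(d - u @ w) * u

beta = 0.9
mu = 0.003               # standard SGD step size (Section 7.1 value)
mu_m = mu * (1 - beta)   # matched momentum step size: µm = µ(1 − β)

w_sgd = np.zeros(M)      # standard stochastic-gradient iterate
w_mom = np.zeros(M)      # heavy-ball momentum iterate
v = np.zeros(M)          # momentum accumulator

for _ in range(20000):
    u, d = sample()
    # Standard SGD step with the larger step size µ
    w_sgd = w_sgd - mu * grad(w_sgd, u, d)
    # Heavy-ball momentum step with the smaller step size µm
    v = beta * v + grad(w_mom, u, d)
    w_mom = w_mom - mu_m * v

# Under the matching above, both recursions reach comparable accuracy
print(np.linalg.norm(w_sgd - w_true), np.linalg.norm(w_mom - w_true))
```

Both iterates should land close to `w_true` with steady-state errors of similar magnitude, consistent with the equivalence the paper observes in its simulations.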