VFOSA: Variance-Reduced Fast Operator Splitting Algorithms for Generalized Equations

Authors: Quoc Tran-Dinh

JMLR 2025

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. "Finally, we validate our results through numerical experiments and compare their performance with existing methods."
Researcher Affiliation: Academia. Quoc Tran-Dinh, Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill.
Pseudocode: Yes. Algorithm 1 (Variance-reduced Fast [FB] Operator Splitting Algorithm (VFOSA+)) ... Algorithm 2 (Variance-reduced Fast [BF] Operator Splitting Algorithm (VFOSA)).
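For orientation only, the sketch below shows a generic textbook combination of a variance-reduced (SVRG-style) estimator with a forward-backward splitting step for a finite-sum inclusion 0 ∈ B(x) + T(x). It is NOT the paper's Algorithm 1 or 2; all names, the soft-threshold resolvent, and the toy problem are our assumptions, chosen only to illustrate the kind of update such algorithms organize.

```python
# Sketch (generic textbook template, not the authors' VFOSA method):
# variance-reduced forward-backward splitting for 0 in B(x) + T(x),
# where B(x) = (1/n) * sum_i B_i(x) and T has a cheap resolvent.
import numpy as np

def soft_threshold(v, t):
    """Resolvent (prox) of t * ||.||_1, used here as an example of T."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def svrg_fb(Bi, n, x0, step, lam, epochs=5, rng=None):
    """SVRG-style FB iteration: x+ = prox(x - step * v), where v is a
    variance-reduced estimate of the full operator B(x)."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        full = sum(Bi(i, snapshot) for i in range(n)) / n  # full pass
        for _ in range(n):
            i = rng.integers(n)
            v = Bi(i, x) - Bi(i, snapshot) + full  # VR estimator of B(x)
            x = soft_threshold(x - step * v, step * lam)
    return x

# Toy monotone problem: B_i(x) = A_i x - b_i with positive definite A_i.
rng = np.random.default_rng(1)
A = [np.eye(3) * (i + 1) for i in range(4)]
b = [rng.standard_normal(3) for _ in range(4)]
x = svrg_fb(lambda i, z: A[i] @ z - b[i], 4, np.zeros(3), 0.2, 0.1)
print(x.shape)
```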
Open Source Code: No. The paper states: "All the algorithms are implemented in Python and executed on a Mac Book Pro with Apple M4 processor and 24Gb of memory." However, it neither states that the code for the methods described in the paper is open source nor provides a link to a repository, so no concrete access to source code is given.
Open Datasets: Yes. "We use 4 different real datasets from LIBSVM (Chang and Lin, 2011): gisette (5,000 features and 6,000 samples), w8a (300 features and 49,749 samples), a9a (123 features and 32,561 samples), and mnist (784 features and 60,000 samples)."
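As an illustration only (not code from the paper), the datasets listed above are distributed in the LIBSVM/svmlight sparse text format; a minimal reader for one line of that format, with a toy example in place of a real file, looks like this:

```python
# Sketch (illustration only): parse one line of the LIBSVM/svmlight format
# used by a9a, w8a, gisette, etc. Each line reads
#   "<label> <index>:<value> <index>:<value> ...", with 1-based indices.

def read_libsvm_line(line, n_features):
    """Parse one LIBSVM-format line into (label, dense feature list)."""
    parts = line.split()
    label = float(parts[0])
    x = [0.0] * n_features
    for tok in parts[1:]:
        idx, val = tok.split(":")
        x[int(idx) - 1] = float(val)  # convert 1-based index to 0-based
    return label, x

# Toy line mimicking one sample with 5 features (indices 2 and 5 nonzero).
label, x = read_libsvm_line("-1 2:0.5 5:1.0", 5)
print(label, x)  # -1.0 [0.0, 0.5, 0.0, 0.0, 1.0]
```

In practice one would load full files with an existing reader such as scikit-learn's `sklearn.datasets.load_svmlight_file` rather than hand-parsing.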
Dataset Splits: No. The paper mentions using datasets from LIBSVM, but it does not specify any training/validation/test splits (percentages, sample counts, or references to predefined splits) used in the experimental setup.
Hardware Specification: Yes. "All the algorithms are implemented in Python and executed on a Mac Book Pro with Apple M4 processor and 24Gb of memory."
Software Dependencies: No. The paper states that the algorithms are "implemented in Python", but it gives no version numbers for Python or for any other key libraries or dependencies used in the implementation.
Experiment Setup: Yes. "We choose µ := 0.95 · 2/3 and r := 2 + 1/µ for all variants of our methods. For the L-SVRG variants, we choose p_k := 1/(2 n^(1/3)) and b := n^(2/3)/2... For the SAGA variants, we choose b := n^(2/3)/2... For the L-SARAH variants, we choose p_k := 1/(2√n) and b := n/2... For the Hybrid-SGD variants, we choose θ := 1/n and b := n/2... For VR Halpern, we choose p_k := 1/(2√n) and b := n/2... We choose the initial point x0 := 0.25 · randn(p) in all methods, and run each algorithm for Ne := 200 epochs. ...we choose λ := 5 × 10^(-3)."
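For orientation only, the per-variant batch-size and probability choices quoted above can be written as simple functions of the sample count n; the function and variable names below are ours, not the paper's:

```python
# Sketch (our rendering of the quoted settings, not the authors' code):
# update probability p_k and mini-batch size b per variant, as functions
# of the number of samples n.
import math

def lsvrg_params(n):
    # L-SVRG variants: p_k = 1/(2 n^(1/3)), b = n^(2/3)/2
    return 1.0 / (2.0 * n ** (1.0 / 3.0)), n ** (2.0 / 3.0) / 2.0

def lsarah_params(n):
    # L-SARAH variants: p_k = 1/(2 sqrt(n)), b = n/2
    return 1.0 / (2.0 * math.sqrt(n)), n / 2.0

n = 32561  # a9a sample count from the dataset row above
pk, b = lsvrg_params(n)
print(pk, b)
```

In an implementation b would be rounded to an integer batch size; the formulas are shown as quoted.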