Variance-Reduced Forward-Reflected-Backward Splitting Methods for Nonmonotone Generalized Equations
Authors: Quoc Tran-Dinh
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test our algorithms on some numerical examples and compare them with existing methods. The results demonstrate promising improvements offered by the new methods compared to their competitors. ... Section 5 presents two concrete numerical examples. ... Figures 1–13 show plots of 'Relative operator norm' against 'Number of epochs' for various algorithms and datasets. |
| Researcher Affiliation | Academia | 1Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA. Correspondence to: Quoc Tran-Dinh <EMAIL>. |
| Pseudocode | No | The paper describes the algorithms (VFR method, VFRBS method) in structured paragraph text, detailing the steps and equations, but does not present them in a clearly labeled 'Algorithm' or 'Pseudocode' block or figure. |
| Open Source Code | No | The paper states: 'All algorithms are implemented in Python, and all the experiments are run on a Mac Book Pro.' (Supplementary Document E). However, it does not provide any explicit statement about releasing the source code for the methodology described, nor does it include a link to a code repository. |
| Open Datasets | Yes | We test these algorithms on two real datasets: a9a (134 features and 3561 samples) and w8a (311 features and 45546 samples) downloaded from LIBSVM (Chang & Lin, 2011). ... We conduct two more experiments using the well-known MNIST dataset (n = 70000 and p = 780)... |
| Dataset Splits | No | The paper mentions running experiments on datasets like a9a, w8a, and MNIST, often using a 'mini-batch size b' and discussing '10 problem instances' or averaging results over runs. However, it does not explicitly describe how these datasets were split into training, validation, and test sets for the experiments. |
| Hardware Specification | Yes | All algorithms are implemented in Python, and all the experiments are run on a MacBook Pro (2.8GHz quad-core Intel Core i7, 16GB memory). |
| Software Dependencies | No | The paper states: 'All algorithms are implemented in Python' (Supplementary Document E). While it specifies the programming language, it does not provide specific version numbers for Python or any other libraries or frameworks used (e.g., NumPy, PyTorch, TensorFlow, etc.). |
| Experiment Setup | Yes | For the optimistic gradient algorithm (OG), we choose its learning rate η := 1/L... For our methods in (VFR)... use a larger learning rate η := 1/(2L) for all three variants, and choose a mini-batch of size b := 0.5n^(2/3) and a probability p := 1/n^(1/3)... For VFRBS, we choose η = 47.5(1 − 1/p)/(2L_hat) for a9a and η = 95(1 − 1/p)/(2L_hat) for w8a. For VEG, we select η = 47.5√(1 − α)/L_hat for a9a and η = 95√(1 − α)/L_hat for w8a with α := 1/p. We still choose the mini-batch size b and the probability p of updating the snapshot point w_k in the SVRG variants as b = 0.5n^(2/3) and p = n^(1/3), respectively, for all algorithms. |
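
The sample-size-dependent choices quoted above can be made concrete with a short sketch. This is a hypothetical illustration of the (VFR) settings only (learning rate η := 1/(2L), mini-batch b := 0.5n^(2/3), snapshot probability p := 1/n^(1/3)); the function name and the rounding of b are assumptions, not taken from the authors' code.

```python
def vfr_hyperparams(n, L):
    """Illustrative (VFR) hyperparameters from the quoted setup.

    n: number of samples in the dataset
    L: Lipschitz-type constant of the underlying operator (assumed known)
    """
    eta = 1.0 / (2.0 * L)             # learning rate eta := 1/(2L)
    b = int(0.5 * n ** (2.0 / 3.0))   # mini-batch size b := 0.5 * n^(2/3) (rounding assumed)
    p = 1.0 / n ** (1.0 / 3.0)        # snapshot-update probability p := 1/n^(1/3)
    return eta, b, p


# Example with an MNIST-sized sample count (n = 70000, as quoted above):
eta, b, p = vfr_hyperparams(70000, L=1.0)
```

For n = 70000 this yields a mini-batch of a few hundred samples and a snapshot probability of a few percent, matching the n^(2/3)/n^(1/3) scaling the setup describes.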