Accelerated Over-Relaxation Heavy-Ball Method: Achieving Global Accelerated Convergence with Broad Generalization

Authors: Jingrong Wei, Long Chen

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we evaluate the performance of our AOR-HB methods on a suite of optimization problems. All numerical experiments were conducted using MATLAB R2022a on a desktop computer equipped with an Intel Core i7-6800K CPU operating at 3.4 GHz and 32 GB of RAM. We compare the results against several state-of-the-art optimization algorithms from the literature."
Researcher Affiliation | Academia | Jingrong Wei, Department of Mathematics, University of California, Irvine, Irvine, CA 92697, EMAIL; Long Chen, Department of Mathematics, University of California, Irvine, Irvine, CA 92697, EMAIL
Pseudocode | Yes | Algorithm 1: Accelerated Over-Relaxation Heavy-Ball Method (AOR-HB); Algorithm 2: Accelerated Over-Relaxation Heavy-Ball Method for Convex Composite Minimization (AOR-HB-composite); Algorithm 3: Accelerated Over-Relaxation Heavy-Ball Method for Strongly-Convex-Strongly-Concave Saddle Point Problems with Bilinear Coupling (AOR-HB-saddle); Algorithm 4: AOR-HB-saddle-I
Open Source Code | No | No explicit statement or link for the authors' code is provided. The text mentions "We also thank Dr. Thekumparampil for generously providing the code for LPD," but this refers to a third-party method, not the authors' own code.
Open Datasets | No | The paper does not provide concrete access information (link, DOI, repository, or formal citation) for publicly available datasets; all experimental data are synthetically generated. For smooth convex minimization it states: "We randomly generate the components of A and b from the normal distribution". For logistic regression: "The data ai and bi are generated by the normal distribution and Bernoulli distribution, respectively." For the Lasso problem: "We generate the matrix A with size 1024 × 256 from Gaussian random matrices". For saddle point problems: "We generate random matrices B and C".
Dataset Splits | No | The paper generates data for its experiments rather than using predefined datasets with explicit splits. It mentions "randomly generate the components" or "data ... are generated" for the various problems, but no training/validation/test splits are specified.
Hardware Specification | Yes | "All numerical experiments were conducted using MATLAB R2022a on a desktop computer equipped with an Intel Core i7-6800K CPU operating at 3.4 GHz and 32 GB of RAM."
Software Dependencies | Yes | "All numerical experiments were conducted using MATLAB R2022a."
Experiment Setup | Yes | "First, we test the algorithms using smooth multidimensional piecewise objective functions borrowed from Van Scoy et al. (2017). Let ... We set µ = 1, L = 10^4, d = 100, p = 5 and r = 10^-6. Next, we report the numerical simulations on a logistic regression problem with an ℓ2 regularizer: ... We set λ = 0.1, d = 1000, and n = 50. We first consider the Lasso problem: ... We set λ = 0.8 and use the step size α = 1/L in FISTA and APG. We consider policy evaluation problems in reinforcement learning ... In this example, µf = Lf = 1, µg = λmin(C) and Lg = λmax(C). We set m = 2500 and n = 50."
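The algorithms catalogued in the table build on the classical heavy-ball update. For context, here is a minimal sketch of Polyak's heavy-ball iteration on a one-dimensional quadratic; this is the baseline scheme only, not the authors' AOR-HB method, and the objective and step-size choices (`alpha`, `beta`) are illustrative assumptions:

```python
# Polyak's heavy-ball iteration:
#   x_{k+1} = x_k - alpha * grad f(x_k) + beta * (x_k - x_{k-1})
# Demonstrated on f(x) = 0.5 * (x - 3)^2, whose gradient is x - 3.
def heavy_ball(grad, x0, alpha, beta, iters):
    x_prev, x = x0, x0
    for _ in range(iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Converges to the minimizer x = 3 for these (illustrative) parameters.
x_star = heavy_ball(lambda x: x - 3.0, x0=0.0, alpha=0.1, beta=0.5, iters=200)
```

The momentum term `beta * (x - x_prev)` is what distinguishes heavy-ball from plain gradient descent; the paper's over-relaxation modification acts on top of this structure.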
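The "Open Datasets" row notes that the logistic regression data are drawn from normal and Bernoulli distributions. A sketch of that generation recipe, with dimensions taken from the quoted setup (d = 1000, n = 50) but the Bernoulli parameter and ±1 label convention as assumptions of ours:

```python
import random

# Synthetic logistic-regression data in the style described in the paper:
# features a_i from a normal distribution, labels b_i from a Bernoulli
# distribution. The Bernoulli parameter p and the +/-1 label encoding are
# illustrative choices, not taken from the paper.
random.seed(0)

def make_logistic_data(n, d, p=0.5):
    a = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
    b = [1 if random.random() < p else -1 for _ in range(n)]
    return a, b

a, b = make_logistic_data(n=50, d=1000)
```

Because the data are regenerated from fixed distributions rather than loaded from a repository, reproducing the experiments exactly would additionally require the authors' random seeds.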
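The Lasso setup above compares against FISTA and APG with step size α = 1/L. As a point of reference, a minimal sketch of the non-accelerated proximal gradient (ISTA) step for min_x 0.5‖Ax − b‖² + λ‖x‖₁ — using a toy 2×2 diagonal A for illustration, not the 1024 × 256 Gaussian matrix from the paper's experiment:

```python
# ISTA for the Lasso: gradient step on the smooth part, then the
# soft-thresholding prox of the l1 term.
def soft_threshold(v, t):
    """Entrywise prox of t*||.||_1: shrink each entry toward zero by t."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(A, b, lam, alpha, iters):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = soft_threshold([x[j] - alpha * g[j] for j in range(n)], alpha * lam)
    return x

A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 1.0]
# alpha = 1/L with L = lambda_max(A^T A) = 4; lam = 0.8 as in the paper.
x = ista(A, b, lam=0.8, alpha=0.25, iters=500)
```

FISTA and APG add a momentum/extrapolation step on top of this iteration; the paper's AOR-HB-composite variant (Algorithm 2) targets the same composite problem class.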