An Enhanced Levenberg–Marquardt Method via Gram Reduction

Authors: Chengchang Liu, Luo Luo, John C.S. Lui

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments are conducted on real-world applications in scientific computing and machine learning, which validate the efficiency of our method." "In Section 5, we validate our methods by numerical experiments."
Researcher Affiliation | Academia | "1 Department of Computer Science and Engineering, The Chinese University of Hong Kong; 2 School of Data Science, Fudan University; 3 Shanghai Key Laboratory for Contemporary Applied Mathematics; EMAIL, EMAIL, EMAIL"
Pseudocode | Yes | "Algorithm 1: Gram-Reduced Levenberg–Marquardt (GRLM)"
Open Source Code | No | The paper does not provide an explicit statement about releasing code, nor a direct link to a code repository for the described methodology. It mentions a third-party library (LIBSVM) that was used, but not the authors' own implementation.
Open Datasets | Yes | "All of these data sets can be downloaded from the LIBSVM repository (Chang and Lin 2011)."
Dataset Splits | No | The paper names the 'a1a', 'w1a', and 'splice' datasets and gives their sizes, but does not specify how they were split into training, validation, or test sets for the experiments.
Hardware Specification | Yes | "Our experiments are conducted on a PC with Apple M1 and all algorithms are implemented in Python 3.8.12."
Software Dependencies | Yes | "all algorithms are implemented in Python 3.8.12."
Experiment Setup | Yes | "For all experiments, we tuned the step size η for GD from {0.1, 0.2, ..., 1}. We tune the regularized parameter c in LM and GRLM from {1, 10, 100, 1000}. In all cases, we set c = 1e-10. We randomize an x0 as the initial points for all the methods. We choose m = {1, 50, 100, 500} for all cases and present the results in Figure 2. We compare GRLM (m = 100) with baselines in three real-world datasets."
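For readers unfamiliar with the method family being assessed: the paper builds on the classic Levenberg–Marquardt update, x ← x − (JᵀJ + λI)⁻¹Jᵀr. The sketch below is a minimal illustration of that standard update on a toy linear least-squares problem; it is not the authors' GRLM algorithm, and the problem data, function names, and the λ = 1e-10 regularization value are assumptions made here for illustration only.

```python
import numpy as np

def lm_step(x, residual, jacobian, lam):
    """One classic Levenberg-Marquardt step:
    x <- x - (J^T J + lam * I)^{-1} J^T r(x)."""
    r = residual(x)
    J = jacobian(x)
    g = J.T @ r                            # gradient of 0.5 * ||r||^2
    H = J.T @ J + lam * np.eye(x.size)     # regularized Gauss-Newton matrix
    return x - np.linalg.solve(H, g)

# Toy problem (assumed for illustration): residual r(x) = A x - b.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
residual = lambda x: A @ x - b
jacobian = lambda x: A                     # Jacobian of a linear residual is constant

x = np.zeros(3)
for _ in range(5):
    x = lm_step(x, residual, jacobian, lam=1e-10)
```

On this linear toy problem the iteration converges (in a single step, up to the tiny regularization) to the ordinary least-squares solution; for nonlinear residuals, `residual` and `jacobian` would be problem-specific and λ is typically adapted per iteration.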