Sample-and-Bound for Non-convex Optimization

Authors: Yaoguang Zhai, Zhizhen Qin, Sicun Gao

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed algorithms on high-dimensional non-convex optimization benchmarks against competitive baselines and analyze the effects of the hyperparameters.
Researcher Affiliation | Academia | University of California, San Diego. EMAIL, EMAIL, EMAIL
Pseudocode | Yes | The pseudocode of MCIR is provided in Alg. 1.
Open Source Code | No | No explicit statement or link to open-source code for the described methodology was found.
Open Datasets | Yes | To evaluate the performance of our algorithms, our benchmark sets include three distinct categories: synthetic functions designed for nonlinear optimization, bound-constrained non-convex global optimization problems derived from real-world scenarios, and neural networks fitted to single-valued functions. [...] Synthetic functions are widely used in nonlinear optimization benchmarks (Lavezzi, Guye, and Ciarcià 2022). These functions usually have numerous local minima, valleys, and ridges in their landscapes, which are hard for standard optimization algorithms. In our tests, we choose three functions: Levy, Ackley, and Michalewicz [...] For our evaluation of non-convex global optimization problems in various fields, we select bound-constrained problems from the collection presented in (The Optimization Firm 2023; Puranik and Sahinidis 2017) that do not involve any additional inequality or equality constraints.
Dataset Splits | No | The paper uses benchmark functions but does not explicitly provide train/validation/test dataset splits, percentages, or sample counts.
Hardware Specification | Yes | We conduct our experiments on a local machine with an Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz, 16 GB RAM, and an NVIDIA GeForce GTX 1080 graphics card.
Software Dependencies | Yes | Gurobi (Gurobi Optimization 2023) is a widely used commercial optimization solver [...] CMA-ES/pycma: r3.3.0
Experiment Setup | Yes | In this formula, Clb, Cv, and Cx are weights for the function's lower bound, the volume of the box, and visitation-based exploration, respectively. [...] In most cases we cap the number of iterations at fewer than 50, as we do not want to overemphasize the choice of the local optimizer.
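The Levy, Ackley, and Michalewicz functions quoted in the Open Datasets row are standard multimodal test functions. For reference, a minimal sketch of the Ackley function using its textbook definition (this is not taken from the paper's code, which is not open-sourced):

```python
import math

def ackley(x):
    """Ackley benchmark function (standard form, a=20, b=0.2, c=2*pi).

    Highly multimodal with many shallow local minima; the global
    minimum is f(0, ..., 0) = 0, typically on [-32.768, 32.768]^d.
    """
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d            # mean squared coordinate
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / d  # mean cosine term
    return (-20.0 * math.exp(-0.2 * math.sqrt(s1))
            - math.exp(s2) + 20.0 + math.e)
```

At the origin both exponential terms equal 1, so the constants cancel and the value is exactly 0, which makes the function a convenient sanity check for any optimizer in the benchmark.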
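The Experiment Setup row refers to a node-selection formula with weights Clb, Cv, and Cx, but the formula itself is not reproduced in this report. A hedged sketch of one way such a weighted acquisition score could combine the three named terms; the functional form, the signs, and the `visits` discount below are assumptions for illustration, not the paper's definition:

```python
def node_score(lower_bound, volume, visits, c_lb=1.0, c_v=0.1, c_x=0.1):
    """Hypothetical acquisition score for a box in a sample-and-bound tree.

    The paper names three weights (Clb, Cv, Cx) for the function's lower
    bound, the box volume, and visitation-based exploration; how they are
    combined here is an assumption made for this sketch.
    """
    # In this sketch, a lower (more promising) bound, a larger unexplored
    # box, and fewer prior visits all make a region more attractive.
    return -c_lb * lower_bound + c_v * volume + c_x / (1.0 + visits)
```

Under this assumed form, the trade-off the quoted text describes is visible directly: shrinking the lower bound or the visit count raises the score, so the search balances exploiting promising bounds against exploring large, rarely visited boxes.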