Neural Exploratory Landscape Analysis for Meta-Black-Box-Optimization

Authors: Zeyuan Ma, Jiacheng Chen, Hongshu Guo, Yue-Jiao Gong

ICLR 2025

Reproducibility checklist: variable, result, and supporting excerpt from the LLM response.
Research Type: Experimental. "Extensive experiments show that NeurELA achieves consistently superior performance when integrated into different and even unseen MetaBBO tasks, and can be efficiently fine-tuned for a further performance boost. This advancement marks a pivotal step in making MetaBBO algorithms more autonomous and broadly applicable."
Researcher Affiliation: Academia. "South China University of Technology"
Pseudocode: Yes. "Algorithm 1: Pseudocode of training NeurELA"
Open Source Code: Yes. "The source code of NeurELA can be accessed at https://github.com/GMC-DRL/Neur-ELA."
Open Datasets: Yes. "Then we set the associated problem set Dk for these MetaBBO algorithms as the BBOB test suites in COCO (Hansen et al., 2021), which include a variety of optimization problems with diverse landscape properties. We note that the selected algorithms cover diverse MetaBBO scenarios, such as auto-configuration of control parameters and auto-selection of evolutionary operators, hence ensuring the generalization of our NeurELA."
Dataset Splits: Yes. "Specifically, we visualize these 24 problems under the 2D setting and then select 12 representative problems into the train set."
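The split described above partitions the 24 BBOB problems into 12 training and 12 testing problems. A minimal sketch of such a partition is shown below; note that the paper picks its 12 representative problems by visually inspecting the 2D landscapes, so the random split here is only an illustrative placeholder, not the authors' actual selection.

```python
import random

# The 24 BBOB problems (f1..f24) from the COCO benchmark suite.
BBOB_PROBLEMS = [f"f{i}" for i in range(1, 25)]

def split_bbob(seed=0):
    """Illustrative 12/12 train-test split of the 24 BBOB problems.

    Placeholder only: the paper chooses 12 representative problems by
    visual inspection of their 2D landscapes, not at random.
    """
    rng = random.Random(seed)
    shuffled = BBOB_PROBLEMS[:]
    rng.shuffle(shuffled)
    return sorted(shuffled[:12]), sorted(shuffled[12:])

train_set, test_set = split_bbob()
```

A fixed seed keeps the placeholder split reproducible across runs, mirroring the role of a published {Dtrain, Dtest} specification.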
Hardware Specification: Yes. "The experiments are run on a computation node of a Slurm CPU cluster with two Intel Sapphire Rapids 6458Q CPUs and 256 GB of memory."
Software Dependencies: No. "Our codebase can be accessed at https://anonymous.4open.science/r/Neur-ELA-303C. In Table 3 we list several open-sourced assets used in our work and their corresponding licenses." Table 3 (used open-sourced tools and their licenses):
- Top-level optimizer: PyPop7 (Duan et al., 2022), GPL-3.0 license
- MetaBBO algorithm implementation and low-level train-test workflow: MetaBox (Ma et al., 2023), BSD-3-Clause license
- Parallel processing: Ray (Moritz et al., 2018), Apache-2.0 license
- ELA feature calculation: pflacco (Kerschke & Trautmann, 2019b), MIT license
The provided text does not include specific version numbers for the listed software.
Experiment Setup: Yes. "For the settings of the neural network, we set the hidden dimension h = 16 for the neural network modules in the landscape analyser Λθ. With single-head attention in Attn, Λθ possesses a total of 3296 learnable parameters. We adopt Fast CMA-ES (Li et al., 2018) as the ES for its search efficiency and robust optimization performance. During training, we employ a population of N = 10 Λθs within a generation and optimize the population for maxGen = 50 generations. For each generation, we parallelize the N × K = 30 meta-training pipelines across 30 independent CPU cores using Ray (Moritz et al., 2018). For each low-level optimization process, we set the maximum number of function evaluations to 20000. The experiments are run on a computation node of a Slurm CPU cluster with two Intel Sapphire Rapids 6458Q CPUs and 256 GB of memory. Due to space limitations, we present more technical details, such as the train-test split {Dtrain, Dtest}, the control parameters of Fast CMA-ES, and the use of open-sourced software, in Appendices A.2–A.5."
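The setup above describes a small single-head-attention landscape analyser with hidden dimension h = 16. The sketch below illustrates the general shape of such a module in plain NumPy: a population of sampled solutions (plus their fitness values) is embedded, passed through one self-attention layer, and pooled into a fixed-length landscape feature vector. The embedding size, input layout, pooling, and weight initialization are all assumptions for illustration; they are not the paper's architecture, and no attempt is made to match its 3296-parameter count.

```python
import numpy as np

H = 16  # hidden dimension h reported in the paper

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class SingleHeadAttentionAnalyser:
    """Illustrative stand-in for the attention module in the landscape
    analyser Λθ. Layer sizes, input format, and mean-pooling are
    assumptions, not the authors' implementation."""

    def __init__(self, d_in, h=H, seed=0):
        rng = np.random.default_rng(seed)
        self.W_embed = rng.standard_normal((d_in, h)) * 0.1
        self.W_q = rng.standard_normal((h, h)) * 0.1
        self.W_k = rng.standard_normal((h, h)) * 0.1
        self.W_v = rng.standard_normal((h, h)) * 0.1

    def __call__(self, X):
        # X: (population_size, d_in) rows of [solution coords, fitness]
        E = X @ self.W_embed                      # embed each sample
        Q, K, V = E @ self.W_q, E @ self.W_k, E @ self.W_v
        A = softmax(Q @ K.T / np.sqrt(E.shape[-1]))  # attention weights
        return (A @ V).mean(axis=0)               # pooled feature vector

# E.g. a population of 10 samples of a 5-D problem (5 coords + fitness).
analyser = SingleHeadAttentionAnalyser(d_in=6)
features = analyser(np.random.default_rng(1).standard_normal((10, 6)))
```

Attention over the sampled population makes the extracted feature vector invariant to the sample count at inference time, which is one plausible motivation for this architecture choice in a learned alternative to hand-crafted ELA features.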