B2Opt: Learning to Optimize Black-box Optimization with Little Budget

Authors: Xiaobin Li, Kai Wu, Xiaoyu Zhang, Handing Wang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | B2Opt undergoes rigorous testing on six standard functions, BBOB challenges (Finck et al. 2010) in high dimensions, neural network training, and the planar mechanical arm problem (Wang et al. 2022). The experimental results position B2Opt at the forefront, showcasing its superior performance compared to five leading EA baselines.
Researcher Affiliation | Academia | ¹School of Artificial Intelligence, Xidian University; ²School of Cyber Engineering, Xidian University. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | Yes | The pseudocode of the training phase is described in Appendix D, Algorithm 1. A detailed introduction is given in Appendix D, Algorithm 2.
Open Source Code | No | The paper does not contain any explicit statements about code release or links to repositories for its implementation.
Open Datasets | Yes | B2Opt undergoes rigorous testing on six standard functions, BBOB challenges (Finck et al. 2010) in high dimensions, neural network training, and the planar mechanical arm problem (Wang et al. 2022)... We further analyze the performance of B2Opt in the field of neuroevolution. We evaluate the performance of training a convolutional neural network (Howard et al. 2017) using B2Opt on the MNIST classification task.
Dataset Splits | Yes | We select 25%, 50%, 75%, and 100% of the data from the training set for training, respectively, which constitute surrogate problems with different levels of fidelity.
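The fidelity-level splits quoted above can be reproduced by subsampling fixed fractions of the training set. A minimal sketch, assuming a list-like training set; the function and variable names here are illustrative, not from the paper:

```python
import random

def fidelity_subsets(train_set, fractions=(0.25, 0.50, 0.75, 1.00), seed=0):
    """Return one subset of `train_set` per requested fraction.

    Each subset is a prefix of a single shuffled copy, so lower-fidelity
    subsets are nested inside higher-fidelity ones.
    """
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = list(train_set)
    rng.shuffle(shuffled)
    return {f: shuffled[: int(len(shuffled) * f)] for f in fractions}

subsets = fidelity_subsets(range(1000))
```

Whether the paper nests the subsets this way or draws each fraction independently is not stated in the excerpt; nesting is one common choice for fidelity studies.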
Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU or CPU models, or memory specifications.
Software Dependencies | No | The paper mentions using the Adam method for training but does not specify any software versions for libraries or programming languages.
Experiment Setup | Yes | We design three B2Opt models: 3 OBs with WS, 5 OBs without WS, and 30 OBs with WS. "3 OBs with WS" means that B2Opt has 3 OB modules and these OBs share weights with one another. Each OB consists of 1 SAC, 1 FM, and 1 RSSM. In general, B2Opt denotes 30 OBs with WS. We employ the Adam method with a minibatch Ω to train B2Opt on the training dataset. We analyze the effect of the deep structure, learning rate, and weight sharing between OBs on B2Opt.
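The three reported configurations differ only in the number of stacked OB modules and whether those modules share weights. A minimal sketch of that structure, assuming nothing about the internals of the SAC, FM, and RSSM components (they are stand-in placeholders here, since the excerpt does not define them):

```python
class OB:
    """One optimization block: 1 SAC, 1 FM, 1 RSSM (placeholders only)."""
    def __init__(self):
        # stand-in parameters; the real components are learned modules
        self.params = {"sac": 0.0, "fm": 0.0, "rssm": 0.0}

    def __call__(self, population):
        return population  # identity placeholder for the real update step

def build_b2opt(num_obs, weight_sharing):
    """Stack `num_obs` OB modules, optionally reusing one shared module."""
    if weight_sharing:
        shared = OB()
        return [shared] * num_obs          # every stage reuses one OB ("WS")
    return [OB() for _ in range(num_obs)]  # independent weights per stage

# the three configurations reported in the paper
models = {
    "3 OBs with WS": build_b2opt(3, True),
    "5 OBs without WS": build_b2opt(5, False),
    "30 OBs with WS": build_b2opt(30, True),  # the default "B2Opt"
}
```

Weight sharing keeps the parameter count constant as the stack deepens, which is presumably why the default 30-OB model remains trainable; this sketch only illustrates the sharing mechanism, not the paper's actual implementation.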