LiBOG: Lifelong Learning for Black-Box Optimizer Generation

Authors: Jiyuan Pei, Yi Mei, Jialin Liu, Mengjie Zhang

IJCAI 2025

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments demonstrate LiBOG's effectiveness in learning to generate high-performance optimizers in a lifelong learning manner, addressing catastrophic forgetting while maintaining plasticity to learn new tasks. |
| Researcher Affiliation | Academia | 1Victoria University of Wellington, 2Lingnan University. EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | No | A detailed pseudocode of LiBOG's learning process can be found in the supplementary material. |
| Open Source Code | Yes | Our code is available at https://github.com/PeiJY/LiBOG. |
| Open Datasets | Yes | The training dataset is constructed from the widely studied IEEE CEC Numerical Optimization Competition Benchmark [Mohamed et al., 2021]. |
| Dataset Splits | No | The paper describes how problems are sampled from tasks and how task orders are generated (e.g., "We randomly generated three different task orders", "We sample 32 problems with the corresponding distribution" for testing), but does not provide explicit training/validation/test splits of a static dataset with percentages, counts, or predefined partition files. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or processor types used for running the experiments. |
| Software Dependencies | No | The paper mentions using a "long short-term memory (LSTM) network" and "Proximal policy optimization (PPO) [Schulman et al., 2017]" but does not specify version numbers for these or other software components. |
| Experiment Setup | Yes | For LiBOG, restart and fine-tuning, the models are trained on each task for 100 epochs equally. [...] For LiBOG, the values of α and β are set to 1 based on the rule of thumb. [...] The tested candidate values are {0.1, 1, 10} for both α and β. |
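The hyperparameter tuning reported above (candidate values {0.1, 1, 10} for both α and β, with 1 chosen by rule of thumb) amounts to a small grid search. The sketch below illustrates that procedure only; `train_and_evaluate` is a hypothetical placeholder, not the authors' training code, and its scoring is invented purely so the example runs.

```python
from itertools import product

# Candidate values reported in the paper for the two loss weights.
ALPHA_CANDIDATES = [0.1, 1, 10]
BETA_CANDIDATES = [0.1, 1, 10]


def train_and_evaluate(alpha: float, beta: float) -> float:
    """Hypothetical stand-in for training the model with the given loss
    weights and returning a validation score (higher is better).

    Placeholder scoring: it simply peaks at the rule-of-thumb default
    alpha = beta = 1 so the grid search has a well-defined winner.
    """
    return -abs(alpha - 1) - abs(beta - 1)


def grid_search() -> tuple[float, float]:
    """Exhaustively try every (alpha, beta) pair and keep the best."""
    best_score, best_pair = float("-inf"), None
    for alpha, beta in product(ALPHA_CANDIDATES, BETA_CANDIDATES):
        score = train_and_evaluate(alpha, beta)
        if score > best_score:
            best_score, best_pair = score, (alpha, beta)
    return best_pair


if __name__ == "__main__":
    print(grid_search())  # (1, 1) under the placeholder scoring above
```

With a real training loop substituted in, the same 3x3 sweep (9 runs) would reproduce the paper's stated tuning protocol.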