Expensive Multi-Objective Bayesian Optimization Based on Diffusion Models
Authors: Bingdong Li, Zixiang Di, Yongfan Lu, Hong Qian, Feng Wang, Peng Yang, Ke Tang, Aimin Zhou
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both synthetic and real-world problems demonstrate that CDM-PSL attains superior performance compared with state-of-the-art MOBO algorithms. ... Overall Performance. We conducted a series of experiments on a variety of widely recognized synthetic multi-objective benchmarks, including ZDT1-3 (Zitzler, Deb, and Thiele 2000) and DTLZ2-7 (Deb et al. 2005). |
| Researcher Affiliation | Academia | 1Shanghai Frontiers Science Center of Molecule Intelligent Syntheses, Shanghai Institute of AI for Education, and School of Computer Science and Technology, East China Normal University, Shanghai 200062, China 2Key Laboratory of Advanced Theory and Application in Statistics and Data Science, Ministry of Education 3School of Computer Science, Wuhan University, Wuhan 430072, China 4Department of Statistics and Data Science, Southern University of Science and Technology, Shenzhen 518055, China 5Guangdong Provincial Key Laboratory of Brain-Inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China |
| Pseudocode | Yes | Algorithm 1: The framework of MOBO with CDM-PSL ... Algorithm 2: Data Extraction ... Algorithm 3: Composite Diffusion Model based Generation |
| Open Source Code | Yes | Code https://github.com/ilog-ecnu/CDM-PSL |
| Open Datasets | Yes | experiments were conducted on 9 benchmark problems (2- and 3-objective ZDT1-3 (Zitzler, Deb, and Thiele 2000) and DTLZ2-7 (Deb et al. 2005)) and 7 real-world problems (Tanabe and Ishibuchi 2020). |
| Dataset Splits | No | The paper does not provide specific training/test/validation dataset splits, as it focuses on Bayesian Optimization which iteratively evaluates expensive black-box functions rather than partitioning a static dataset. It describes initial solution generation and batch sizes for evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory configurations used for experiments. |
| Software Dependencies | No | The paper mentions several algorithms and models (e.g., Gaussian processes, Diffusion Models) but does not specify particular software libraries or frameworks with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | For fair comparison, the population size N was initialized to 100 for all the compared algorithms. Bayesian optimization algorithms were executed for 20 batches, each with a batch size of 5, across all algorithms. Each method was randomly run 10 times. For CDM-PSL, the hyperparameter t was set to 25, the number of CG N1 was 10 and number of UG N2 was 100, the batch size m was 1024, the learning rate γ was 0.001, with training spanning 4000 epochs. |
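The experiment setup row can be summarized as a configuration sketch. All key names below are illustrative (the paper does not publish a config file); only the values are taken from the quoted setup.

```python
# Hypothetical configuration sketch of the reported CDM-PSL experiment setup.
# Key names are invented for illustration; values are as stated in the paper.
config = {
    "population_size": 100,    # initial population N (all compared algorithms)
    "num_batches": 20,         # Bayesian optimization batches
    "eval_batch_size": 5,      # expensive evaluations per batch
    "num_runs": 10,            # independent random repetitions per method
    "t": 25,                   # CDM-PSL hyperparameter t
    "N1": 10,                  # number of CG (conditional generation)
    "N2": 100,                 # number of UG (unconditional generation)
    "train_batch_size": 1024,  # diffusion-model training batch size m
    "learning_rate": 1e-3,     # learning rate gamma
    "epochs": 4000,            # training epochs
}

# Implied expensive-evaluation budget per run:
total_evals = (config["population_size"]
               + config["num_batches"] * config["eval_batch_size"])
print(total_evals)  # 100 initial + 20 batches x 5 = 200
```

This view makes the evaluation budget explicit, which is useful when comparing against other MOBO baselines run under the same limit.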