Trading Off Quality and Uncertainty Through Multi-Objective Optimisation in Batch Bayesian Optimisation

Authors: Chao Jiang, Miqing Li

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Through an extensive experimental study, we show the effectiveness of the proposed method in comparison with the state of the art in the area. We evaluate our method by comparing it with twelve well-established methods on 14 synthetic and practical problems. Further experimental studies, such as ablation and parameter sensitivity, have been carried out to help understand the proposed method.
Researcher Affiliation Academia Chao Jiang and Miqing Li*, School of Computer Science, University of Birmingham, Birmingham, United Kingdom. EMAIL, EMAIL
Pseudocode Yes Algorithm 1 delineates the procedure of our proposed framework POEE.
Open Source Code Yes The code, data, and supplementary material are available at https://github.com/ChaoJiang52/AAAI-POEE.
Open Datasets Yes We consider 14 well-known synthetic and real-world problems (e.g., robot pushing (Wang and Jegelka 2017)), following the practice in the related papers (De Ath et al. 2021; De Ath, Everson, and Fieldsend 2021). The detailed descriptions are provided in the supplementary material. The code, data, and supplementary material are available at https://github.com/ChaoJiang52/AAAI-POEE.
Dataset Splits Yes The models were initially trained on 2d initial solutions generated by Latin hypercube sampling (Stein 1987), with each optimisation run repeated 30 times with different initialisations. The same sets of initial batch solutions were used across all methods to enable statistical comparison. At each iteration in BO, before the selection of batch solutions, the hyperparameters of the Gaussian process were optimised by maximising the log likelihood via L-BFGS-B (Zhu et al. 1997). The methods were evaluated on the 14 synthetic and practical problems with batch sizes q ∈ {5, 10, 20} and a fixed budget of 300 function evaluations.
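As a hedged illustration of the initialisation protocol quoted above (2d Latin hypercube points per run, 30 independently initialised runs), the following sketch uses SciPy's quasi-Monte Carlo module. The problem dimension d and the seed are hypothetical choices for demonstration, not values from the paper:

```python
import numpy as np
from scipy.stats import qmc

d = 3           # hypothetical problem dimension (not specified in the quote)
n_init = 2 * d  # 2d initial solutions, as stated above
n_runs = 30     # each optimisation run is repeated 30 times

# One Latin hypercube design per run; successive draws from the same
# sampler differ, giving each run its own initialisation in [0, 1)^d.
sampler = qmc.LatinHypercube(d=d, seed=0)
initial_designs = [sampler.random(n=n_init) for _ in range(n_runs)]
```

These unit-cube designs would then be rescaled to each problem's bounds (e.g., with `qmc.scale`) and shared across all compared methods, per the protocol above.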
Hardware Specification No No specific hardware details (like GPU/CPU models or cloud instances) are mentioned in the paper for running experiments.
Software Dependencies No The paper mentions software like BoTorch, GPy, L-BFGS-B, NSGA-II, and CMA-ES but does not provide specific version numbers for these dependencies, which are necessary for reproducible descriptions.
Experiment Setup Yes A zero-mean Gaussian process surrogate model with a Matérn 5/2 kernel was used in all the experiments. The models were initially trained on 2d initial solutions generated by Latin hypercube sampling (Stein 1987)... At each iteration in BO, before the selection of batch solutions, the hyperparameters of the Gaussian process were optimised by maximising the log likelihood via L-BFGS-B... In the proposed POEE method, we assigned the weights for exploitation and exploration as wexploit = 0.4 and wexplore = 0.6, respectively. For the methods utilising NSGA-II, including POEE, ϵSPF, AEGiS, MACE, and Gupta, parameters were set to commonly accepted values: a population size of 100, crossover and mutation probabilities of 1.0 and 1/d respectively, and distribution indices of 20 for both crossover and mutation. For each solution selected in LP and PLAyBOOK, we followed the authors' guideline (Alvi et al. 2019) and uniformly sampled the acquisition function at 3000 locations, selecting the best solution after locally optimising (with L-BFGS-B) the best 5. For the other methods (excluding LP and PLAyBOOK), a maximum budget of 10000d acquisition function evaluations was used, as suggested by De Ath et al. (2020).
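The surrogate-model step quoted above (a zero-mean GP with a Matérn 5/2 kernel, hyperparameters set by maximising the log likelihood with L-BFGS-B) could be sketched roughly as follows. This is a minimal NumPy/SciPy illustration, not the paper's implementation (which relies on libraries such as GPy/BoTorch); the kernel parametrisation, noise term, bounds, and toy data are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def matern52(X1, X2, lengthscale, variance):
    """Matern 5/2 kernel: sigma^2 (1 + s + s^2/3) exp(-s), s = sqrt(5) r / l."""
    s = np.sqrt(5.0) * cdist(X1, X2) / lengthscale
    return variance * (1.0 + s + s**2 / 3.0) * np.exp(-s)

def neg_log_likelihood(theta, X, y):
    """Negative log marginal likelihood of a zero-mean GP (log-parameters)."""
    lengthscale, variance, noise = np.exp(theta)
    K = matern52(X, X, lengthscale, variance) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

# Hypothetical training data standing in for the evaluated batch solutions.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 2))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(20)

# Maximise the log likelihood via L-BFGS-B, as described in the quote;
# bounds keep the log-parameters in a numerically safe range (an assumption).
res = minimize(neg_log_likelihood, x0=np.zeros(3), args=(X, y),
               method="L-BFGS-B", bounds=[(-5.0, 5.0)] * 3)
```

In the protocol above, this hyperparameter refit would happen at every BO iteration before the batch of q solutions is selected.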