Output Space Entropy Search Framework for Multi-Objective Bayesian Optimization

Authors: Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa

JAIR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on diverse synthetic and real-world benchmarks show that our OSE search based algorithms improve over state-of-the-art methods in terms of both computational efficiency and accuracy of MOO solutions."
Researcher Affiliation | Academia | Syrine Belakaria (EMAIL), Aryan Deshwal (EMAIL), Janardhan Rao Doppa (EMAIL), School of Electrical Engineering and Computer Science, Washington State University, Pullman, Washington 99163, USA
Pseudocode | Yes | Algorithm 1 (MESMO), Algorithm 2 (MESMOC), Algorithm 3 (MF-OSEMO), Algorithm 4 (iMOCA), Algorithm 5 (Naive-CFMO)
Open Source Code | Yes | Open-source code for all methods: MESMO (github.com/belakaria/MESMO), MESMOC (github.com/belakaria/MESMOC), MF-OSEMO (github.com/belakaria/MF-OSEMO), and iMOCA (github.com/belakaria/iMOCA)
Open Datasets | Yes | "We construct two problems using a combination of benchmark functions for continuous-fidelity and single-objective optimization (Surjanovic & Bingham, 2020): Branin, Currin (with K=2, d=2) and Ackley, Rosen, Sphere (with K=3, d=5). To show the effectiveness of iMOCA in settings with discrete fidelities, we employ two well-known general MO benchmarks: QV (with K=2, d=8) and DTLZ1 (with K=6, d=5) (Habib, Singh, et al., 2019; Shu, Jiang, Zhou, Shao, Hu, & Meng, 2018)."
Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits; it states only that surrogate models are initialized with randomly selected points and discusses how hyper-parameters are estimated periodically.
Hardware Specification | Yes | "We performed all experiments on a machine with the following configuration: Intel i7-7700K CPU @ 4.20GHz with 8 cores and 32 GB memory."
Software Dependencies | No | The paper mentions the "Platypus" library and "Spearmint" but does not specify their version numbers; it refers to Python without stating a version.
Experiment Setup | Yes | The hyper-parameters are estimated after every five function evaluations (BO iterations) for MESMO and MESMOC. For iMOCA and MF-OSEMO, the number of evaluations is higher due to the low cost of lower fidelities, so the hyper-parameters are estimated every twenty iterations. During the computation of Pareto front samples, a cheap MO optimization problem over sampled functions is solved using NSGA-II, implemented with the Platypus library. For NSGA-II, the most important parameter is the number of function calls; the authors experimented with several values, observed no performance improvement from increasing it, and fixed it to 1500 for all experiments.
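For reference, two of the benchmark objectives named in the Open Datasets row, Branin and Currin, have standard closed forms (Surjanovic & Bingham, 2020). The sketch below implements them directly; the function names and the evaluation point are illustrative choices, not taken from the paper's code.

```python
import math

def branin(x1, x2):
    # Branin function; usual domain x1 in [-5, 10], x2 in [0, 15].
    a = 1.0
    b = 5.1 / (4 * math.pi ** 2)
    c = 5 / math.pi
    r, s, t = 6.0, 10.0, 1 / (8 * math.pi)
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1 - t) * math.cos(x1) + s

def currin(x1, x2):
    # Currin exponential function; usual domain [0, 1]^2.
    factor = 1 - math.exp(-1 / (2 * x2))
    num = 2300 * x1 ** 3 + 1900 * x1 ** 2 + 2092 * x1 + 60
    den = 100 * x1 ** 3 + 500 * x1 ** 2 + 4 * x1 + 20
    return factor * num / den
```

Together these give the K=2, d=2 bi-objective problem; Branin attains its global minimum of about 0.397887 at (pi, 2.275), which serves as a quick sanity check.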
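The Experiment Setup row describes solving a cheap MO problem over sampled functions with NSGA-II (via Platypus) to obtain Pareto front samples. As a minimal, dependency-free stand-in for that step, the sketch below extracts the nondominated subset of a set of objective vectors under minimization; it is a naive filter, not the paper's NSGA-II solve, and the function name is hypothetical.

```python
def pareto_front(points):
    """Return the nondominated subset of `points` (lists of objective
    values, minimization). A point p is dominated if some other point q
    is no worse in every objective and differs from p somewhere."""
    front = []
    for p in points:
        dominated = any(
            q != p and all(q[i] <= p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

For example, `pareto_front([[1, 2], [2, 1], [2, 2], [3, 3]])` keeps only `[1, 2]` and `[2, 1]`, since the other two points are weakly dominated.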
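The algorithms listed in the Pseudocode row (MESMO and its variants) all share a sample-then-acquire BO loop. The skeleton below shows only that control flow; the surrogate fit, Pareto-front sampling, and output-space-entropy acquisition are replaced by clearly labeled placeholders, and every name here is an illustrative assumption rather than the paper's implementation.

```python
import random

def mesmo_loop_sketch(objectives, bounds, n_init=5, budget=10, seed=0):
    """Control-flow skeleton of a MESMO-style BO loop (cf. Algorithm 1).
    Placeholder acquisition only: this does NOT compute output-space
    entropy or fit GP surrogates."""
    rng = random.Random(seed)
    sample = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    X = [sample() for _ in range(n_init)]           # initial design
    Y = [[f(x) for f in objectives] for x in X]     # expensive evaluations
    for _ in range(budget):
        # Placeholder acquisition: best scalarized candidate. MESMO would
        # instead fit GP surrogates, sample Pareto fronts from them, and
        # maximize the output-space entropy reduction.
        candidates = [sample() for _ in range(50)]
        x_next = min(candidates, key=lambda x: sum(f(x) for f in objectives))
        X.append(x_next)
        Y.append([f(x_next) for f in objectives])
    return X, Y
```

The loop evaluates the true objectives once per iteration, matching the BO budget accounting used when the paper reports results per function evaluation.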