Non-Myopic Multi-Objective Bayesian Optimization
Authors: Syrine Belakaria, Alaleh Ahmadianshalchi, Barbara E Engelhardt, Stefano Ermon, Janardhan Rao Doppa
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on multiple diverse real-world MO problems demonstrate that our non-myopic AFs substantially improve performance over the existing myopic AFs for MOBO. |
| Researcher Affiliation | Academia | Syrine Belakaria* (Stanford University), Alaleh Ahmadianshalchi* (Washington State University), Barbara E Engelhardt (Stanford University), Stefano Ermon (Stanford University), Janardhan Rao Doppa (Washington State University) |
| Pseudocode | Yes | Algorithm 1 NMMO Algorithm. Input: input space X; K black-box objective functions f_1(x), f_2(x), ..., f_K(x); maximum no. of iterations T; selected non-myopic method ∈ {NMMO-Joint, NMMO-Nested, BINOM}. 1: Initialize GP models GP_1, GP_2, ..., GP_K by evaluating at N_0 initial points 2: for each iteration t = 1 to T do 3: if method == NMMO-Nested then 4: Select x_t ∈ arg max_{x ∈ X} α_Nested(x \| D_t), where α_Nested is defined in Equation 14 |
| Open Source Code | Yes | Our code is provided at https://github.com/Alaleh/NMMO. |
| Open Datasets | Yes | Metal-organic Framework Design (d = 7, K = 2) (Kitagawa et al., 2014; Boyd et al., 2019): ... Reinforced Concrete Beam Design (d = 3, K = 2) (Amir & Hasegawa, 1989): ... Four-Bar Truss Design (d = 4, K = 2) (Cheng & Li, 1999): ... Gear Train Design (d = 4, K = 3) (Deb & Srinivasan, 2006; Tanabe & Ishibuchi, 2020): ... Welded Beam Design (d = 4, K = 3) (Coello & Montes, 2002; Rao, 2019): ... Disc Brake Design (d = 4, K = 3) (Ray & Liew, 2002; Tanabe & Ishibuchi, 2020): ... We also include the ZDT-3 (d = 9, K = 2) (Zitzler et al., 2000) problem as a synthetic MOO benchmark. ...on various DTLZ benchmark problems (Deb et al., 2005). |
| Dataset Splits | No | The paper does not provide traditional training/test/validation dataset splits as the experimental setup involves sequential acquisition for Bayesian optimization, where the dataset is built during the experiment. It mentions initialization with 'five points randomly generated from Sobol sequences' but not fixed splits of a larger dataset. |
| Hardware Specification | Yes | We utilized an NVIDIA Quadro RTX 6000 GPU with 24,576 MiB memory capacity. |
| Software Dependencies | No | We use implementations from the BoTorch Python package (Balandat et al., 2020) for all baselines, including JESMO, EHVI, and HVKG. The paper names its software stack but does not specify version numbers for Python or BoTorch. |
| Experiment Setup | Yes | All experiments were averaged over 15 runs, initialized with five points randomly generated from Sobol sequences, and run for 65 iterations. We model each of the multiple objectives using an independent GP with a Matérn 5/2 ARD kernel. We provide results including NMMO-Joint with lookahead horizon H ∈ {4, 8} and BINOM with H = 4. We report the results for NMMO-Nested with H = 2 in the Appendix Section A.5. ... we adjusted the requirement to a smaller set of 4 optimal Pareto sample points. |
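The Algorithm 1 excerpt quoted above (initialize GP models at N_0 points, then select one point per iteration by maximizing a non-myopic acquisition function) can be sketched as a control-flow skeleton. This is a minimal illustration, not the authors' implementation: the names `nmmo_loop` and `acq` are hypothetical, `random.sample` stands in for the paper's Sobol initialization, and `acq` stands in for the paper's α_Nested / α_Joint acquisition scores, which require GP posteriors.

```python
import random

def nmmo_loop(objectives, candidates, acq, n_init=5, T=10, seed=0):
    """Control-flow sketch of a sequential MOBO loop in the shape of Algorithm 1.

    objectives: list of K black-box functions f_k(x).
    candidates: finite pool of hashable inputs (stand-in for the input space X).
    acq:        acquisition score acq(x, data); a stand-in for the paper's
                non-myopic alpha_Nested / alpha_Joint, which need GP models.
    """
    rng = random.Random(seed)
    data = []
    # Step 1: evaluate n_init initial points
    # (the paper draws them from Sobol sequences; random.sample is a stand-in).
    for x in rng.sample(candidates, n_init):
        data.append((x, [f(x) for f in objectives]))
    # Steps 2 onward: T rounds of sequential acquisition.
    for _ in range(T):
        evaluated = {x for x, _ in data}
        pool = [x for x in candidates if x not in evaluated]
        # Select x_t = arg max over unevaluated candidates of the acquisition score.
        x_t = max(pool, key=lambda x: acq(x, data))
        data.append((x_t, [f(x_t) for f in objectives]))
    return data
```

In the paper's setting the GP models are refit after each evaluation and the acquisition looks ahead H steps; this skeleton only shows the outer initialize-then-acquire structure shared by NMMO-Joint, NMMO-Nested, and BINOM.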