Optimistic Optimization of Gaussian Process Samples

Authors: Julia Grosse, Cheng Zhang, Philipp Hennig

TMLR 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments (Section 6) corroborate that the proposed GP-OO can be significantly more time-efficient than classical BO methods such as GP-UCB and EI in settings with low function-evaluation costs. Figure 3 shows the simple regret min_n r_n and the average cumulative regret R_n/n as a function of the number of function evaluations n. Figure 4 displays the runtimes of GP-UCB, GP-OO, and TuRBO.
Researcher Affiliation Collaboration Julia Grosse EMAIL (Tübingen AI Center, University of Tübingen, Germany); Cheng Zhang EMAIL (Microsoft Research, Cambridge, United Kingdom); Philipp Hennig EMAIL (Tübingen AI Center, University of Tübingen, Germany)
Pseudocode Yes Algorithm 1 The optimistic optimization principle.
1: procedure OptimisticOptimization(f, δ)
2:   priority-queue ← [(0, 1)]  // root node
3:   for j = 1 to n do
4:     Select node (t, i) with maximum U(t,i) = f(x(t,i)) + δ(t) from the priority-queue
5:     Observe f(x(t+1,2i−1)) and f(x(t+1,2i)) for the two children of (t, i).
6:     Calculate child utilities U(t+1,2i−1) = f(x(t+1,2i−1)) + δ(t+1) and U(t+1,2i) = f(x(t+1,2i)) + δ(t+1).
7:     Add children (t+1, 2i−1) and (t+1, 2i) to the priority-queue based on their utilities.
8:   end for
9: end procedure
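Algorithm 1 can be sketched in Python as follows. This is a minimal 1-D illustration, not the authors' implementation: it assumes cells split at their midpoints on [lower, upper], uses a max-heap of utilities via negation, and takes the cell midpoint as the representative point x(t, i); these choices are illustrative assumptions.

```python
import heapq


def optimistic_optimization(f, delta, n, lower=0.0, upper=1.0):
    """Sketch of the optimistic optimization principle on a 1-D interval.

    f      -- noiseless objective to maximize
    delta  -- delta(t) bounds how much f can vary within a depth-t cell
    n      -- number of node expansions

    Cell (t, i) covers a width-(upper-lower)/2**t sub-interval; its
    representative point is the cell midpoint (an illustrative choice).
    """
    def midpoint(t, i):
        width = (upper - lower) / 2 ** t
        return lower + (i + 0.5) * width

    best = f(midpoint(0, 0))
    # Max-heap via negated utilities; start with the root cell (0, 0).
    heap = [(-(best + delta(0)), 0, 0)]
    for _ in range(n):
        _, t, i = heapq.heappop(heap)  # node with maximum utility U(t, i)
        for child in (2 * i, 2 * i + 1):  # split the cell into two children
            fx = f(midpoint(t + 1, child))
            best = max(best, fx)
            heapq.heappush(heap, (-(fx + delta(t + 1)), t + 1, child))
    return best
```

With a geometrically decaying bound such as delta(t) = 2**-t, the search concentrates evaluations around the maximizer while the optimistic bonus keeps unexplored cells competitive.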
Open Source Code No For TuRBO, we build upon the published code accessible at https://github.com/uber-research/TuRBO. However, we turn off the optimization of the lengthscale and noise scale and instead provide it with the ground-truth values, for a fair comparison with the other methods, which also have this information. Also, we changed TuRBO's default batch size from 10 to 2 to match GP-OO's. Other than that, we use its default settings. For DiRect, we use the Python implementation available at https://github.com/swyoon/DiRect/blob/master/LICENSE.
Open Datasets Yes Benchmark functions from Surjanovic & Bingham (2022). S. Surjanovic and D. Bingham. Virtual library of simulation experiments: Test functions and datasets. Retrieved May 16, 2022, from http://www.sfu.ca/~ssurjano, 2022.
Dataset Splits Yes For each run, we sampled a sub-domain by choosing uniformly at random new lower and upper boundaries for the intervals along each dimension, such that the new boundaries lie in between the previous ones and the location of the minimum. In this way, the location of the minimum stays the same as in the original domain.
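The sub-domain sampling described above can be sketched as follows. This assumes the minimizer location x_star is known per dimension (as it is for the benchmark functions); the function name and signature are illustrative, not from the paper.

```python
import random


def sample_subdomain(lower, upper, x_star, rng=None):
    """Draw a random sub-domain that still contains the minimizer.

    For each dimension, the new lower bound is drawn uniformly between the
    old lower bound and the minimizer coordinate, and the new upper bound
    uniformly between the minimizer coordinate and the old upper bound, so
    x_star always remains inside the sampled domain.
    """
    rng = rng or random.Random(0)
    new_lower = [rng.uniform(lo, x) for lo, x in zip(lower, x_star)]
    new_upper = [rng.uniform(x, up) for x, up in zip(x_star, upper)]
    return new_lower, new_upper
```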
Hardware Specification Yes All experiments were performed on a desktop machine. The experiments were implemented in Python 3.9.1 and run on a machine with macOS 12.3.1, a 4 GHz Quad-Core Intel Core i7 CPU, and 32 GB RAM, or, in the case of the experiment in Section 6.1, on a machine with macOS 12.2.1, a 2.7 GHz Dual-Core Intel Core i5 CPU, and 16 GB RAM.
Software Dependencies Yes The experiments were implemented in Python 3.9.1.
Experiment Setup Yes For all experiments, the noise level for GP-UCB is set to a very small constant λ = 0.0005, since we assume noiseless observations. For GP-OO, we use βn = 2 log(2|X̂n|/ϵ) with a discretization of 1/l points along each dimension. For additional heuristic choices of β, as well as constant choices of βn ∈ {0.1, 1, 10, 100}, see the Appendix. For GP-UCB, we use βn = 2 log(2|X̂|n²π²/(6ϵ)) according to theory, with a grid search over the size of the discretization |X̂| with values in {1, 10, 100, 1000}, as well as constant values of β in {1, 10, 100, 1000}. For EI, we did a grid search over the jitter parameter with values in {0.0001, 0.001, 0.01, 0.1, 1}. GP-OO and GP-UCB require the specification of a confidence level ϵ, which is set to 0.05 for both methods.
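The two β schedules above can be written out directly. This is a minimal sketch under the stated formulas; the function names are illustrative, and |X̂n| (the discretization size at step n) and |X̂| (the fixed grid size) are passed in as plain integers.

```python
import math


def beta_gp_oo(disc_size_n, eps=0.05):
    """GP-OO schedule: beta_n = 2 log(2 |X_hat_n| / eps),

    where disc_size_n is the size of the discretization at step n
    (1/l points along each dimension).
    """
    return 2.0 * math.log(2.0 * disc_size_n / eps)


def beta_gp_ucb(grid_size, n, eps=0.05):
    """GP-UCB schedule: beta_n = 2 log(2 |X_hat| n^2 pi^2 / (6 eps)),

    where grid_size is the discretization size |X_hat| chosen by grid
    search and n is the iteration index.
    """
    return 2.0 * math.log(
        2.0 * grid_size * n ** 2 * math.pi ** 2 / (6.0 * eps)
    )
```

Both schedules grow only logarithmically: shrinking ϵ or enlarging the discretization widens the confidence bands slowly, which is what makes the theoretical choices usable in practice.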