Optimizing Posterior Samples for Bayesian Optimization via Rootfinding
Authors: Taiwo Adebiyi, Bach Do, Ruda Zhang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We give empirical results across a diverse set of problems with input dimensions ranging from 2 to 16, establishing the effectiveness of our optimization strategy. Although our algorithm is proposed for the inner-loop optimization of posterior samples, perhaps surprisingly, we see significant improvement in outer-loop optimization performance, which often allows acquisition functions based on posterior samples to converge faster than other common acquisition functions. We assess the performance of TS-roots in optimizing benchmark functions. We then compare the quality of solutions to the inner-loop optimization of GP-TS acquisition functions obtained from our proposed method, a gradient-based multistart optimizer with uniformly random starting points, and a genetic algorithm. |
| Researcher Affiliation | Academia | Taiwo A. Adebiyi (University of Houston), Bach Do (University of Houston), Ruda Zhang (University of Houston) |
| Pseudocode | Yes | Algorithm 1 TS-roots: Optimizing posterior samples via rootfinding; Algorithm 2 maxk sum: Combinations with the k largest sums; Algorithm 3 roots: Univariate global rootfinding on an interval; Algorithm 4 minsort: Best local minima of a separable function; Algorithm 5 Decoupled sampling of Gaussian process posterior |
| Open Source Code | Yes | Our implementation of the TS-roots algorithm is available at https://github.com/UQUH/TSRoots. |
| Open Datasets | Yes | We test the empirical performance of TS-roots on challenging minimization problems of five benchmark functions: the 2D Schwefel, 4D Rosenbrock, 10D Levy, 16D Ackley, and 16D Powell functions (Surjanovic & Bingham, 2013). The analytical expressions for these functions and their global minimum are given in Appendix F. |
| Dataset Splits | No | The input observations are randomly generated using Latin hypercube sampling (Owen, 1992) within [-1, 1]^d, where d is the number of input variables. The number of initial observations is 10d for all problems. |
| Hardware Specification | Yes | We carry out all experiments, except those for inner-loop optimization, using a designated cluster at our host institution. This cluster hosts 9984 Intel CPU cores and 327680 Nvidia GPU cores integrated within 188 compute and 20 GPU nodes. The inner-loop optimization is implemented on a PC with an Intel Core i7-1165G7 @ 2.80 GHz and 16 GB memory. |
| Software Dependencies | No | For the univariate global rootfinding via Chebyshev polynomials, we use MATLAB's Chebfun package (Battles & Trefethen, 2004) and its corresponding implementation in Python, called chebpy (Richardson, 2016). |
| Experiment Setup | Yes | We set n_o = 5000, n_e = n_x = 1000, and α = 3 (buffer coefficient). The number of initial observations is 10d for all problems. The standard deviation of observation noise σ_n = 10^-6 is applied for standardized output observations. The number of BO iterations for the 2D Schwefel and 4D Rosenbrock functions is 200, while that for the 10D Levy, 16D Ackley, and 16D Powell functions is 800. |
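The univariate global rootfinding step that the paper delegates to Chebfun/chebpy can be illustrated in plain NumPy: approximate a smooth function by a Chebyshev interpolant on the interval, then recover all real roots in that interval from the interpolant's roots. This is a minimal sketch, not the paper's TS-roots implementation; the function `cheb_roots`, the fixed interpolation degree, and the coefficient-trimming tolerance are illustrative choices, whereas Chebfun/chebpy select the degree adaptively.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def cheb_roots(f, a, b, deg=64):
    """Approximate all real roots of a smooth f on [a, b].

    Interpolates f by a Chebyshev polynomial of degree `deg` on [a, b],
    trims trailing noise-level coefficients (as Chebfun does), then keeps
    the polynomial's roots that are real and lie inside the interval.
    """
    p = Chebyshev.interpolate(f, deg, domain=[a, b]).trim(1e-13)
    r = p.roots()  # roots of the interpolant, possibly complex
    real_in_domain = (np.abs(r.imag) < 1e-10) & (r.real >= a) & (r.real <= b)
    return np.sort(r.real[real_in_domain])

# Example: cos(x) on [0, 10] has roots at pi/2, 3*pi/2, 5*pi/2.
roots = cheb_roots(np.cos, 0.0, 10.0)
```

In TS-roots this primitive is applied to the derivative of each univariate component of the (separable) prior sample path, so that every root is a candidate local extremum that can then be ranked and combined.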