On Safety in Safe Bayesian Optimization

Authors: Christian Fiedler, Johanna Menn, Lukas Kreisköther, Sebastian Trimpe

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show empirically that this algorithm is not only safe, but also outperforms the state-of-the-art on several function classes. (...) First, we demonstrate empirically that a simple heuristic like setting βt ≈ 2 can lead to a significant proportion of bound violations. (...) These experiments illustrate that using heuristics in SafeOpt can be highly problematic.
Researcher Affiliation | Academia | Christian Fiedler (EMAIL), Institute for Data Science in Mechanical Engineering (DSME), RWTH Aachen University; Johanna Menn (EMAIL), Institute for Data Science in Mechanical Engineering (DSME), RWTH Aachen University; Lukas Kreisköther; Sebastian Trimpe (EMAIL), Institute for Data Science in Mechanical Engineering (DSME), RWTH Aachen University
Pseudocode | Yes | A formal description of the algorithm using pseudocode is provided by Algorithm 1.
Open Source Code | Yes | Additional background, discussion and experimental details can be found in the appendix, and the source code for all experiments is available in a GitHub repository (https://github.com/Data-Science-in-Mechanical-Engineering/LoSBO).
Open Datasets | No | As test functions, we use the standard benchmarks Camelback (2d) and Hartmann (6d). Similar to (Kirschner et al., 2019a), we also use a Gaussian function f(x) = exp(−4‖x‖₂²) in ten dimensions (10d) as a benchmark.
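The 10d Gaussian benchmark quoted above is simple enough to state directly in code. A minimal sketch (the function name is hypothetical; only the formula f(x) = exp(−4‖x‖₂²) comes from the paper):

```python
import numpy as np

def gaussian_benchmark(x):
    """Gaussian test function f(x) = exp(-4 * ||x||_2^2), used in 10d."""
    x = np.asarray(x, dtype=float)
    return np.exp(-4.0 * np.sum(x ** 2))

# The function attains its maximum value 1 at the origin.
print(gaussian_benchmark(np.zeros(10)))  # -> 1.0
```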
Dataset Splits | No | For the first experiment, in order to generate the data sets, 100 inputs are uniformly sampled from [0, 1], the corresponding RKHS function is evaluated on these inputs, and finally i.i.d. normal noise with variance 0.01 is added to the function values.
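The data-generation procedure quoted above can be sketched as follows. The ground-truth RKHS functions used in the paper are specific to its construction; here a finite squared-exponential kernel expansion with arbitrary centers and coefficients stands in for "the corresponding RKHS function" (that stand-in, and the lengthscale 0.1, are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative RKHS member: f(x) = sum_i a_i * k(x, c_i) with an SE kernel.
# (Stand-in for the paper's ground-truth function; not its actual construction.)
def se_kernel(x, c, lengthscale=0.1):
    return np.exp(-0.5 * (x - c) ** 2 / lengthscale ** 2)

centers = rng.uniform(0.0, 1.0, size=5)
coeffs = rng.normal(size=5)

def f(x):
    return sum(a * se_kernel(x, c) for a, c in zip(coeffs, centers))

# Data generation as described: 100 inputs uniform on [0, 1], evaluate f,
# then add i.i.d. normal noise with variance 0.01 (std 0.1).
X = rng.uniform(0.0, 1.0, size=100)
y = np.array([f(x) for x in X]) + rng.normal(0.0, np.sqrt(0.01), size=100)
```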
Hardware Specification | No | Computations were performed with computing resources granted by RWTH Aachen University under project rwth1459.
Software Dependencies | No | For the numerical experiments described in the next section, we have chosen BoTorch (Balandat et al., 2020), which allows an easy parallel implementation of the acquisition function optimization.
Experiment Setup | Yes | For simplicity, independent additive noise, uniformly sampled from [−Bϵ, Bϵ], is used in all of the following experiments. As is well-known, bounded random variables are subgaussian, and we can set R = Bϵ in Real-β-SafeOpt. Additionally, we choose δ = 0.01 and the true RKHS norm as the RKHS norm upper bound in Real-β-SafeOpt, unless noted otherwise. We further set the nominal noise variance equal to R in both LoSBO and Real-β-SafeOpt. Following the discussion in Section B, we choose E = 2Bϵ in LoSBO. Finally, we must specify a strategy to compute βt in LoSBO. Recall from Section 6.2 that these scaling factors are now proper tuning parameters. In all of the following experiments, we use β = 2 in LoSBO, as this is a common choice in the literature on SafeOpt and GP-UCB. Choosing such a simple rule also simplifies the experimental evaluation, as no additional tuning parameters or further algorithmic choices are introduced. Unless noted otherwise, in all of the following experiments Bϵ = 0.01 is used.
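The constants quoted in this setup (Bϵ = 0.01, R = Bϵ, δ = 0.01, E = 2Bϵ, β = 2) translate directly into code. A minimal sketch under those values; the function names are hypothetical, and the confidence-bound formula μ ± β·σ is the standard GP-UCB-style interval implied by a fixed scaling factor β, not the paper's full LoSBO implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

B_eps = 0.01      # noise bound B_epsilon (paper's default)
R = B_eps         # bounded noise is subgaussian, so R = B_eps in Real-beta-SafeOpt
delta = 0.01      # confidence level used in Real-beta-SafeOpt
E = 2 * B_eps     # enlargement parameter chosen for LoSBO (Section B)
beta = 2.0        # fixed scaling factor beta = 2 used in LoSBO

# Independent additive noise, uniformly sampled from [-B_eps, B_eps]:
def noisy_eval(f, x):
    return f(x) + rng.uniform(-B_eps, B_eps)

# Confidence bounds mu -/+ beta * sigma from a GP posterior, with beta = 2:
def confidence_bounds(mu, sigma):
    return mu - beta * sigma, mu + beta * sigma

lcb, ucb = confidence_bounds(mu=0.5, sigma=0.1)  # interval [0.5 - 0.2, 0.5 + 0.2]
```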