Event-Triggered Time-Varying Bayesian Optimization
Authors: Paul Brunzema, Alexander von Rohr, Friedrich Solowjow, Sebastian Trimpe
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We derive regret bounds for adaptive resets without exact prior knowledge of the temporal changes and show in numerical experiments that ET-GP-UCB outperforms competing GP-UCB algorithms on both synthetic and real-world data. The results demonstrate that ET-GP-UCB is readily applicable without extensive hyperparameter tuning. |
| Researcher Affiliation | Academia | Paul Brunzema (EMAIL), Institute for Data Science in Mechanical Engineering, RWTH Aachen University; Alexander von Rohr (EMAIL), Institute for Data Science in Mechanical Engineering, RWTH Aachen University; Friedrich Solowjow (EMAIL), Institute for Data Science in Mechanical Engineering, RWTH Aachen University; Sebastian Trimpe (EMAIL), Institute for Data Science in Mechanical Engineering, RWTH Aachen University |
| Pseudocode | Yes | Algorithm 1 Event-triggered GP-UCB (ET-GP-UCB). 1: Define: GP(0, k), X ⊂ ℝ^d, δB ∈ (0, 1), D1 = ∅, lower reset bound N̲, upper reset bound N̄, tr = 1; 2: for t = 1, 2, …, T do; 3: Train GP model with the current data set Dt; 4: Choose βt (e.g., according to Theorem 1); 5: Select xt = arg max_{x ∈ X} µ_{Dt}(x) + √βt σ_{Dt}(x) ▷ time-invariant posterior using (4); 6: Sample next observation yt = ft(xt) + wt; 7: γreset ← EventTrigger(Dt, (yt, xt), δB) ▷ evaluate (10); 8: if (γreset and tr ∈ [N̲, N̄]) or tr = N̄ then ▷ reset only if within reset window; 9: Reset dataset Dt+1 = {(yt, xt)} and set tr = 1; 10: else; 11: Update dataset Dt+1 = Dt ∪ {(yt, xt)} and set tr = tr + 1 |
| Open Source Code | Yes | The code will be published upon acceptance and is also part of the supplementary material of the submission. |
| Open Datasets | Yes | To benchmark the algorithms on real-world data, we use the temperature dataset collected from 46 sensors deployed at Intel Research Berkeley over eight days at 10-minute intervals (footnote 3). This dataset was also used as a benchmark in previous work on GP-UCB and TVBO (Srinivas et al., 2010; Krause & Ong, 2011; Bogunovic et al., 2016). Footnote 3: We thank the researchers at Intel for the publicly available data set under: https://db.csail.mit.edu/labdata/labdata.html. |
| Dataset Splits | Yes | For each experiment, two test days are selected, while the preceding days serve as the training set. |
| Hardware Specification | Yes | All experiments in Section 6.2 were conducted on a 2021 MacBook Pro with an Apple M1 Pro chip and 16GB RAM. The Monte Carlo simulations in Section 5 were conducted over 3 days on a compute cluster. |
| Software Dependencies | No | The paper mentions software like BoTorch (Balandat et al., 2020) and GPyTorch (Gardner et al., 2018), and refers to a "Matlab code base", but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | As in (Bogunovic et al., 2016) and (Srinivas et al., 2010), we utilize a logarithmic scaling of βt as βt = O(d ln(t)), where βt = c1 ln(c2 t). This approximates βt in Theorem 1 and allows for a direct comparison to Bogunovic et al. (2016), as they suggest setting c1 = 0.8 and c2 = 4 for a suitable exploration-exploitation trade-off. All experiments in this section are conducted using BoTorch (Balandat et al., 2020) and GPyTorch (Gardner et al., 2018), and full details are in Appendix A. All results show the median and interquartile performance. In the following, we indicate the direction of better performance with upward (↑) and downward (↓) arrows. |
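The pseudocode and βt schedule quoted above can be sketched as a minimal, self-contained NumPy loop. Everything here is illustrative: the RBF kernel and its hyperparameters, the grid search over [0, 1], the reset-window bounds, and the optional `trigger` callback are assumptions for the sketch — the paper's actual event trigger (Eq. 10) and its BoTorch/GPyTorch implementation are not reproduced, and the √βt acquisition form is the standard GP-UCB rule rather than a quote from the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Exact GP posterior mean and std at test points Xs given data (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf_kernel(Xs, Xs)) - (v ** 2).sum(0), 1e-12, None)
    return Ks.T @ alpha, np.sqrt(var)

def et_gp_ucb(f, T=50, n_grid=200, c1=0.8, c2=4.0,
              n_lo=5, n_hi=40, trigger=None, rng=None):
    """Sketch of Algorithm 1: GP-UCB with event-triggered dataset resets."""
    rng = np.random.default_rng(rng)
    grid = np.linspace(0.0, 1.0, n_grid)[:, None]  # 1-D candidate set
    X, y, t_r, queries = None, None, 1, []
    for t in range(1, T + 1):
        beta = c1 * np.log(c2 * t)          # logarithmic UCB scaling
        if X is None:                       # no data yet: pick at random
            x = grid[rng.integers(n_grid)]
        else:                               # UCB acquisition on the grid
            mu, sd = gp_posterior(X, y, grid)
            x = grid[np.argmax(mu + np.sqrt(beta) * sd)]
        obs = f(x, t) + 0.01 * rng.standard_normal()   # noisy evaluation
        # Placeholder for the paper's Bayesian event trigger (Eq. 10).
        fire = trigger is not None and X is not None and trigger(X, y, x, obs)
        if (fire and n_lo <= t_r <= n_hi) or t_r == n_hi:
            X, y, t_r = x[None, :], np.array([obs]), 1  # reset dataset
        else:                               # keep accumulating data
            X = x[None, :] if X is None else np.vstack([X, x[None, :]])
            y = np.array([obs]) if y is None else np.append(y, obs)
            t_r += 1
        queries.append(float(x[0]))
    return queries
```

A `trigger` of `None` reduces the sketch to GP-UCB with periodic resets every `n_hi` steps; plugging in a test on the prediction error would mimic the event-triggered behaviour the paper describes.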