No-Regret Bayesian Optimization with Unknown Hyperparameters

Authors: Felix Berkenkamp, Angela P. Schoellig, Andreas Krause

JMLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our method on several benchmark problems." Keywords: Bayesian optimization, unknown hyperparameters, reproducing kernel Hilbert space (RKHS), bandits, no regret.
Researcher Affiliation | Academia | Felix Berkenkamp (EMAIL), Department of Computer Science, ETH Zurich, Zurich, Switzerland; Angela P. Schoellig (EMAIL), Institute for Aerospace Studies, University of Toronto, Toronto, Canada; Andreas Krause (EMAIL), Department of Computer Science, ETH Zurich, Zurich, Switzerland.
Pseudocode | Yes | Algorithm 1: Adaptive GP-UCB (A-GP-UCB).
Open Source Code | No | The paper contains no statement or link indicating that open-source code for the described method is available.
Open Datasets | Yes | "Lastly, we use our method to tune a logistic regression problem on the MNIST data set (Le Cun, 1998)."
Dataset Splits | No | The paper tunes four training inputs of a logistic regression problem on MNIST, but gives no details on how the dataset was split into training, validation, and test sets (e.g., percentages, sample counts, or predefined splits).
Hardware Specification | No | The paper gives no details about the hardware (e.g., CPU or GPU models, memory, or computing clusters) used to run the experiments.
Software Dependencies | No | The paper discusses Gaussian processes and related machine-learning methods, but does not list version numbers for any software libraries, programming languages, or tools used in the implementation.
Experiment Setup | Yes | "Unless otherwise specified, the initial lengthscales are set to θ0 = 1, the initial norm bound is B0 = 2, the confidence bounds hold with probability at least δ = 0.9, and the trade-off factor between b(t) and g(t) is λ = 0.1."
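The quoted defaults (θ0 = 1, B0 = 2, δ = 0.9, λ = 0.1) and the A-GP-UCB pseudocode entry can be made concrete with a minimal sketch of adaptive GP-UCB on a 1-D grid. This is a hypothetical reconstruction, not the authors' implementation: the scaling schedules g(t) and b(t) (slow logarithmic growth traded off by λ) and the confidence-width β_t below are simplifying assumptions, whereas the paper chooses these quantities so that the algorithm provably achieves no regret.

```python
import numpy as np


def rbf(X1, X2, lengthscale):
    """Squared-exponential kernel on 1-D inputs (unit prior variance)."""
    d = (X1[:, None] - X2[None, :]) / lengthscale
    return np.exp(-0.5 * d ** 2)


def gp_posterior(X, y, Xs, lengthscale, noise=1e-4):
    """GP posterior mean and standard deviation at test points Xs."""
    K = rbf(X, X, lengthscale) + noise * np.eye(len(X))
    Ks = rbf(Xs, X, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - (v ** 2).sum(axis=0)  # prior variance is 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))


def a_gp_ucb(f, grid, T, theta0=1.0, B0=2.0, delta=0.9, lam=0.1):
    """Sketch of adaptive GP-UCB: hyperpriors are relaxed over time.

    g(t) shrinks the lengthscale and b(t) inflates the RKHS norm bound,
    with lam trading off the two; the log(1 + t) schedules are an
    illustrative assumption, not the paper's exact choice.
    """
    X = [grid[len(grid) // 2]]
    y = [f(X[0])]
    for t in range(1, T):
        g_t = 1.0 + (1.0 - lam) * np.log(1.0 + t)
        b_t = 1.0 + lam * np.log(1.0 + t)
        theta_t = theta0 / g_t          # more wiggly functions allowed
        B_t = B0 * b_t                  # larger-norm functions allowed
        mu, sd = gp_posterior(np.array(X), np.array(y), grid, theta_t)
        # Confidence width: norm bound plus a union bound over the grid;
        # bounds hold with probability at least delta (failure 1 - delta).
        beta_t = B_t + np.sqrt(2.0 * np.log(len(grid) * t ** 2 / (1.0 - delta)))
        x_next = grid[np.argmax(mu + beta_t * sd)]
        X.append(x_next)
        y.append(f(x_next))
    return np.array(X), np.array(y)
```

For example, maximizing f(x) = -(x - 0.3)² on a grid of [0, 1] concentrates queries near the optimum once the initial exploration phase has covered the domain.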