Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Multi-fidelity Gaussian Process Bandit Optimisation
Authors: Kirthevasan Kandasamy, Gautam Dasarathy, Junier Oliva, Jeff Schneider, Barnabás Póczos
JAIR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments. |
| Researcher Affiliation | Academia | Kirthevasan Kandasamy (EMAIL), University of California, Berkeley, CA, USA; Gautam Dasarathy (EMAIL), Arizona State University, AZ, USA; Junier Oliva (EMAIL), University of North Carolina at Chapel Hill, NC, USA; Jeff Schneider (EMAIL) and Barnabás Póczos (EMAIL), Carnegie Mellon University, PA, USA |
| Pseudocode | Yes | Algorithm 1 GP-UCB (Srinivas et al., 2010) ... Algorithm 2 MF-GP-UCB |
| Open Source Code | Yes | Our matlab implementation and experiments are available at github.com/kirthevasank/mf-gp-ucb. |
| Open Datasets | Yes | Classification using SVMs (SVM): We trained a Support vector classifier on the magic gamma dataset... Regression using additive kernels (SALSA): ... on the 4-dimensional coal power plant dataset... Viola & Jones face detection (V&J): ... used 3000 images from the Viola and Jones face database... Type Ia Supernovae (Supernova): We use Type Ia supernovae data from Davis et al. (2007) |
| Dataset Splits | Yes | Each query to f(m) required 5-fold cross validation on the respective training sets... This experiment used M = 3 and 2000, 4000, 8000 points at each fidelity respectively... the second fidelity used 3000 images from the Viola and Jones face database and the first used just 300. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or processor types used for experiments. |
| Software Dependencies | No | The paper mentions 'Our matlab implementation' but does not specify a version number for Matlab or any other key software libraries or dependencies used in the implementation. |
| Experiment Setup | Yes | Initialisation: Following recommendations in Brochu, Cora, and de Freitas (2010), all GP methods were initialised with uniform random queries using an initialisation capital Λ0... Kernel: In all our experiments, we used the SE kernel... update the kernel every 25 iterations... Choice of βt: we use βt = 0.2d log(2t)... Maximising ϕt: We used the DiRect algorithm (Jones et al., 1993)... Choice of ζ(m)s: We initialise ζ to a small value, 1% of the range of initial queries... Choice of γ(m)s: All γ(m) values were initialised to 1% of the range of initial queries. |
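The setup quoted above pins down two pieces of the method concretely: the confidence coefficient βt = 0.2·d·log(2t) used in the upper confidence bound, and the MF-GP-UCB rule of querying the lowest fidelity whose posterior uncertainty still exceeds its threshold γ(m). A minimal Python sketch of those two rules is below; this is an illustration, not the authors' Matlab code, and the GP posterior mean/std are assumed to come from elsewhere.

```python
import math

def beta_t(t, d):
    """Confidence coefficient from the paper's setup: beta_t = 0.2 * d * log(2t)."""
    return 0.2 * d * math.log(2 * t)

def ucb(mu, sigma, t, d):
    """Upper confidence bound phi_t(x) = mu(x) + sqrt(beta_t) * sigma(x),
    where mu and sigma are the GP posterior mean and std at x (assumed given)."""
    return mu + math.sqrt(beta_t(t, d)) * sigma

def choose_fidelity(sigmas, gammas, M):
    """MF-GP-UCB fidelity rule (sketch): query the lowest fidelity m whose
    posterior std at the candidate point still exceeds its threshold gamma^(m);
    if all lower fidelities are resolved, query the highest fidelity M."""
    for m in range(M - 1):
        if sigmas[m] >= gammas[m]:
            return m
    return M - 1
```

The thresholds `gammas` here correspond to the γ(m) values the paper initialises to 1% of the range of initial queries; in the full algorithm they are adapted over time, which this sketch omits.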