Regret Bounds for Noise-Free Cascaded Kernelized Bandits

Authors: Zihan Li, Jonathan Scarlett

TMLR 2024

Reproducibility assessment (variable, result, and LLM response for each item):
Research Type: Experimental
"While the goals of this paper are essentially entirely theoretical, we show in Appendix L that (slight variations of) our algorithms can be effective in at least simple experimental scenarios." Appendix L ("Experimental Results") confirms this: "In this appendix, we provide some simple experimental results in order to demonstrate that (slight variations of) our algorithms can be effective in practice."
Researcher Affiliation: Academia
Both authors, Zihan Li and Jonathan Scarlett, are affiliated with the National University of Singapore.
Pseudocode: Yes
The paper provides pseudocode for Algorithm 1, GPN-UCB (Gaussian Process Network Upper Confidence Bound), and Algorithm 2, a non-adaptive sampling based method.
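The paper's pseudocode is not reproduced in this report. As a rough, hypothetical illustration of the UCB family that GPN-UCB belongs to, a single-GP upper-confidence-bound selection step might look like the sketch below. This is not the paper's Algorithm 1, which propagates confidence bounds through the cascaded function network; the function name and the confidence parameter `beta` are illustrative choices, not from the paper.

```python
import numpy as np

# Illustrative single-GP UCB step (NOT the paper's GPN-UCB): given the
# posterior mean and variance at a set of candidate points, query the
# candidate whose upper confidence bound mu + beta * sigma is largest.
def ucb_select(mu, var, beta=2.0):
    """Return the index of the candidate maximizing mu + beta * sqrt(var)."""
    return int(np.argmax(mu + beta * np.sqrt(np.maximum(var, 0.0))))

mu = np.array([0.1, 0.3, 0.2])     # posterior means at 3 candidates
var = np.array([0.01, 0.0, 0.25])  # posterior variances at 3 candidates
ucb_select(mu, var)  # -> 2, since 0.2 + 2*sqrt(0.25) = 1.2 dominates
```

The key design point of UCB-style methods is that the variance term directs queries toward poorly explored regions, while the mean term exploits regions already known to score well.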
Open Source Code: No
The paper contains no explicit statement about releasing source code and no links to code repositories. It notes that its goals are theoretical, and although simple experimental scenarios appear in Appendix L, no code for them is provided.
Open Datasets: No
The paper does not use any publicly available datasets. For its simple experimental scenarios in Appendix L, it uses a synthetic function network: "we choose a simple function network g(x) = f^(2)(f^(1)(x)) where f^(1)(x) = sin(x) and f^(2)(x) = 0.5x".
Dataset Splits: No
The paper uses no external datasets, so the concept of dataset splits does not apply. The experiments in Appendix L query synthetic functions at uniformly distributed points rather than using pre-split data.
Hardware Specification: No
The paper provides no details about the hardware used for its experiments, such as CPU or GPU models or memory specifications.
Software Dependencies: No
The paper lists no software dependencies with version numbers. Although Appendix L mentions a squared exponential kernel and a GP prior, it names no libraries or frameworks, let alone their versions.
Experiment Setup: Yes
Appendix L describes the experimental setup: "We query g at 100 points uniformly distributed in [0, 1]. For each f^(i), we set the kernel to be the squared exponential kernel with bandwidth = 0.1 and use a constant GP prior with prior mean 0 and prior variance 1."
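The described setup can be sketched with plain numpy. The function network g(x) = f^(2)(f^(1)(x)) with f^(1)(x) = sin(x), f^(2)(x) = 0.5x, the 100 uniform query points in [0, 1], the squared exponential kernel with bandwidth 0.1, and the zero-mean unit-variance GP prior all come from the paper's Appendix L; the hand-rolled GP posterior computation, the jitter value, and the random seed are illustrative choices, not details from the paper.

```python
import numpy as np

# Function network from Appendix L: g(x) = f2(f1(x)).
def f1(x): return np.sin(x)
def f2(x): return 0.5 * x
def g(x):  return f2(f1(x))

# Query g at 100 points uniformly distributed in [0, 1] (seed is arbitrary).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=100)
y = g(X)  # noise-free observations, matching the paper's setting

# Squared exponential kernel with bandwidth (lengthscale) 0.1; its value of
# 1 on the diagonal matches the stated prior variance of 1.
def se_kernel(a, b, bandwidth=0.1):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * bandwidth ** 2))

# Standard noise-free GP regression with prior mean 0; a small jitter on
# the diagonal keeps the Cholesky factorization numerically stable.
K = se_kernel(X, X) + 1e-6 * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# Posterior mean and variance on a test grid in [0, 1].
X_test = np.linspace(0.0, 1.0, 50)
K_star = se_kernel(X_test, X)
mu = K_star @ alpha
v = np.linalg.solve(L, K_star.T)
var = 1.0 - np.sum(v ** 2, axis=0)
```

With 100 noise-free observations at this lengthscale, the posterior mean interpolates g closely across [0, 1], and the posterior variance collapses near the queried points.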