Conservative Contextual Bandits: Beyond Linear Representations

Authors: Rohan Deb, Mohammad Ghavamzadeh, Arindam Banerjee

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental — "We evaluate our algorithms C-SquareCB and C-FastCB and compare the regret bounds with the existing baseline Conservative Linear UCB (C-LinUCB) (Kazerouni et al., 2017). ... We compare the cumulative regret of the algorithms in Figure 1. Note that C-SquareCB and C-FastCB consistently show a sub-linear trend in regret and beat the existing benchmark, with C-FastCB performing better on some of the datasets, owing to its first-order regret guarantee."
Researcher Affiliation: Collaboration — Rohan Deb, University of Illinois Urbana-Champaign, EMAIL; Mohammad Ghavamzadeh, Amazon AGI, EMAIL; Arindam Banerjee, University of Illinois Urbana-Champaign, EMAIL
Pseudocode: Yes — Algorithm 1: Conservative SquareCB (C-SquareCB); Algorithm 2: Conservative FastCB (C-FastCB)
Open Source Code: No — The paper does not contain any explicit statements or links indicating the release of source code for the methodology described.
Open Datasets: Yes — "We consider a series of multiclass classification problems from the openml.org platform."
Dataset Splits: No — The paper describes a transformation of the input features and notes that its evaluation setting for bandit algorithms follows previous works (Bietti et al., 2021; Zhou et al., 2020; etc.), but it does not specify explicit training/validation/test splits (e.g., percentages, sample counts, or references to predefined splits) for the datasets used in its experiments.
Hardware Specification: No — The paper does not provide any specific details regarding the hardware (e.g., CPU or GPU models, or cloud resources) used to run the experiments.
Software Dependencies: No — The paper mentions using a "two layer neural network with ReLU activation" and "Online Gradient Descent (OGD)", but it does not specify any software names with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup: Yes — "C-SquareCB and C-FastCB use a two-layer neural network with ReLU activation and width 100. We update the network parameters every 10-th round and do a grid search over step sizes {0.01, 0.005, 0.001}. In C-SquareCB we set γ_i = c·√(t / log(δ⁻¹)) and tune c in {10, 20, 50, 100, 200, 500, 1000}. For C-FastCB, since the optimal loss L*_i is not known in advance, the exploration parameter γ_i is treated as a hyper-parameter in our experiments. We set γ_i = γ and tune it in {10, 20, 50, 100, 200, 500, 1000}."
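The setup quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: it pairs a two-layer ReLU network of width 100 (as in the paper's experiments) with the standard SquareCB inverse-gap-weighting rule for action selection, on synthetic data. All names (`igw_probs`, `predict`, the dimensions, and the value of γ) are assumptions for the sketch; the conservative constraint of C-SquareCB/C-FastCB is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
WIDTH = 100  # hidden width reported in the paper's experiments

def init_net(d, k):
    # Two-layer network: d input features -> WIDTH hidden units -> k arm scores.
    return {"W1": rng.normal(0, 0.1, (WIDTH, d)),
            "W2": rng.normal(0, 0.1, (k, WIDTH))}

def predict(net, x):
    h = np.maximum(net["W1"] @ x, 0.0)  # ReLU hidden layer
    return net["W2"] @ h                # predicted loss per arm

def igw_probs(scores, gamma):
    # SquareCB-style inverse-gap weighting: arms whose predicted loss is
    # close to the best arm's get more probability; gamma controls how
    # aggressively the best arm is exploited.
    k = len(scores)
    best = int(np.argmin(scores))
    p = np.zeros(k)
    for a in range(k):
        if a != best:
            p[a] = 1.0 / (k + gamma * (scores[a] - scores[best]))
    p[best] = 1.0 - p.sum()  # remaining mass goes to the greedy arm
    return p

net = init_net(d=5, k=4)
x = rng.normal(size=5)
p = igw_probs(predict(net, x), gamma=100.0)
arm = rng.choice(4, p=p)  # sample an action from the exploration distribution
```

Because each non-greedy probability is at most 1/k, the greedy arm always keeps at least 1/k of the mass, so `p` is a valid distribution for any γ ≥ 0.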