Avoiding Catastrophe in Online Learning by Asking for Help

Authors: Benjamin Plaut, Hanlin Zhu, Stuart Russell

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The goal of this paper is to understand the conditions under which it is possible to formally guarantee avoidance of catastrophe in online learning. Specifically, we assume that the payoff in each round represents the chance of avoiding catastrophe in that round and try to maximize the product of payoffs (the overall chance of avoiding catastrophe) while allowing a limited number of queries to a mentor. We first show that in general, any algorithm either queries the mentor at a linear rate or is nearly guaranteed to cause catastrophe. However, in settings where the mentor policy class is learnable in the standard online model, we provide an algorithm whose regret and rate of querying the mentor both approach 0 as the time horizon grows.
Researcher Affiliation | Academia | University of California, Berkeley. Correspondence to: Benjamin Plaut <EMAIL>.
Pseudocode | Yes | Algorithm 1 successfully avoids catastrophe assuming finite VC or Littlestone dimension (page 5). Algorithm 2 is a variant of the Hedge algorithm which only observes losses in response to queries (page 9). Algorithm 3 extends Algorithm 1 to many actions (page 11). Algorithm 4 achieves subconstant regret when the mentor's policy has a bounded number of segments (page 13).
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. The closest mention is in the future work section: "We are optimistic about leveraging such techniques to obtain efficient algorithms in our setting."
Open Datasets | No | The paper is theoretical and defines abstract input spaces for its models, such as "Let X = [0, 1] and Dt = U(X) for each t ∈ [T], where U(X) is the uniform distribution on X." It does not use or refer to any publicly available real-world datasets for empirical evaluation.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with specific datasets. Therefore, there is no mention of training, validation, or test dataset splits.
Hardware Specification | No | The paper presents theoretical work and does not describe any experimental setup that would require specific hardware. No hardware specifications are mentioned.
Software Dependencies | No | The paper focuses on theoretical analysis and algorithm design without empirical implementation. Consequently, there are no specific software dependencies with version numbers mentioned for reproducing experiments.
Experiment Setup | No | The paper is theoretical, presenting algorithms and proofs without conducting empirical experiments. As such, there is no discussion of experimental setup details, hyperparameters, or training configurations.
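For readers unfamiliar with the algorithmic family referenced in the Pseudocode row, the sketch below illustrates the general idea of a Hedge-style (multiplicative-weights) learner that only observes its loss vector on rounds where it queries. This is an illustrative reconstruction, not the paper's actual Algorithm 2: the function name `hedge_with_queries`, the learning rate `eta`, and the `query_rounds` interface are all assumptions made for the example.

```python
import math

def hedge_with_queries(loss_seq, n_actions, query_rounds, eta=0.5):
    """Hedge (multiplicative weights) with query-limited feedback.

    loss_seq: list of per-round loss vectors, one float per action in [0, 1].
    query_rounds: set of round indices on which the learner queries and
        therefore observes the loss vector. On all other rounds the
        weights are left unchanged. (Hypothetical interface, for
        illustration only.)
    """
    weights = [1.0] * n_actions
    total_expected_loss = 0.0
    probs = [1.0 / n_actions] * n_actions
    for t, losses in enumerate(loss_seq):
        z = sum(weights)
        probs = [w / z for w in weights]
        # Accumulate the expected loss of the randomized play this round.
        total_expected_loss += sum(p * l for p, l in zip(probs, losses))
        if t in query_rounds:
            # Only queried rounds reveal losses, so only they drive
            # the exponential-weight update.
            weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return probs, total_expected_loss
```

With feedback on every round, the distribution concentrates on the better action; with no queries at all, the weights never move and play stays uniform, which is the trade-off the paper's lower bound formalizes.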