Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Trichotomy for Transductive Online Learning
Authors: Steve Hanneke, Shay Moran, Jonathan Shafer
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Qualitatively, we prove a trichotomy, stating that the minimal number of mistakes made by the learner as n grows can take only one of precisely three possible values: n, Θ(log(n)), or Θ(1). Furthermore, this behavior is determined by a combination of the VC dimension and the Littlestone dimension. Quantitatively, we show a variety of bounds relating the number of mistakes to well-known combinatorial dimensions. In particular, we improve the known lower bound on the constant in the Θ(1) case from Ω(√log(d)) to Ω(log(d)), where d is the Littlestone dimension. Finally, we extend our results to cover multiclass classification and the agnostic setting. |
| Researcher Affiliation | Academia | Steve Hanneke, Department of Computer Science, Purdue University; Shay Moran, Faculty of Mathematics, Faculty of Computer Science, and Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology; Jonathan Shafer, Computer Science Division, UC Berkeley |
| Pseudocode | Yes | Algorithm 1: An adversary that forces Ω(log(LD(H))) mistakes. |
| Open Source Code | No | The paper focuses on theoretical contributions (theorems, proofs, bounds) and does not describe a software methodology or provide any statements or links regarding the availability of source code for its contributions. |
| Open Datasets | No | The paper is theoretical and does not involve experiments with datasets. Therefore, it does not mention specific public datasets or their access information for training purposes. |
| Dataset Splits | No | The paper is theoretical and does not report on experiments with data, thus no train/validation/test splits are mentioned. |
| Hardware Specification | No | The paper is theoretical and does not report on computational experiments. Therefore, no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and does not report on computational experiments that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not report on computational experiments. Therefore, no experimental setup details, such as hyperparameters or training configurations, are provided. |
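For reference, the trichotomy quoted in the Research Type row can be written as a short math sketch. The notation $\mathrm{M}(\mathcal{H}, n)$ for the minimal number of mistakes the learner can guarantee on a transductive sequence of length $n$ over hypothesis class $\mathcal{H}$ is assumed here for illustration, not taken verbatim from the paper:

```latex
% Hedged sketch: M(H, n) denotes the minimal guaranteed number of learner
% mistakes over n instances (notation assumed for illustration).
\[
  \mathrm{M}(\mathcal{H}, n) \;\in\; \bigl\{\, n,\ \Theta(\log n),\ \Theta(1) \,\bigr\},
\]
% where, per the abstract, the regime is determined by a combination of the
% VC dimension and the Littlestone dimension of the class H.
```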