Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Margin-Independent Online Multiclass Learning via Convex Geometry
Authors: Guru Guruganesh, Allen Liu, Jon Schneider, Joshua Wang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We prove the following result: Theorem 1 (Restatement of Corollary 3.1). There exists an efficient algorithm for learning a linear classifier that incurs a total loss of at most O(d log d). We complement this result by showing that learning general convex sets requires an almost linear loss per query. Our results build off of regret guarantees for the geometric problem of contextual search. |
| Researcher Affiliation | Collaboration | Guru Guruganesh (Google Research), Allen Liu (MIT), Jon Schneider (Google Research), Joshua Wang (Google Research) |
| Pseudocode | No | No pseudocode or algorithm blocks are presented in the provided paper text. |
| Open Source Code | No | The paper's checklist explicitly states "N/A" for including code or instructions to reproduce experimental results. |
| Open Datasets | No | The paper is theoretical and does not describe or use any datasets for training. The checklist marks data-related questions as N/A. |
| Dataset Splits | No | The paper is theoretical and does not perform experiments that would require dataset splits for validation. The checklist marks data-related questions as N/A. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental hardware specifications. The checklist marks hardware-related questions as N/A. |
| Software Dependencies | No | The paper is theoretical and does not specify software dependencies with version numbers for experimental reproducibility. The checklist marks software-related questions as N/A. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or training configurations. The checklist marks experiment-related questions as N/A. |
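The quoted result above concerns an online learner whose total loss scales logarithmically rather than with a margin parameter. As a flavor-only illustration of how a logarithmic mistake bound can arise in online classification, the classic Halving algorithm over a finite hypothesis class is sketched below. This is not the paper's algorithm (which relies on contextual search over convex sets); the hypothesis class, target, and query sequence here are invented for the toy run.

```python
# Illustrative sketch only: the Halving algorithm makes at most
# log2(|H|) mistakes on a finite hypothesis class H containing the
# target. This merely illustrates the flavor of logarithmic loss
# bounds; it is NOT the algorithm from the paper.

def halving_predict(hypotheses, x):
    """Predict by majority vote among the surviving hypotheses."""
    votes = sum(h(x) for h in hypotheses)
    return 1 if votes * 2 >= len(hypotheses) else 0

def halving_update(hypotheses, x, y):
    """Discard every hypothesis that mislabeled x."""
    return [h for h in hypotheses if h(x) == y]

# Toy run: threshold classifiers x >= t for t in 0..7 (hypothetical).
hypotheses = [lambda x, t=t: int(x >= t) for t in range(8)]
target = lambda x: int(x >= 5)  # hidden true classifier
mistakes = 0
for x in [0, 7, 3, 5, 4, 6, 2]:
    pred = halving_predict(hypotheses, x)
    y = target(x)
    if pred != y:
        mistakes += 1
    hypotheses = halving_update(hypotheses, x, y)
```

Each mistake at least halves the version space, so the run above can make at most log2(8) = 3 mistakes before the target is isolated.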