Regimes of No Gain in Multi-class Active Learning
Authors: Gan Yuan, Yunfan Zhao, Samory Kpotufe
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proofs of the main theorems are in Section 4, and a simulation study is presented in Section 5. ... In this section, we demonstrate through a simulation study how non-unique Bayes classes affect the gain in active learning over passive learning. |
| Researcher Affiliation | Academia | Gan Yuan, Department of Statistics, Columbia University in the City of New York, New York, NY 10027, USA; Yunfan Zhao, Department of Industrial Engineering and Operations Research, Columbia University in the City of New York, New York, NY 10027, USA; Samory Kpotufe, Department of Statistics, Columbia University in the City of New York, New York, NY 10027, USA |
| Pseudocode | Yes | Algorithm 1 Meta Algorithm ... Algorithm 2 Non-adaptive Algorithm |
| Open Source Code | No | The paper does not contain any explicit statements about the release of source code for the methodology described, nor does it provide any links to a code repository. |
| Open Datasets | No | Data Distribution: The joint distribution P_{X,Y} is supported on [0, 1] × {1, 2, 3}, characterized by the marginal distribution P_X = Unif[0, 1] and the regression function: ... Here, one can easily verify that the parameter ε0 = P_X(η_1(X) = η_2(X) > η_3(X)) is the mass of the region where the Bayes classes are non-unique. |
| Dataset Splits | Yes | A test dataset of size 100,000 is generated and reserved for the evaluation of the classifiers. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as CPU, GPU models, or memory amounts) used for running the experiments described in Section 5. |
| Software Dependencies | No | The paper mentions classifiers and assumes knowledge of smoothness parameters α and λ, but does not specify any software dependencies (e.g., library names with version numbers) used for implementation or simulation. |
| Experiment Setup | Yes | Throughout, we assume that both the active and passive learners know the smoothness parameters α = 1, λ = 15π. ... The classifiers are trained with different sampling budgets under multiple levels of ε0. |
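The simulation setup quoted above can be illustrated with a short sketch. The paper's exact regression function is elided in the extraction (the "..." in the Open Datasets row), so the construction below is hypothetical: it shows one way to build a distribution on [0, 1] × {1, 2, 3} with P_X = Unif[0, 1] in which a region of P_X-mass ε0 has non-unique Bayes classes (η_1 = η_2 > η_3), matching the role of the ε0 parameter described in the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def eta(x, eps0=0.2):
    # Hypothetical regression function on [0, 1]: on a region of
    # P_X-mass eps0, classes 1 and 2 tie for the Bayes label
    # (eta_1 = eta_2 > eta_3); elsewhere class 1 is the unique
    # Bayes class. This is NOT the paper's construction, which is
    # not recoverable from the excerpt above.
    tie = x < eps0
    e1 = np.where(tie, 0.4, 0.6)
    e2 = np.where(tie, 0.4, 0.2)
    e3 = 1.0 - e1 - e2          # probabilities sum to 1 pointwise
    return np.stack([e1, e2, e3], axis=-1)

def sample(n, eps0=0.2):
    # P_X = Unif[0, 1]; Y | X = x is drawn from eta(x).
    x = rng.uniform(0.0, 1.0, size=n)
    p = eta(x, eps0)
    y = np.array([rng.choice(3, p=pi) + 1 for pi in p])
    return x, y

# Test set of the size used in the paper's evaluation.
x, y = sample(100_000)

# Empirical mass of the non-unique-Bayes region; should be close to eps0.
p = eta(x)
frac_tie = np.mean((p[:, 0] == p[:, 1]) & (p[:, 0] > p[:, 2]))
print(round(float(frac_tie), 3))
```

Varying `eps0` here plays the same role as the "multiple levels of ε0" in the quoted experiment setup: it directly controls how much of the input space admits more than one Bayes-optimal label.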