The Teaching Dimension of Linear Learners
Authors: Ji Liu, Xiaojin Zhu
JMLR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper states: "This paper presents the first known teaching dimension for ridge regression, support vector machines, and logistic regression. We also exhibit optimal training sets that match these teaching dimensions. Our approach generalizes to other linear learners. Our analysis technique involves a novel application of the Karush-Kuhn-Tucker (KKT) conditions," with the corresponding minimum teaching set constructions presented in Section 3. The paper consists of theoretical derivations, theorems, proofs, and the construction of teaching sets from mathematical principles rather than empirical evaluation on datasets. |
| Researcher Affiliation | Academia | Ji Liu, EMAIL, Department of Computer Science, University of Rochester, Rochester, NY 14627, USA; Xiaojin Zhu, EMAIL, Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI 53706, USA. |
| Pseudocode | No | The paper describes mathematical derivations, theorems, and proofs, but it does not contain any structured pseudocode or algorithm blocks. The methods are presented through textual descriptions and mathematical formulas. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. There are no specific repository links, explicit statements about code release, or mentions of code being available in supplementary materials. |
| Open Datasets | No | The paper is theoretical and focuses on mathematical derivations of teaching dimensions for learning algorithms. It does not involve any empirical studies or experiments with datasets; therefore, no information regarding publicly available or open datasets is provided. |
| Dataset Splits | No | The paper is theoretical and does not perform experiments using datasets. Consequently, there is no information provided regarding dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware. Therefore, no hardware specification details (e.g., GPU models, CPU types, memory amounts) are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not involve an experimental setup that would require specific software dependencies with version numbers. No such details are provided for replication. |
| Experiment Setup | No | The paper is theoretical and does not report on experimental results. Therefore, no specific experimental setup details such as hyperparameter values, training configurations, or system-level settings are provided. |
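The paper's central objects, optimal teaching sets for linear learners, can be checked numerically even though the paper itself is purely theoretical. The sketch below illustrates the flavor of the result for ridge regression: a single, suitably scaled example suffices to make the learner recover a chosen target model. This is an illustrative construction, not code from the paper; it assumes the bias-free objective min_w ‖Xw − y‖² + λ‖w‖², and the names `ridge_fit` and `teaching_set` are hypothetical.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution for the objective ||Xw - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def teaching_set(theta, lam):
    """One-example teaching set for a bias-free ridge learner.

    With x = theta and y = ||theta||^2 + lam, the normal equations
    (theta theta^T + lam I) w = theta (||theta||^2 + lam) are solved
    exactly by w = theta, so the learner recovers the target model.
    """
    x = theta
    y = float(theta @ theta) + lam
    return x.reshape(1, -1), np.array([y])

theta = np.array([2.0, -1.0, 0.5])   # hypothetical target model
lam = 0.1
X, y = teaching_set(theta, lam)
w_hat = ridge_fit(X, y, lam)
print(np.allclose(w_hat, theta))     # True: one example teaches theta
```

The construction works because a single example forces the ridge solution to be parallel to x; scaling the label then fixes its length, which matches the paper's theme of exhibiting minimal training sets that pin down the learner's optimum exactly.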