Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Sparse Representer Theorems for Learning in Reproducing Kernel Banach Spaces
Authors: Rui Wang, Yuesheng Xu, Mingsong Yan
JMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The goal of this paper is to understand what kind of RKBSs can promote sparsity for learning solutions. We consider two typical learning models in an RKBS: the minimum norm interpolation (MNI) problem and the regularization problem. We first establish an explicit representer theorem for solutions of these problems, which represents the extreme points of the solution set by a linear combination of the extreme points of the subdifferential set of the norm function, which is data-dependent. We then propose sufficient conditions on the RKBS that can transform the explicit representation of the solutions into a sparse kernel representation having fewer terms than the number of observed data. Under the proposed sufficient conditions, we investigate the role of the regularization parameter on sparsity of the regularized solutions. We further show that two specific RKBSs, the sequence space ℓ1(N) and the measure space, can have sparse representer theorems for both MNI and regularization models. |
| Researcher Affiliation | Academia | Rui Wang, School of Mathematics, Jilin University, Changchun, 130012, P. R. China; Yuesheng Xu and Mingsong Yan, Department of Mathematics and Statistics, Old Dominion University, Norfolk, VA 23529, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. The content primarily consists of mathematical derivations, theorems, and proofs. |
| Open Source Code | No | The paper does not provide any concrete access information for source code. There are no links to repositories, explicit statements about code release, or mentions of code in supplementary materials. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on datasets, thus no information about public datasets or access to them is provided. |
| Dataset Splits | No | The paper is theoretical and does not involve dataset experiments; therefore, no information on dataset splits (e.g., training/validation/test) is provided. |
| Hardware Specification | No | The paper does not describe any experimental setup or computations that would require specific hardware. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper does not describe any computational experiments or software implementations with version numbers. Therefore, no software dependencies are listed. |
| Experiment Setup | No | The paper is theoretical and does not include an experimental section with specific setup details, hyperparameters, or training configurations. |
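The paper itself contains no code, but the sparsity phenomenon its abstract describes — ℓ1-type regularization yielding kernel representations with fewer active terms than observed data points — can be illustrated with a generic numerical sketch. This is not the paper's method: the kernel choice, the coordinate-descent lasso solver, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, gamma=1.0):
    # Pairwise Gaussian kernel matrix K[i, j] = exp(-gamma * (x_i - y_j)^2).
    return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)

def lasso_cd(K, y, lam, n_iter=500):
    # Coordinate descent for min_c 0.5 * ||K c - y||^2 + lam * ||c||_1,
    # using soft-thresholding on each coordinate in turn.
    n = K.shape[1]
    c = np.zeros(n)
    col_sq = (K ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(n):
            # Residual with coordinate j removed, then soft-threshold.
            r = y - K @ c + K[:, j] * c[j]
            rho = K[:, j] @ r
            c[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return c

# Toy regression problem: 20 noisy samples of a sine wave.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(20)

K = gaussian_kernel(x, x, gamma=50.0)
c = lasso_cd(K, y, lam=0.5)
nnz = np.count_nonzero(np.abs(c) > 1e-8)
print(f"{nnz} of {len(x)} kernel terms active")
```

With a sufficiently large regularization weight, most coefficients are driven exactly to zero, so the learned function is a combination of far fewer kernel terms than training points — the qualitative behavior the paper's sparse representer theorems characterize rigorously for ℓ1(N) and the measure space.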