Sparse Kernel Regression with Coefficient-based $\ell_q$-regularization

Authors: Lei Shi, Xiaolin Huang, Yunlong Feng, Johan A.K. Suykens

JMLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The contribution of this paper is two-fold. First, we derive a tight bound on the ℓ2 empirical covering numbers of the related function space involved in the error analysis. Based on this result, we obtain the convergence rates for the ℓ1 regularized kernel regression which is the best so far. Second, for the case 0 < q < 1, we show that the regularization parameter plays a role as a trade-off between sparsity and convergence rates. ... We shall establish a rigorous mathematical analysis on the asymptotic behavior of the algorithm under the framework of learning theory. ... We aim to fill this gap by developing an elegant theoretical analysis on the asymptotic performances of estimators $\hat{f}_q$ satisfying Assumption 1...
Researcher Affiliation | Academia | Lei Shi (Shanghai Key Laboratory for Contemporary Applied Mathematics, School of Mathematical Sciences, Fudan University, Shanghai, P. R. China); Xiaolin Huang (Institute of Image Processing and Pattern Recognition, Institute of Medical Robotics, Shanghai Jiao Tong University; MOE Key Laboratory of System Control and Information Processing, Shanghai, P. R. China); Yunlong Feng (Department of Mathematics and Statistics, State University of New York at Albany, New York, USA); Johan A.K. Suykens (Department of Electrical Engineering, ESAT-STADIUS, KU Leuven, Kasteelpark Arenberg 10, Leuven, B-3001, Belgium)
Pseudocode | No | The paper describes iterative minimization processes and operators like $\Psi_{\eta,q}$, but does not present them in a clearly labeled 'Pseudocode' or 'Algorithm' block. For example, it defines a sequence as $c^{n+1} = \Psi_{\lambda\gamma,q}\left(c^n + \lambda K^T(y - Kc^n)\right)$ in equation (13), but this is presented within the main text as a mathematical formula rather than a structured algorithm block.
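The iteration in equation (13) is straightforward to sketch in code. Below is a minimal, hypothetical Python illustration (the paper itself provides none), assuming the $q = 1$ case, where the thresholding operator $\Psi_{\eta,q}$ reduces to the standard soft-thresholding map; the function names and test data are our own.

```python
import numpy as np

def soft_threshold(c, tau):
    # Soft-thresholding: the proximal operator of the l1 norm,
    # i.e. Psi_{tau,q} specialized to q = 1.
    return np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)

def ista_kernel_regression(K, y, gamma, lam, n_iter=500):
    # Iterative scheme of equation (13), specialized to q = 1:
    #   c^{n+1} = Psi_{lam*gamma, q}(c^n + lam * K^T (y - K c^n))
    # K: kernel matrix on the sample, y: observations,
    # gamma: regularization parameter, lam: step size
    # (lam < 1 / ||K||_2^2 ensures convergence).
    c = np.zeros(K.shape[1])
    for _ in range(n_iter):
        c = soft_threshold(c + lam * K.T @ (y - K @ c), lam * gamma)
    return c
```

For 0 < q < 1 the same loop applies with $\Psi_{\eta,q}$ replaced by the corresponding (non-convex) thresholding operator.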
Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it include links to code repositories or mention code in supplementary materials.
Open Datasets | No | The paper focuses on theoretical analysis of kernel regression and does not conduct experiments on specific, named datasets. It refers to 'a set of observations $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^m \in Z^m$ which is assumed to be drawn independently according to $\rho$', indicating a theoretical data model rather than empirical dataset usage.
Dataset Splits | No | The paper is theoretical and does not perform experiments on specific datasets; therefore, it does not provide information regarding training/test/validation dataset splits.
Hardware Specification | No | The paper is theoretical and focuses on mathematical analysis of sparse kernel regression. It does not describe any experimental setup that would require specific hardware, thus no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical, presenting mathematical analysis and proofs. It does not detail any experimental implementation or require specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on mathematical analysis rather than empirical experiments. While it defines parameters like the regularization parameter γ and step size λ within its theoretical framework, it does not describe a concrete experimental setup with specific hyperparameter values or training configurations.