Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping

Authors: Shao-Bo Lin, Yunwen Lei, Ding-Xuan Zhou

JMLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we report experimental results to study the behavior of BKRR and the adaptive stopping rule (16) in practice. We consider two regression problems. For the j-th regression problem (j = 1, 2), we assume that training examples are independently drawn from the regression model y_i = g_j(x_i) + ε_i, i = 1, ..., |D|, where {x_i}_{i=1}^{|D|} are drawn from the uniform distribution on the (hyper-)cube [0, 1]^{d_j} (d_j is the input dimension) and {ε_i}_{i=1}^{|D|} are noise components independently drawn from the Gaussian distribution N(0, 1/5). For the j-th problem, we build the estimator by applying BKRR in the RKHS induced by a Mercer kernel K_j.
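The quoted simulation design (inputs drawn uniformly on [0, 1]^{d_j}, additive Gaussian noise with variance 1/5) can be sketched as below. The target function `g1` is a hypothetical stand-in, since the paper's exact g_j are not reproduced in the quote.

```python
import numpy as np

def make_regression_data(g, d, n, noise_var=0.2, seed=0):
    """Draw n inputs uniformly on [0, 1]^d and add N(0, noise_var) noise."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, d))
    y = g(X) + rng.normal(0.0, np.sqrt(noise_var), size=n)
    return X, y

# Hypothetical smooth target for illustration (not the paper's g_1).
def g1(X):
    return np.sin(2 * np.pi * X[:, 0])

X, y = make_regression_data(g1, d=1, n=800)
```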
Researcher Affiliation | Academia | Shao-Bo Lin, Department of Mathematics, Wenzhou University, Wenzhou, China; Yunwen Lei, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China; Ding-Xuan Zhou, School of Data Science and Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, China
Pseudocode | No | The paper describes algorithms in text and mathematical formulas (e.g., equations 2 and 3 define the BKRR estimator iteratively), but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is publicly available, nor does it provide any links to a code repository.
Open Datasets | No | For the j-th regression problem (j = 1, 2), we assume that training examples are independently drawn from the regression model y_i = g_j(x_i) + ε_i, i = 1, ..., |D|, where {x_i}_{i=1}^{|D|} are drawn from the uniform distribution on the (hyper-)cube [0, 1]^{d_j} (d_j is the input dimension) and {ε_i}_{i=1}^{|D|} are noise components independently drawn from the Gaussian distribution N(0, 1/5).
Dataset Splits | Yes | We record the iteration number k̂_ASR selected by the adaptive stopping rule (ASR) (19) with δ = 0.05, the iteration number k̂_CV selected by the five-fold cross validation (CV) and the iteration number k̂_Oracle with the minimal generalization error over all candidate models.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments.
Software Dependencies | No | The paper does not provide specific details about ancillary software dependencies, such as library names with version numbers.
Experiment Setup | Yes | In this simulation, we traverse the regularization parameter λ over the set 0.0002 · {1, 2, 2^2, ..., 2^10}. For each regularization parameter, we run BKRR until k reaches 150 for f_ρ = g_1 and 300 for f_ρ = g_2, respectively. ... We fix regularization parameters λ ∈ {0.0032, 0.0128, 0.0512, 0.2048}, and show in Figure 2 EGEs versus the iteration number for the two regression problems. ... We apply BKRR to regression problems with different sample sizes (|D| ∈ {800, 1200, 1600, 2000, 2400, 2800, 3200, 3600, 4000}) and different regularization parameters (λ ∈ {0.016, 0.032, 0.064, 0.128}). For each sample size and regularization parameter, we run BKRR with several iterations to get a sequence of candidate models. We record the iteration number k̂_ASR selected by the adaptive stopping rule (ASR) (19) with δ = 0.05
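As a rough illustration of the setup quoted above, here is a minimal L2-boosting sketch with a kernel ridge regression base learner, the general form of the BKRR iteration the paper defines in its equations (2) and (3). The Gaussian kernel, its bandwidth, and the toy data are assumptions for illustration, not the paper's choices.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.5):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def bkrr_fit(X, y, lam, n_iter):
    """L2-boosting with a KRR base learner: each step fits KRR to the
    current residuals and adds that fit to the running estimator."""
    n = len(y)
    K = gaussian_kernel(X, X)
    smoother = np.linalg.inv(K + lam * n * np.eye(n))  # factor once, reuse
    alpha = np.zeros(n)      # kernel coefficients of the boosted estimator
    residual = y.copy()
    for _ in range(n_iter):
        alpha += smoother @ residual
        residual = y - K @ alpha
    return alpha

# Toy run over part of the quoted grid 0.0002 * {1, 2, 2^2, ..., 2^10}.
rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0.0, np.sqrt(0.2), 100)
lams = 0.0002 * 2.0 ** np.arange(11)
alpha = bkrr_fit(X, y, lam=lams[4], n_iter=50)
```

In the quoted experiments, the iteration count of this loop is the quantity being selected: by the adaptive stopping rule, by five-fold cross validation, or by the oracle with minimal generalization error.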