On the Estimation of Derivatives Using Plug-in Kernel Ridge Regression Estimators

Authors: Zejian Liu, Meng Li

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our simulation studies show favorable finite sample performance of the proposed method relative to several existing methods and corroborate the theoretical findings on its minimax optimality. In this section, we assess the finite sample performance of the plug-in KRR estimator relative to several methods and provide numerical evidence of its agreement with the minimax optimal rate.
Researcher Affiliation | Academia | Zejian Liu, Department of Statistics, Rice University, Houston, TX 77005, USA; Meng Li, Department of Statistics, Rice University, Houston, TX 77005, USA
Pseudocode | No | The paper describes the proposed method analytically and discusses its practical considerations and performance through simulation studies. It does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the methodology described, nor does it include a link to a code repository. It mentions using 'R code to implement Low LSR' provided by 'Wen Wu Wang', but this refers to a third-party method, not the authors' own implementation code.
Open Datasets | No | The paper describes simulation studies where data is generated for the experiments: 'We generate the response y following Model (1) by adding Gaussian error εi ~ N(0, 0.2²) to f01 and f02.' This indicates that no external, publicly available dataset was used for the primary experiments.
Dataset Splits | Yes | We conduct a Monte Carlo study with 100 repetitions. We evaluate each estimator except Low LSR at 100 equally spaced points in [0, 1]... The sample size ni varies from 10 to 500 such that the log(ni) are 100 equally spaced points in [log(10), log(500)]. We replicate the simulation 100 times for each sample size ni.
Hardware Specification | Yes | The average total running time of the proposed method is 0.31 seconds when n = 100 and 0.97 seconds when n = 500 in R on a PC with a 2.3 GHz 8-Core Intel Core i9 CPU.
Software Dependencies | No | The paper mentions the 'R package locpol (Cabrera, 2018)', the 'R package pspline (Ripley, 2017)', and the 'optim function in R'. While specific R packages are named along with their publication years, explicit version numbers for R itself or for the packages (e.g., locpol version X.Y, pspline version A.B) are not provided.
Experiment Setup | Yes | For the proposed method, we use the second-order Sobolev kernel and the Matérn kernel... we estimate the error variance σ² by its maximum marginal likelihood estimator (MMLE) ... and choose the regularization parameter λ by maximizing the marginal likelihood. For local polynomial regression, we use the Gaussian kernel and select the bandwidth via cross validation. For smoothing spline, we use a cubic penalized smoothing spline with other parameters set to the default values. When implementing Low LSR, we set the number of difference quotients k to 50 for the first derivative and increase it to 100 for the second derivative.
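The simulation design quoted under Open Datasets (responses generated by adding N(0, 0.2²) Gaussian noise to a true function on [0, 1]) can be sketched as follows. The test function f01 below is a hypothetical stand-in, since the paper's exact f01 and f02 are not reproduced in this report:

```python
import numpy as np

rng = np.random.default_rng(0)

def f01(x):
    # Hypothetical stand-in for the paper's true regression function f01.
    return np.sin(2 * np.pi * x)

n = 100
x = rng.uniform(0.0, 1.0, size=n)   # design points on [0, 1]
eps = rng.normal(0.0, 0.2, size=n)  # Gaussian error, sd = 0.2 (variance 0.2^2)
y = f01(x) + eps                    # Model (1): y_i = f0(x_i) + eps_i
```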
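The grids described under Dataset Splits (100 sample sizes whose logs are equally spaced between log(10) and log(500), and 100 equally spaced evaluation points in [0, 1]) amount to:

```python
import numpy as np

# 100 sample sizes n_i, with log(n_i) equally spaced in [log(10), log(500)]
log_n = np.linspace(np.log(10), np.log(500), 100)
sample_sizes = np.exp(log_n)

# 100 equally spaced evaluation points in [0, 1]
grid = np.linspace(0.0, 1.0, 100)
```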
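As a rough illustration of the plug-in idea behind the Experiment Setup row (fit KRR, then obtain the derivative estimate by differentiating the fitted function through the kernel), here is a minimal sketch with a Matérn-3/2 kernel. The fixed length-scale `ell` and regularization `lam` are placeholder values for illustration only; the paper instead tunes λ by maximizing the marginal likelihood and estimates σ² by MMLE:

```python
import numpy as np

ell = 0.2  # placeholder length-scale (the paper tunes hyperparameters)

def matern32(x1, x2):
    # Matérn-3/2 kernel: k(r) = (1 + a r) exp(-a r), a = sqrt(3)/ell
    r = np.abs(x1[:, None] - x2[None, :])
    a = np.sqrt(3.0) / ell
    return (1.0 + a * r) * np.exp(-a * r)

def matern32_dx(x_eval, x_train):
    # Derivative of the kernel in its first argument:
    # d/dx k(x, x') = -a^2 (x - x') exp(-a |x - x'|)
    u = x_eval[:, None] - x_train[None, :]
    a = np.sqrt(3.0) / ell
    return -(a ** 2) * u * np.exp(-a * np.abs(u))

rng = np.random.default_rng(1)
n, lam = 200, 1e-3  # lam is a placeholder regularization parameter
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, n)

# KRR: f_hat(.) = k(., x)^T (K + n lam I)^{-1} y
K = matern32(x, x)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)

grid = np.linspace(0.0, 1.0, 100)
f_hat = matern32(grid, x) @ alpha       # fitted regression function
f1_hat = matern32_dx(grid, x) @ alpha   # plug-in first-derivative estimate
```

The derivative estimator reuses the same coefficients `alpha` as the fit itself, which is what makes the plug-in approach cheap once the KRR system has been solved.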