Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Gaussian process regression: Optimality, robustness, and relationship with kernel ridge regression
Authors: Wenjia Wang, Bing-Yi Jing
JMLR 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct numerical experiments to study whether the convergence rates given by Theorems 6 and 8 are accurate. |
| Researcher Affiliation | Academia | Wenjia Wang, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China, and The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong; Bing-Yi Jing, Department of Statistics and Data Science, Southern University of Science and Technology, Shenzhen, China |
| Pseudocode | No | The paper describes algorithms and methods in textual form and through mathematical equations, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about making source code available, nor does it provide links to a code repository. |
| Open Datasets | No | The numerical experiments section describes how data is simulated: "For each k, we simulate 100 realizations of a Gaussian process, where the correlation function is a Matérn correlation function given by (8)." It does not use or provide access to any publicly available datasets. |
| Dataset Splits | No | The paper describes simulating data for numerical experiments: "We consider the sample sizes n = 10k, for k = 2, 3, ..., 15. For each k, we simulate 100 realizations of a Gaussian process... We take µ = 0.1 n^(−m/m0+1) when m0 ≤ m, and take µ = 0.1 when m0 > m." This describes data generation and parameters, but not dataset splits for pre-existing datasets. |
| Hardware Specification | No | The paper does not specify any particular hardware used for conducting the numerical experiments. |
| Software Dependencies | No | The paper does not mention any specific software dependencies with version numbers. |
| Experiment Setup | Yes | We take µ = 0.1 n^(−m/m0+1) when m0 ≤ m, and take µ = 0.1 when m0 > m. The noise is set to be normal with mean zero and variance 0.25. For the i-th realization of a Gaussian process, we generate 10k grid points as X, and use E_i = (1/200) Σ_{j=1}^{200} (Z(x_j) − f̂_G(x_j))² to approximate ‖Z − f̂_G‖²_{L2(Ω)}, where the x_j are the first 200 points of the Halton sequence (Niederreiter, 1992). |
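The setup quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the sample size, length-scale `rho`, and the Matérn-3/2 smoothness are placeholder choices (the paper's correlation function is its Eq. (8), and its grid and regularization schedule vary with k, m, and m0). It simulates one GP realization, forms the kernel-ridge / GP posterior-mean predictor with regularization µ = 0.1, and approximates the squared L2 error at 200 Halton points.

```python
import numpy as np
from scipy.stats import qmc

def matern32(d, rho=0.2):
    # Matérn correlation with smoothness 3/2 (an illustrative member of the
    # Matérn family; rho is an assumed length-scale, not from the paper)
    a = np.sqrt(3.0) * d / rho
    return (1.0 + a) * np.exp(-a)

rng = np.random.default_rng(0)
n = 50                                              # placeholder sample size (paper: n = 10k, k = 2,...,15)
x = np.linspace(0.0, 1.0, n)                        # design points X on [0, 1]
xh = qmc.Halton(d=1, seed=0).random(200).ravel()    # first 200 Halton points

# Jointly simulate one realization of Z at the design and Halton points
pts = np.concatenate([x, xh])
C = matern32(np.abs(pts[:, None] - pts[None, :]))
L = np.linalg.cholesky(C + 1e-8 * np.eye(pts.size))  # small jitter for stability
z = L @ rng.standard_normal(pts.size)
z_x, z_h = z[:n], z[n:]

y = z_x + rng.normal(0.0, 0.5, n)                   # noise: mean 0, variance 0.25

# Kernel-ridge / GP posterior-mean predictor with regularization mu = 0.1
mu = 0.1
alpha = np.linalg.solve(C[:n, :n] + n * mu * np.eye(n), y)
f_hat = C[n:, :n] @ alpha                           # predictions at the Halton points

# Monte Carlo approximation of ||Z - f_hat||^2_{L2(Omega)}
E_i = np.mean((z_h - f_hat) ** 2)
```

Averaging `E_i` over 100 independent realizations, as the paper does, then gives the empirical error curve compared against the theoretical convergence rates.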