Conjugate Gradients for Kernel Machines

Authors: Simon Bartels, Philipp Hennig

JMLR 2020

Reproducibility Variable | Result | LLM Response (supporting evidence)
Research Type | Experimental | Section 4, "Empirical Comparison of Conjugate Gradients and Kernel Machine Conjugate Gradients": "This section elaborates the conceptual differences between CG and KMCG and then compares both algorithms with numerical experiments."
Researcher Affiliation | Academia | Simon Bartels (EMAIL), Philipp Hennig (EMAIL), Max Planck Institute for Intelligent Systems and University of Tübingen, Maria-von-Linden-Str. 6, Tübingen, Germany
Pseudocode | Yes | Algorithm 1 (Conjugate Gradients) and Algorithm 2 (Kernel Machine Conjugate Gradients)
Open Source Code | No | The paper references a third-party toolbox (the GPML toolbox) used in its experiments, but it does not provide access to source code for the method it proposes (KMCG).
Open Datasets | Yes | Table 1 gives descriptions and sources for all data sets considered in the work, including URLs and citations; for some data sets it states that "all files are part of this submission."
Dataset Splits | Yes | Each data set has been shuffled and split into two sets, using one for training and the other for testing. For the training set, along each axis G points are equally spaced in [-G/4, G/4], distorted by Gaussian noise N(0, 10^-3). One hundred test inputs are uniformly distributed over the [-G/4, G/4] cube.
Hardware Specification | Yes | "All experiments were executed with Matlab R2019a on an Intel i7 CPU with 32 Gigabytes of RAM running Ubuntu 18.04."
Software Dependencies | Yes | Matlab R2019a on Ubuntu 18.04; "This method is part of the GPML toolbox (Rasmussen and Nickisch, 2010)."
Experiment Setup | Yes | For each data set, the kernel parameters were optimized by running Carl Rasmussen's minimize function for 100 optimization steps, with all kernel hyper-parameters initially set to 1. Conjugate gradients (Algorithm 1 on p. 10) is run with x_0 := 0, A = k(X_M, X_M), b = y_M, and ε := 0.01 ||b||_2.
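The synthetic-split construction quoted under Dataset Splits can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the dimensionality argument `d`, and the reading of 10^-3 as a noise *variance* (rather than a standard deviation) are assumptions.

```python
import numpy as np

def make_synthetic_data(G, d, seed=None):
    """Sketch of the grid construction quoted above: G points per axis,
    equally spaced in [-G/4, G/4], distorted by Gaussian noise N(0, 1e-3);
    100 test inputs drawn uniformly over the same cube.
    (Function name and the variance reading of 1e-3 are assumptions.)"""
    rng = np.random.default_rng(seed)
    axis = np.linspace(-G / 4.0, G / 4.0, G)
    # Cartesian product of the per-axis grid -> G**d training inputs.
    grids = np.meshgrid(*([axis] * d), indexing="ij")
    X_train = np.stack([g.ravel() for g in grids], axis=1)
    X_train = X_train + rng.normal(0.0, np.sqrt(1e-3), size=X_train.shape)
    # One hundred test inputs, uniform over the [-G/4, G/4] cube.
    X_test = rng.uniform(-G / 4.0, G / 4.0, size=(100, d))
    return X_train, X_test
```

For example, `make_synthetic_data(4, 2)` yields a 16-point jittered grid in the square [-1, 1]^2 plus 100 uniform test points.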
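The Experiment Setup row describes a standard conjugate-gradient solve of the kernel system with a zero initial guess and a relative residual tolerance. A minimal sketch of that configuration, assuming a squared-exponential kernel with hyper-parameters set to 1 (the paper's Algorithm 1 may differ in detail, and the jitter term in the usage example is an assumption for numerical stability):

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel; hyper-parameters initialized to 1,
    # matching the setup quoted above (kernel choice is illustrative).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * np.maximum(d2, 0.0) / lengthscale ** 2)

def conjugate_gradients(A, b, eps_rel=0.01, max_iter=None):
    """Textbook CG: solve A x = b with x_0 = 0, stopping once
    ||r||_2 <= eps_rel * ||b||_2, as in the quoted configuration."""
    n = b.shape[0]
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n)
    r = b.copy()            # residual b - A x_0 with x_0 = 0
    d = r.copy()            # initial search direction
    tol = eps_rel * np.linalg.norm(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)
        x = x + alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return x

# Usage sketch: solve k(X, X) x = y to the quoted tolerance.
X = np.random.default_rng(1).normal(size=(15, 2))
A = rbf_kernel(X) + 0.1 * np.eye(15)   # jitter added for conditioning (assumption)
y = np.ones(15)
x = conjugate_gradients(A, y)
```

The 0.01 ||b||_2 stopping rule makes the tolerance scale-invariant in b, so the same ε setting is meaningful across data sets of different output magnitudes.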