Parallelizing Spectrally Regularized Kernel Algorithms

Authors: Nicole Mücke, Gilles Blanchard

JMLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we numerically study the error in H_K-norm, corresponding to s = 0 in Corollary 5 (in expectation, with p = 2), both in the single-machine and the distributed learning setting. Our main interest is to study the upper bound for our theoretical exponent α, parametrizing the size of subsamples in terms of the total sample size, m = n^α, in different smoothness regimes. In addition we shall demonstrate in which way parallelization serves as a form of regularization.
Researcher Affiliation | Academia | Nicole Mücke (EMAIL), Institute of Stochastics and Applications, University of Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany; Gilles Blanchard (EMAIL), Institute of Mathematics, University of Potsdam, Karl-Liebknecht-Straße 24-25, 14476 Potsdam, Germany
Pseudocode | No | The paper does not contain any sections explicitly labeled 'Pseudocode' or 'Algorithm', nor does it present any structured code-like blocks describing a procedure.
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is being released, nor does it provide a link to a code repository.
Open Datasets | No | For all experiments in this section, we simulate data from the regression model Y_i = f_ρ(X_i) + ε_i, i = 1, ..., n, where the input variables X_i ∼ Unif[0, 1] are uniformly distributed and the noise variables ε_i ∼ N(0, σ²) are normally distributed with standard deviation σ = 0.005.
Dataset Splits | No | The numerical studies in Section 4 describe simulating data and partitioning it into 'm disjoint subsamples' for distributed learning, but do not provide explicit details about traditional training, validation, and test splits for evaluating model generalization.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to conduct the numerical studies.
Software Dependencies | No | The paper does not specify any software dependencies, such as libraries or programming languages, along with their version numbers.
Experiment Setup | Yes | For all experiments in this section, we simulate data from the regression model ... where the input variables X_i ∼ Unif[0, 1] are uniformly distributed and the noise variables ε_i ∼ N(0, σ²) are normally distributed with standard deviation σ = 0.005. ... We consider sample sizes from 500, ..., 9000. ... In the model assessment step, we partition the dataset into m = n^α subsamples, for any α ∈ {0, 0.05, 0.1, ..., 0.85}. We execute each simulation M = 30 times.
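The setup quoted above can be sketched in code. The following is a minimal, hypothetical reconstruction: the regression model Y_i = f_ρ(X_i) + ε_i with X_i ∼ Unif[0, 1], σ = 0.005, and the m = n^α partitioning come from the quoted text, while the target function f_ρ (a sine), the Gaussian kernel and its width, and the use of kernel ridge regression (one member of the spectral regularization family the paper studies) are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

def simulate_data(n, sigma=0.005, rng=None, f_rho=lambda x: np.sin(2 * np.pi * x)):
    # Regression model from the quoted setup: Y_i = f_rho(X_i) + eps_i,
    # X_i ~ Unif[0, 1], eps_i ~ N(0, sigma^2). The sine target f_rho is a
    # hypothetical stand-in; the paper's target function may differ.
    if rng is None:
        rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=n)
    Y = f_rho(X) + rng.normal(0.0, sigma, size=n)
    return X, Y

def gauss_kernel(a, b, width=0.1):
    # Gaussian kernel; the kernel choice and width are illustrative assumptions.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

def local_krr(Xj, Yj, lam):
    # Kernel ridge regression on one subsample (Tikhonov regularization,
    # one instance of a spectral regularization method).
    K = gauss_kernel(Xj, Xj)
    coef = np.linalg.solve(K + lam * len(Xj) * np.eye(len(Xj)), Yj)
    return lambda x: gauss_kernel(x, Xj) @ coef

def distributed_estimate(X, Y, alpha_exp, lam):
    # Partition the n points into m = floor(n^alpha_exp) disjoint subsamples,
    # fit a local estimator on each, and average their predictions.
    n = len(X)
    m = max(1, int(np.floor(n ** alpha_exp)))
    blocks = np.array_split(np.arange(n), m)
    local_fits = [local_krr(X[j], Y[j], lam) for j in blocks]
    return lambda x: np.mean([f(x) for f in local_fits], axis=0)

X, Y = simulate_data(1000)
f_bar = distributed_estimate(X, Y, alpha_exp=0.5, lam=1e-3)
grid = np.linspace(0.0, 1.0, 200)
err = np.sqrt(np.mean((f_bar(grid) - np.sin(2 * np.pi * grid)) ** 2))
```

In the paper's experiment this loop would be repeated M = 30 times and swept over α ∈ {0, 0.05, ..., 0.85} and n from 500 to 9000; the values n = 1000 and α = 0.5 above are just one configuration for illustration.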