Distributed Inference for Linear Support Vector Machine

Authors: Xiaozhou Wang, Zhuoyi Yang, Xi Chen, Weidong Liu

JMLR 2019

Reproducibility assessment — each entry lists the variable, the result, and the supporting LLM response:
Research Type: Experimental. "We provide simulation studies to demonstrate the performance of the proposed MDL estimator." Keywords: linear support vector machine, distributed inference, Bahadur representation, asymptotic theory.
Researcher Affiliation: Academia. Xiaozhou Wang (School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, 200240, China); Zhuoyi Yang (Stern School of Business, New York University, New York, NY 10012, USA); Xi Chen (Stern School of Business, New York University, New York, NY 10012, USA); Weidong Liu (School of Mathematical Sciences and MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, 200240, China).
Pseudocode: Yes. Algorithm 1: Multi-round distributed linear-type (MDL) estimator for SVM; Algorithm 2: Communication-efficient MDL for SVM.
Open Source Code: No. The paper contains no explicit statement about releasing source code, no link to a code repository, and no mention of code in supplementary materials.
Open Datasets: No. The data is synthetic, generated from the model P(Y_i = +1) = p_+, P(Y_i = -1) = p_- = 1 - p_+, X_i = Y_i·1 + ϵ_i, ϵ_i ~ N(0, σ²I), i = 1, 2, ..., n, where 1 is the all-one vector (1, 1, ..., 1)^T ∈ R^p and the triplets (Y_i, X_i, ϵ_i) are drawn independently.
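As a concrete illustration, the generating model quoted above can be sketched in NumPy. This is our own sketch, not the authors' code; the function name, seed handling, and default arguments are assumptions.

```python
import numpy as np

def generate_svm_data(n, p, p_plus=0.5, sigma=1.0, seed=None):
    """Sketch of the paper's simulation model:
    P(Y_i = +1) = p_plus, P(Y_i = -1) = 1 - p_plus,
    X_i = Y_i * 1_p + eps_i, with eps_i ~ N(0, sigma^2 I_p)
    and 1_p the all-one vector in R^p.
    """
    rng = np.random.default_rng(seed)
    y = np.where(rng.random(n) < p_plus, 1.0, -1.0)  # labels in {-1, +1}
    eps = rng.normal(scale=sigma, size=(n, p))       # isotropic Gaussian noise
    x = y[:, None] * np.ones(p) + eps                # shift by +/- the all-one vector
    return x, y
```

Under this model each row of x is centered at +1 or -1 in every coordinate, depending on its label, which is what makes the two classes linearly separable in expectation.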
Dataset Splits: No. The paper describes generating synthetic data for simulations and distributing it across machines with batch size m; it does not specify explicit training, validation, or test splits, since the data is generated from a model rather than drawn from a fixed pre-existing dataset that is split.
Hardware Specification: No. The paper does not provide any hardware details (e.g., CPU or GPU models, memory) for the simulations or experiments.
Software Dependencies: No. The paper cites concepts and methods from the literature (e.g., Cortes and Vapnik (1995) for SVM, Horowitz (1998) for quantile regression, Hestenes and Stiefel (1952) for the conjugate gradient method) but lists no specific software libraries, frameworks, or tools with version numbers used for its implementation.
Experiment Setup: Yes. The data is generated from the model P(Y_i = +1) = p_+, P(Y_i = -1) = p_- = 1 - p_+, X_i = Y_i·1 + ϵ_i, ϵ_i ~ N(0, σ²I), i = 1, 2, ..., n, where 1 is the all-one vector (1, 1, ..., 1)^T ∈ R^p and the triplets (Y_i, X_i, ϵ_i) are drawn independently. We set σ = p throughout the simulation study. To compare the proposed estimator directly with other estimators, the simulation follows the setting of Koo et al. (2008) and considers the optimization problem without the penalty term, i.e., λ = 0. The class probabilities are p_+ = p_- = 1/2, i.e., the data is generated from the two classes with equal probability. The smoothing function is the integral of a kernel function: H(v) = 0 if v <= -1; H(v) = 1/2 + (15/16)v - (5/8)v^3 + (3/16)v^5 if |v| < 1; H(v) = 1 if v >= 1. The initial estimator β̃_0 is computed by directly solving the convex optimization problem (2) using only the samples on the first machine. Two settings are considered: n = 10^4, p = 4, batch size m ∈ {50, 100, 200}; and n = 10^6, p = 20, m ∈ {500, 1000, 2000}. The maximum number of iterations is set to 10. Confidence intervals are constructed for v_0^T β̃ with all three estimators, where v_0 = (p + 1)^(-1/2) 1_{p+1} and the nominal coverage probability 1 - ρ_0 is set to 95%. The constant C_0 is selected from {0.5, 1, 2, 5, 10}.
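The piecewise smoothing function in the setup above can be written as a short sketch. The middle-piece coefficients follow from integrating the quartic kernel K(u) = (15/16)(1 - u^2)^2 over [-1, v], which forces H(-1) = 0 and H(1) = 1; the function name smooth_step is our own, not from the paper.

```python
import numpy as np

def smooth_step(v):
    """Smoothing function H(v): the integral of the quartic kernel
    K(u) = (15/16) * (1 - u^2)^2 supported on [-1, 1], so that
    H(v) = 0 for v <= -1, H(0) = 1/2, and H(v) = 1 for v >= 1.
    """
    v = np.asarray(v, dtype=float)
    # Antiderivative of K on (-1, 1), anchored so H(-1) = 0.
    mid = 0.5 + (15 / 16) * v - (5 / 8) * v**3 + (3 / 16) * v**5
    return np.where(v <= -1.0, 0.0, np.where(v >= 1.0, 1.0, mid))
```

Because K is a nonnegative kernel, H is continuous and non-decreasing, matching its flat pieces exactly at v = -1 and v = 1.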