COKE: Communication-Censored Decentralized Kernel Learning

Authors: Ping Xu, Yue Wang, Xiang Chen, Zhi Tian

JMLR 2021

Reproducibility Variable Result LLM Response
Research Type Experimental Comprehensive tests on both synthetic and real datasets are conducted to verify the communication efficiency and learning effectiveness of COKE.
Researcher Affiliation Academia Ping Xu, Yue Wang, Xiang Chen, Zhi Tian, Department of Electrical and Computer Engineering, George Mason University, Fairfax, VA 22030, USA
Pseudocode Yes Algorithm 1: DKLA Run at Agent i (see the DKLA sketch below, after these rows)
Open Source Code No The paper does not provide explicit information about the availability of open-source code for the methodology described.
Open Datasets Yes To further evaluate our algorithms, the following popular real-world datasets from UCI machine learning repository are chosen (Asuncion and Newman, 2007).
Dataset Splits Yes each agent uses 70% of its data for training and the rest for testing. (see the split sketch below)
Hardware Specification No The paper does not provide specific details about the hardware used to run its experiments.
Software Dependencies No The paper does not specify any software dependencies with version numbers.
Experiment Setup Yes The censoring thresholds are h(k) = 0.95^k; the regularization parameter λ and the stepsize ρ of DKLA and COKE are set to 5 × 10^-5 and 10^-2, respectively. The stepsize of CTA is set to η = 0.99.
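The paper's Algorithm 1 (DKLA, referenced above) is an ADMM-based decentralized kernel learning rule operating in a random-feature space. As a rough illustration only, the following is a minimal sketch of a consensus-ADMM local update of that general kind for a least-squares loss; the function names, the random-Fourier-feature map, and the closed-form solve are assumptions, not the paper's exact pseudocode.

```python
import numpy as np

def random_features(X, W, b):
    """Random Fourier features approximating a Gaussian kernel (assumed map)."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def dkla_local_update(Z, y, alpha, gamma, neighbor_alphas, lam, rho):
    """One consensus-ADMM primal step at a single agent (least-squares loss).

    Z: (n_i, D) local random-feature matrix; y: (n_i,) local labels;
    alpha: (D,) current local model; gamma: (D,) local dual variable;
    neighbor_alphas: neighbors' last-known models.
    """
    n_i, D = Z.shape
    deg = len(neighbor_alphas)
    # Closed-form minimizer of the local augmented Lagrangian.
    A = (2.0 / n_i) * Z.T @ Z + (2.0 * lam + 2.0 * rho * deg) * np.eye(D)
    rhs = (2.0 / n_i) * Z.T @ y - gamma \
        + rho * sum(alpha + a_j for a_j in neighbor_alphas)
    return np.linalg.solve(A, rhs)

def dual_update(gamma, alpha_new, neighbor_alphas_new, rho):
    """Dual ascent on the pairwise consensus constraints."""
    return gamma + rho * sum(alpha_new - a_j for a_j in neighbor_alphas_new)
```

COKE modifies only the communication step of such a recursion; see the censoring sketch below.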
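For the 70/30 per-agent split quoted in the Dataset Splits row, a minimal sketch is given below; the shuffling and the function name are assumptions, since the paper does not specify how the indices are drawn.

```python
import numpy as np

def per_agent_split(X, y, train_frac=0.7, seed=0):
    """Shuffle one agent's local data and split it 70%/30% train/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_train = int(train_frac * len(y))
    tr, te = idx[:n_train], idx[n_train:]
    return X[tr], y[tr], X[te], y[te]
```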
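The Experiment Setup row lists the censoring threshold and stepsizes. Below is a hedged sketch of how a COKE-style censoring rule would use these values: an agent broadcasts its state only when it has moved at least h(k) from its last transmitted copy, otherwise it stays silent and neighbors reuse the stale state. The constant names and the transmit helper are assumptions; only the numerical values come from the paper.

```python
import numpy as np

# Hyperparameters as reported in the experiment setup.
LAM = 5e-5        # regularization parameter lambda (DKLA/COKE)
RHO = 1e-2        # ADMM stepsize rho (DKLA/COKE)
ETA_CTA = 0.99    # stepsize eta of the CTA baseline

def censoring_threshold(k):
    """Decaying censoring threshold h(k) = 0.95**k from the setup row."""
    return 0.95 ** k

def maybe_transmit(alpha, alpha_last_sent, k):
    """Communication censoring: broadcast only if the state changed enough.

    Returns (state_to_share, transmitted). If the update is smaller than
    h(k), the agent does not transmit and neighbors keep the last sent state.
    """
    if np.linalg.norm(alpha - alpha_last_sent) >= censoring_threshold(k):
        return alpha, True
    return alpha_last_sent, False
```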