Hierarchical Federated Learning with Multi-Timescale Gradient Correction
Authors: Wenzhi Fang, Dong-Jun Han, Evan Chen, Shiqiang Wang, Christopher Brinton
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on various datasets and models, we validate the effectiveness of MTGC in diverse HFL settings. |
| Researcher Affiliation | Collaboration | Wenzhi Fang (Purdue University), Dong-Jun Han (Yonsei University), Evan Chen (Purdue University), Shiqiang Wang (IBM Research), Christopher G. Brinton (Purdue University) |
| Pseudocode | Yes | Algorithm 1: HFL with Multi-Timescale Gradient Correction (MTGC) |
| Open Source Code | Yes | The code for this project is available at https://github.com/wenzhifang/MTGC. |
| Open Datasets | Yes | In our experiments, we consider four widely used datasets: EMNIST-Letters (EMNIST-L) [7], Fashion-MNIST [53], CIFAR-10 [23], and CIFAR-100 [23]. |
| Dataset Splits | Yes | The CINIC-10 dataset contains 90,000 training images, 90,000 validation images, and 90,000 test images, significantly larger than CIFAR-10 and CIFAR-100 with 60,000 images. |
| Hardware Specification | Yes | We conduct the experiments based on a cluster of 3 NVIDIA A100 GPUs with 40 GB memory. |
| Software Dependencies | No | The paper mentions "Our code is based on the framework of [1]" but does not specify particular software dependencies with version numbers (e.g., Python version, specific library versions like PyTorch, TensorFlow, etc.). |
| Experiment Setup | Yes | Across all algorithms considered, we maintain a consistent learning rate η = 0.1 and batch size 50. |
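To make the setup rows above concrete, here is a minimal sketch of a corrected local update in the spirit of MTGC, using the reported learning rate η = 0.1. The function name, the two correction variables (`c_group`, `c_global`), and the update form are assumptions modeled on SCAFFOLD-style control variates at the group and global levels of the hierarchy, not the paper's exact Algorithm 1.

```python
# Hypothetical sketch of a multi-timescale corrected local step (assumed
# SCAFFOLD-style form; not the authors' exact update rule).
def local_update(w, grad_fn, c_group, c_global, lr=0.1, steps=5):
    """Run `steps` corrected SGD steps: w <- w - lr * (g + c_group + c_global).

    c_group is meant to correct client drift within a cell group (fast
    timescale); c_global corrects group drift across the hierarchy (slow
    timescale). Both are assumed fixed during the local steps.
    """
    for _ in range(steps):
        g = grad_fn(w)
        w = w - lr * (g + c_group + c_global)
    return w

# Toy check on a scalar quadratic: minimize 0.5 * (w - 3)^2, whose gradient
# is w - 3; with zero corrections this is plain SGD and converges to 3.
grad = lambda w: w - 3.0
w = local_update(0.0, grad, c_group=0.0, c_global=0.0, lr=0.1, steps=100)
```

With nonzero corrections the fixed point shifts, which is how drift-compensating terms steer local updates toward the group/global consensus.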