EAGLES: Towards Effective, Efficient, and Economical Federated Graph Learning via Unified Sparsification

Authors: Zitong Shi, Guancheng Wan, Wenke Huang, Guibin Zhang, He Li, Carl Yang, Mang Ye

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments demonstrate its superiority, achieving competitive performance on various datasets, such as reducing training FLOPs by 82% and communication costs by 80% on the ogbn-proteins dataset, while maintaining high performance.
Researcher Affiliation Academia 1 National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China; 2 National University of Singapore, Singapore; 3 Department of Computer Science, Emory University, USA. Correspondence to: Mang Ye <EMAIL>.
Pseudocode No The paper describes the methodology with detailed equations and explanations (e.g., in Section 4.2 Consensus-Informed Parameter Sparsification and Section 4.3 Heterogeneity-Aware Graph Sparsification), but it does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes The code is anonymously available at this link.
Open Datasets Yes Datasets and Split. To comprehensively evaluate EAGLES across different datasets and tasks, we selected Cora (Mammen, 2021), Pubmed (Shchur et al., 2018), and Photo (McAuley et al., 2015) from small to medium-scale datasets, and Ogbn-Arxiv, Ogbn-Proteins, and Ogbn-Products from large-scale datasets (Hu et al., 2020).
Dataset Splits Yes For the small-to-medium group (Cora, Pubmed, and Amz-Photo), we manually split the data into 60% for training, 20% for validation, and 20% for testing, while for the large-scale group we utilized the official dataset splits. The official (training, validation, test) splits for OGBN-Arxiv, OGBN-Proteins, and OGBN-Products are (53.7%, 17.6%, 28.7%), (65.3%, 13.9%, 20.8%), and (8.03%, 4.01%, 87.96%), respectively.
Hardware Specification Yes The experiments are conducted using NVIDIA GeForce RTX 3090 GPUs as the hardware platform, coupled with an Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz.
Software Dependencies Yes The deep learning framework employed is PyTorch, version 2.0.1, alongside CUDA version 11.7.
Experiment Setup Yes Table 5: Detailed hyper-parameter configurations.

Dataset        Model      Rounds  Weight Decay  Learning Rate  Optimizer
Cora           GCN        200     2e-4          0.01           Adam
Pubmed         GCN        500     2e-4          0.01           Adam
Amz-Photo      GCN        800     2e-4          0.01           Adam
Ogbn-Arxiv     GraphSAGE  800     1e-6          0.01           Adam
Ogbn-Proteins  DeeperGCN  300     5e-6          0.01           Adam
Ogbn-Products  GraphSAGE  800     1e-6          0.01           Adam
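The per-dataset hyper-parameters reported in Table 5 can be captured as a configuration mapping. A minimal sketch: the dictionary structure, key names, and `get_config` helper are illustrative assumptions, not the authors' code, while the values themselves are taken from the table.

```python
# Hypothetical configuration mapping mirroring Table 5 of the paper.
# Structure and names are illustrative; values are quoted from the table.
CONFIGS = {
    "Cora":          {"model": "GCN",       "rounds": 200, "weight_decay": 2e-4, "lr": 0.01, "optimizer": "Adam"},
    "Pubmed":        {"model": "GCN",       "rounds": 500, "weight_decay": 2e-4, "lr": 0.01, "optimizer": "Adam"},
    "Amz-Photo":     {"model": "GCN",       "rounds": 800, "weight_decay": 2e-4, "lr": 0.01, "optimizer": "Adam"},
    "Ogbn-Arxiv":    {"model": "GraphSAGE", "rounds": 800, "weight_decay": 1e-6, "lr": 0.01, "optimizer": "Adam"},
    "Ogbn-Proteins": {"model": "DeeperGCN", "rounds": 300, "weight_decay": 5e-6, "lr": 0.01, "optimizer": "Adam"},
    "Ogbn-Products": {"model": "GraphSAGE", "rounds": 800, "weight_decay": 1e-6, "lr": 0.01, "optimizer": "Adam"},
}

def get_config(dataset: str) -> dict:
    """Look up the per-dataset training configuration."""
    return CONFIGS[dataset]
```

Keeping the settings in one mapping makes it easy to check that a reproduction run uses the reported round counts and weight decays for each dataset.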
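The manual 60%/20%/20% node split described for the small and medium datasets can be sketched with the standard library alone. This is a minimal illustration of such a split, not the authors' implementation; the function name, seed, and the assumption of a uniform random partition are all hypothetical.

```python
import random

def split_nodes(num_nodes: int, seed: int = 0):
    """Randomly partition node indices into 60% train / 20% val / 20% test,
    mirroring the split ratios reported for Cora, Pubmed, and Amz-Photo.
    (Illustrative sketch; the paper does not specify the splitting code.)"""
    rng = random.Random(seed)
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(0.6 * num_nodes)
    n_val = int(0.2 * num_nodes)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

# Example: Cora has 2708 nodes.
train, val, test = split_nodes(2708)
```

The three slices are disjoint by construction and together cover every node, so no example is silently dropped or duplicated across splits.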