GHOST: Generalizable One-Shot Federated Graph Learning with Proxy-Based Topology Knowledge Retention

Authors: Jiaru Qian, Guancheng Wan, Wenke Huang, Guibin Zhang, Yuxin Wu, Bo Du, Mang Ye

ICML 2025

Reproducibility Variable Result LLM Response
Research Type: Experimental — "In this section, we comprehensively evaluate our proposed GHOST by addressing the following key questions... We perform experiments on node classification tasks in various scenarios to validate the superiority of our framework. Datasets. To effectively evaluate the performance of our approach, we employed seven benchmark graph datasets of various scales and features... Table 1. Comparison with the state-of-the-art methods on seven real-world datasets. We report node classification accuracies (%) for downstream task performance."
Researcher Affiliation: Academia — "1National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China; 2National University of Singapore, Singapore; 3Renmin University of China. Correspondence to: Mang Ye <EMAIL>."
Pseudocode: Yes — "Furthermore, we provide the detailed description of our framework in Algorithm 1."
Open Source Code: Yes — "The code is available at https://github.com/JiaruQian/GHOST."
Open Datasets: Yes — "To effectively evaluate the performance of our approach, we employed seven benchmark graph datasets of various scales and features, including Cora (McCallum et al., 2000), CiteSeer (Giles et al., 1998), PubMed (Canese & Weis, 2013), Chameleon (Pei et al., 2020), Amazon Photo, Coauthor-CS (Shchur et al., 2018) and Ogbn-Arxiv (Hu et al., 2020)."
Dataset Splits: Yes — "For all datasets, we use a common split of 20%/40%/40% for training/validation/testing sets."
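The 20%/40%/40% split quoted above can be sketched as a shuffled index partition over a dataset's nodes. This is a hypothetical helper, not code from the GHOST repository; the function name `split_indices` and the fixed seed are illustrative assumptions:

```python
import random

def split_indices(num_nodes, train_frac=0.2, val_frac=0.4, seed=0):
    """Partition node indices into train/val/test sets (20%/40%/40% by default).

    Hypothetical sketch: shuffles all node indices once, then slices them
    into three disjoint sets covering every node.
    """
    idx = list(range(num_nodes))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

# Example: Cora has 2708 nodes.
train, val, test = split_indices(2708)
print(len(train), len(val), len(test))  # 541 1083 1084
```

Fractional counts are truncated, so the test set absorbs any remainder; the three sets are disjoint and cover all nodes.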
Hardware Specification: Yes — "The experiments are conducted using NVIDIA GeForce RTX 3090 GPUs as the hardware platform, coupled with Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz."
Software Dependencies: Yes — "The deep learning framework employed was PyTorch, version 2.3.1, alongside CUDA version 12.1."
Experiment Setup: Yes — "We adopt a two-layer GCN as the backbone, with the hidden layer size as 128... Adaptive Moment Estimation (Adam) (Kingma, 2014) was chosen, featuring a learning rate of 5e-3 and a weight decay of 4e-4. For the alignment phase of each proxy model of 10 clients, we set the local training epoch TL to 100. As for hyperparameters, λd and λf are determined through a grid search within {0.01, 0.05, 0.1, 0.5} and {0.1, 0.2, 0.5, 1} respectively... At the server side, we set M = 3, TG = 5 and adopt Adam as the optimizer for the global model with a learning rate of 1e-2 and a weight decay of 4e-4. As for λr and λn, we conduct a grid search within {0.1, 0.5, 1} respectively."
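The grid search over λd and λf described above amounts to an exhaustive sweep over a 4 × 4 grid of candidate pairs. A minimal sketch follows; the `evaluate` callable is a hypothetical stand-in for one validation run at a given hyperparameter setting (the paper does not specify the selection code):

```python
import itertools

# Candidate grids quoted in the experiment setup.
LAMBDA_D_GRID = [0.01, 0.05, 0.1, 0.5]
LAMBDA_F_GRID = [0.1, 0.2, 0.5, 1.0]

def grid_search(evaluate):
    """Return the (lambda_d, lambda_f) pair with the best score.

    `evaluate` is a hypothetical callable mapping (lambda_d, lambda_f)
    to a validation metric; every grid combination is tried exactly once.
    """
    best_pair, best_score = None, float("-inf")
    for ld, lf in itertools.product(LAMBDA_D_GRID, LAMBDA_F_GRID):
        score = evaluate(ld, lf)
        if score > best_score:
            best_pair, best_score = (ld, lf), score
    return best_pair, best_score

# Toy stand-in objective that peaks at lambda_d = 0.1, lambda_f = 0.5.
toy = lambda ld, lf: -(ld - 0.1) ** 2 - (lf - 0.5) ** 2
best_pair, best_score = grid_search(toy)
print(best_pair)  # (0.1, 0.5)
```

The same loop applies to the server-side search over λr and λn by swapping in the {0.1, 0.5, 1} grids.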