Rethinking Federated Graph Learning: A Data Condensation Perspective
Authors: Hao Zhang, Xunkai Li, Yinlin Zhu, Lianglin Hu
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on six public datasets consistently demonstrate the superiority of FedGM over state-of-the-art baselines, highlighting its potential for a novel FGL paradigm. |
| Researcher Affiliation | Academia | ¹Computer Network Information Center, Chinese Academy of Sciences, Beijing; ²University of Chinese Academy of Sciences, Beijing, China; ³Beijing Institute of Technology, Beijing, China; ⁴Sun Yat-sen University, Guangzhou, China |
| Pseudocode | Yes | Algorithm 1: FedGM Condensed Graph Optimization. Input: rounds T; local real subgraphs {G_k}, k = 1…K; initial condensed graph S_glo. Output: optimized condensed graph S_glo. /* Client Execution */ |
| Open Source Code | No | The paper does not explicitly state that source code for the methodology is provided, nor does it include a link to a code repository. |
| Open Datasets | Yes | We evaluate FedGM on six public benchmark graph datasets across five domains, including two citation networks (Cora, Citeseer) [Kipf and Welling, 2016a], one co-authorship network (CS) [Shchur et al., 2018], one co-purchase network (Amazon Photo), one task interaction network (Tolokers) [Platonov et al., 2023], and one social network (Actor) [Tang et al., 2009]. |
| Dataset Splits | No | The paper mentions employing the Louvain algorithm to partition graphs across 10 clients for simulation, but it does not provide specific training/validation/test splits (e.g., percentages or sample counts) for the datasets used in the experiments for model evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a '2-layer GCN' and the 'Optuna framework [Akiba et al., 2019]', but it does not specify version numbers for these or other software dependencies. |
| Experiment Setup | Yes | For the conventional framework, we employ a 2-layer GCN [Kipf and Welling, 2016b] with 256 hidden units as the backbone for both the clients and the central server. The local training epoch is set to 3. ... In the FedGM framework, the local subgraph condensation model, gradient generation model, and the model employed for evaluation are all implemented as 2-layer GCNs with 256 hidden units, and the condensed graph structure generation model is implemented as a 3-layer MLP with 128 hidden units. In the first stage, the number of local condensation epochs is 1000. ... For all methods, the learning rate for the GNN is set to 1e-2, the weight decay is set to 5e-4, and the dropout rate is set to 0.0. The federated training is conducted over 100 rounds. For each experiment, we report the mean and variance results of 3 standardized training runs. |
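The backbone described in the Experiment Setup row (a 2-layer GCN with 256 hidden units) can be illustrated with a minimal NumPy forward pass. This is an independent sketch, not the paper's code: the weight initialization, toy graph, and feature dimensions below are arbitrary assumptions; only the 2-layer structure and 256-unit hidden width come from the paper.

```python
import numpy as np

def normalize_adj(adj: np.ndarray) -> np.ndarray:
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def gcn_forward(adj, x, w1, w2):
    """2-layer GCN: propagate, project, ReLU, propagate, project to logits."""
    a_norm = normalize_adj(adj)
    h = np.maximum(a_norm @ x @ w1, 0.0)  # hidden layer (256 units per the paper)
    return a_norm @ h @ w2                # per-node class logits

# Toy example: 4 nodes on a path graph, 8 input features, 3 classes.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 256)) * 0.1   # input -> hidden
w2 = rng.standard_normal((256, 3)) * 0.1   # hidden -> classes
logits = gcn_forward(adj, x, w1, w2)
print(logits.shape)  # (4, 3): one logit vector per node
```

In a real run these weights would be trained with the reported hyperparameters (learning rate 1e-2, weight decay 5e-4, dropout 0.0).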
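The Pseudocode row quotes only the header of Algorithm 1 (rounds T, local subgraphs {G_k}, condensed graph S_glo, a client-execution phase). The outer loop that header implies can be sketched as follows; the `local_update` method and the averaging merge are stand-in assumptions, since the paper's actual client-side condensation and server-side aggregation steps are not reproduced in the excerpt.

```python
import copy

def condensed_graph_rounds(s_init, clients, T):
    """Hypothetical round structure implied by Algorithm 1's header:
    for T rounds, each client refines the shared condensed graph S_glo
    against its local real subgraph, and the server merges the updates.
    The elementwise average below is a plain FedAvg-style placeholder."""
    s_glo = copy.deepcopy(s_init)
    for _ in range(T):
        # Client execution: each client returns its refined condensed graph.
        updates = [c.local_update(copy.deepcopy(s_glo)) for c in clients]
        # Server execution (assumed): average the clients' condensed graphs.
        s_glo = {k: sum(u[k] for u in updates) / len(updates) for k in s_glo}
    return s_glo

class ToyClient:
    """Stand-in client that just shifts every condensed-graph entry."""
    def __init__(self, delta):
        self.delta = delta

    def local_update(self, s):
        return {k: v + self.delta for k, v in s.items()}

# Two toy clients, two rounds: each round adds the mean shift (2.0).
result = condensed_graph_rounds({"feat": 0.0},
                                [ToyClient(1.0), ToyClient(3.0)], T=2)
print(result["feat"])  # 4.0
```

The point of the sketch is only the control flow: global condensed graph out, client refinements in, merged state carried into the next of the 100 federated rounds reported in the paper.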