Federated Graph Condensation with Information Bottleneck Principles
Authors: Bo Yan, Sihao He, Cheng Yang, Shang Liu, Yang Cao, Chuan Shi
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on five real-world datasets and show that FedGC outperforms centralized GC and FGL methods, especially in large-scale datasets. Meanwhile, FedGC can consistently protect membership privacy during the whole federated training process. |
| Researcher Affiliation | Academia | ¹Beijing University of Posts and Telecommunications, ²Institute of Science Tokyo, ³China University of Mining and Technology |
| Pseudocode | No | The paper describes methods using mathematical formulations and prose, but no explicit pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper does not contain an explicit statement regarding the availability of source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Datasets. Following (Jin et al. 2022b; Zheng et al. 2023), we evaluate FedGC on five graph datasets on the node classification task, including Cora, Citeseer, Ogbn-arxiv, Flickr, and Reddit. |
| Dataset Splits | Yes | We adopt the public splits provided in (Jin et al. 2022b). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or cloud instance specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers. |
| Experiment Setup | Yes | Following (Jin et al. 2022b), we report results under different condensation ratios r. Following (Yao et al. 2023), we test all FGL methods under the non-i.i.d. setting depicted by a Dirichlet distribution (β=1). We set the client number n=10 for the small datasets Cora and Citeseer, and n=5 for the large-scale datasets Ogbn-arxiv, Flickr, and Reddit. We run 5 times and report the average and variance of results. We utilize accuracy (Acc) to evaluate the condensation performance and the AUC score to measure MIA performance. |
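The experiment-setup row states that clients receive label-skewed non-i.i.d. data via a Dirichlet distribution with β=1, following (Yao et al. 2023). The paper does not release code, so the sketch below is only a plausible reconstruction of that partitioning scheme (function name, seeding, and split logic are our assumptions, not the authors'): for each class, a Dirichlet(β) draw decides what fraction of that class's nodes each client receives.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, beta=1.0, seed=0):
    """Label-skewed non-i.i.d. split: for each class, draw client
    proportions from Dirichlet(beta) and slice that class's node
    indices accordingly. (Hypothetical reconstruction, not the
    authors' released code.)"""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Fraction of class-c nodes assigned to each client.
        props = rng.dirichlet(np.full(n_clients, beta))
        cuts = (np.cumsum(props) * len(idx)).astype(int)[:-1]
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return [np.array(sorted(ix)) for ix in client_idx]

# Example: 1000 nodes with 7 classes (Cora-like), n=10 clients as in
# the paper's small-dataset setting.
labels = np.random.default_rng(1).integers(0, 7, size=1000)
parts = dirichlet_partition(labels, n_clients=10, beta=1.0)
assert sum(len(p) for p in parts) == 1000
```

With β=1 the Dirichlet draw is uniform over the simplex, giving moderately skewed client label distributions; smaller β would produce more extreme non-i.i.d. splits.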