Graph Personalized Federated Learning via Client Network Learning

Authors: Jiachen Zhou, Han Xie, Carl Yang

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on three real-world graph datasets demonstrate the consistent effectiveness of our two major proposed modules, which also mutually verify the effectiveness of each other."
Researcher Affiliation | Academia | Jiachen Zhou, Department of Computer Science, Columbia University; Han Xie, Department of Computer Science, Emory University; Carl Yang, Department of Computer Science, Emory University
Pseudocode | Yes | "The detailed pseudo-code is presented in Algorithm 1" (Algorithm 1: Graph Personalized Federated Learning)
Open Source Code | Yes | "All codes and data can be found in https://github.com/Jiachen2cc/Graph-Personalized-Federated-Learning."
Open Datasets | Yes | "We utilize the three most widely used graph classification benchmark datasets from two domains (Morris et al., 2020), including two molecule datasets (NCI1, Yeast) and a bioinformatics dataset (PROTEINS)."
Dataset Splits | Yes | "We design label heterogeneity settings following a practical data split mechanism (Wang et al., 2020; Lee et al., 2021; Luo et al., 2021), which is controlled by the Dirichlet distribution Dir(α). The setting becomes more heterogeneous as the value of α decreases. We consider α = 0.5, 1, 5 to represent strong, moderate, and weak heterogeneity in real-world scenarios, respectively. These settings are combined with varying levels of data scarcity, represented by client numbers k = 15, 20, and 25, yielding a total of nine distinct combinations. All experiments are run with five-fold cross-validation for three repetitions under fixed random seed 0."
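The Dirichlet-based label split described above can be sketched as follows. This is an illustrative reimplementation, not the authors' exact splitting code; the function name and signature are assumptions. Each class's samples are divided among the k clients according to proportions drawn from Dir(α), so smaller α yields more label-skewed clients.

```python
import numpy as np

def dirichlet_label_split(labels, num_clients, alpha, seed=0):
    """Partition sample indices across clients with label skew drawn from
    Dir(alpha). Smaller alpha -> more heterogeneous label distributions.
    Illustrative sketch only, not the paper's exact implementation."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        # shuffle the indices of class c, then split them by Dirichlet proportions
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci, dtype=int) for ci in client_indices]
```

With α = 0.5 most clients see a few dominant classes, while α = 5 approaches a uniform split, matching the strong/weak heterogeneity levels in the paper's setup.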
Hardware Specification | No | The paper does not explicitly mention any specific hardware (e.g., GPU models, CPU types) used for running the experiments.
Software Dependencies | No | The paper reports training hyperparameters but no software versions: "Local training uses a batch size of 128, the Adam (Kingma & Ba, 2014) optimizer with the learning rate of 1e-3 and the weight decay of 5e-4. All FL methods are trained for 200 communication rounds with 1 local epoch in each communication round."
Experiment Setup | Yes | "We utilize three-layer GINs with a hidden size of 64 as local models. Local training uses a batch size of 128, the Adam (Kingma & Ba, 2014) optimizer with the learning rate of 1e-3 and the weight decay of 5e-4. All FL methods are trained for 200 communication rounds with 1 local epoch in each communication round. For GPFL, we generate 20 random graphs with 30 nodes to compute functional embedding. The graph learner is trained for 100 epochs during each communication round. The hyperparameters γ in Eq. 4 and β in Eq. 8 are both set to 0.95 across all settings."
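The experiment setup above can be collected into a single configuration sketch, which is useful when attempting a reproduction. The dictionary layout and key names below are assumptions; the values are the ones stated in the paper.

```python
# Hyperparameters as reported in the paper; the structure/key names
# are illustrative, not taken from the authors' code.
GPFL_CONFIG = {
    "model": {"type": "GIN", "num_layers": 3, "hidden_dim": 64},
    "local_training": {
        "batch_size": 128,
        "optimizer": "Adam",
        "learning_rate": 1e-3,
        "weight_decay": 5e-4,
        "local_epochs": 1,          # per communication round
    },
    "federated": {"communication_rounds": 200},
    "gpfl": {
        "num_random_graphs": 20,    # random graphs for functional embedding
        "nodes_per_graph": 30,
        "graph_learner_epochs": 100,  # per communication round
        "gamma": 0.95,              # Eq. 4
        "beta": 0.95,               # Eq. 8
    },
}
```

A reproduction would still need the pieces the paper leaves unspecified, notably software versions and hardware, which the reproducibility table flags as missing.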