ADPFedGNN: Adaptive Decoupling Personalized Federated Graph Neural Network

Authors: Zeli Guan, Yawen Li, Junping Du, Runqing Tang, Xiaolong Meng

Venue: IJCAI 2025

Reproducibility Assessment (Variable: Result, followed by the LLM response):
Research Type: Experimental. Experimental results on three public datasets demonstrate that ADPFedGNN outperforms existing methods, achieving average improvements of 5.66%, 5.83%, and 12.45% in ACC, F1, and Recall, respectively. Section 4 is titled "Experimental Analysis" and details experiments on public datasets, baseline comparisons, and ablation studies.
Researcher Affiliation: Academia. All authors are affiliated with Beijing University of Posts and Telecommunications and the Beijing Key Laboratory of Intelligent Telecommunication Software and Multimedia. The email domains include "@bupt.edu.cn", an academic institution domain. While "@126.com" is a public email service, its use alongside university affiliations suggests an academic context for these authors.
Pseudocode: Yes. Algorithm 1, "Training process of ADPFedGNN for a single epoch":
1: Input: Local data G, batch size B.
2: Output: θ_µ, θ_logσ², θ_gnn, θ_cls, θ_q.
3: Initialize parameters θ_gnn, θ_cls, θ_µ, θ_logσ², M_global, M_local.
4: Client-Side Training:
5: for batch b = 0 to B − 1 do
6:     Compute actions with Equation (2) to filter random neighbors and obtain N(i, a_{i,t,0}) and N(i, a_{i,t,1});
7:     Extract global and local features using Equations (4) and (5);
8:     Estimate µ and logσ² using Equation (7);
9:     Compute classification task loss using Equation (12);
10:    Update parameters θ_gnn, θ_cls;
11:    Update parameters θ_µ, θ_logσ² using the log-likelihood loss in Equation (8);
12:    Calculate reward using Equation (3);
13:    Save experience (s_t, a_t, s'_t, a'_t, R(s_t, a_t)) in the replay buffer;
14:    Train Q-network θ_q on sampled mini-batches from the replay buffer;
15: end for
16: Server-Side Aggregation:
17: Perform federated aggregation for θ_µ, θ_logσ², θ_gnn, θ_cls, θ_q through Equations (14) and (15).
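To make the control flow of Algorithm 1 concrete, here is a minimal structural sketch of the client-side loop. This is not the authors' code: the `TinyQNet` and `TinyGNN` classes, the reward-as-negative-loss stand-in, and the mini-batch size are all hypothetical placeholders for the components the algorithm references.

```python
import random

class TinyQNet:
    """Placeholder Q-network: chooses one of two neighbor-filtering actions."""
    def act(self, state):
        return random.randrange(2)          # stands in for Eq. (2)
    def update(self, minibatch):
        pass                                # a DQN gradient step would go here

class TinyGNN:
    """Placeholder for the GNN encoder, classifier, and distribution heads."""
    def extract(self, state, action):
        return [x * (action + 1) for x in state]   # stands in for Eqs. (4)-(5)
    def train_step(self, features):
        # Stands in for computing and minimizing the losses of Eqs. (7), (12), (8).
        return sum(f * f for f in features) / len(features)

def train_epoch(batches, q_net, gnn, replay_buffer):
    """Single-epoch client loop mirroring lines 5-15 of Algorithm 1."""
    for state in batches:
        action = q_net.act(state)                    # filter neighbors
        feats = gnn.extract(state, action)           # global/local features
        loss = gnn.train_step(feats)                 # classification + likelihood losses
        reward = -loss                               # placeholder for Eq. (3)
        replay_buffer.append((state, action, state, reward))
        minibatch = random.sample(replay_buffer, min(4, len(replay_buffer)))
        q_net.update(minibatch)                      # train Q-network
    return replay_buffer
```

The server-side aggregation (lines 16-17) would then average the resulting parameter sets across selected clients.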
Open Source Code: No. The paper does not contain any explicit statement about making the source code publicly available, nor does it provide a link to a code repository.
Open Datasets: Yes. We conduct experiments on three public graph datasets: Cora [Sen et al., 2008], CiteSeer [Sen et al., 2008], and PubMed [Namata et al., 2012].
Dataset Splits: No. The paper describes how datasets are partitioned across clients for federated learning (e.g., Louvain community partitioning and Dirichlet label partitioning) and mentions varying client selection rates, but it does not give concrete training, validation, and test splits, either per client or overall, as exact percentages or sample counts. It defers to "detailed experimental settings" in an Appendix that is not provided.
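For context, Dirichlet label partitioning, one of the two client-partitioning schemes the paper mentions, is commonly implemented as below. The paper does not report its concentration parameter, so the `alpha=0.5` default and the function name are illustrative assumptions, not the authors' setting.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with Dirichlet(alpha) label skew.

    Smaller alpha yields more heterogeneous (non-IID) client label
    distributions; alpha=0.5 is an illustrative choice, not a value
    reported in the paper.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        # Draw per-client proportions of class c, then cut the index list.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(client_indices, np.split(idx, cuts)):
            client.extend(part.tolist())
    return client_indices
```

Each returned list holds the node indices assigned to one client; every sample is assigned exactly once.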
Hardware Specification: No. The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies: No. The paper mentions using GNNs, GraphSAGE, GAT, and DQN as components but does not specify any software libraries or frameworks with their version numbers (e.g., PyTorch, TensorFlow, Python version).
Experiment Setup: Yes. The DQN component for reinforcement learning is implemented with two hidden layers, each of size 128. A fixed client selection ratio of 0.25 is applied throughout the experiments. Model performance as a function of the hyperparameter β peaks around β = 0.5, and the optimal performance is achieved with λ_mi = 0.3 and λ_reg = 0.003.
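The only architectural detail the paper gives for the Q-network is the two 128-unit hidden layers, so a forward pass consistent with that description could be sketched as follows; the state/action dimensions, ReLU activation, and He-style initialization are assumptions, not reported settings.

```python
import numpy as np

def build_q_network(state_dim, n_actions, hidden=128, seed=0):
    """Q-network forward pass with two hidden layers of size 128.

    Only the two 128-unit hidden layers come from the paper; the
    input/output dimensions, ReLU activations, and initialization
    scheme are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    sizes = [state_dim, hidden, hidden, n_actions]
    params = [(rng.standard_normal((a, b)) * np.sqrt(2.0 / a), np.zeros(b))
              for a, b in zip(sizes[:-1], sizes[1:])]

    def q_values(state):
        h = np.asarray(state, dtype=float)
        for w, b in params[:-1]:
            h = np.maximum(h @ w + b, 0.0)   # ReLU hidden layers
        w, b = params[-1]
        return h @ w + b                      # linear Q-value head

    return q_values
```

Calling the returned function on a state vector yields one Q-value per action, from which the neighbor-filtering action would be selected.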