Modeling Inter-Intra Heterogeneity for Graph Federated Learning
Authors: Wentao Yu, Shuo Chen, Yongxin Tong, Tianlong Gu, Chen Gong
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on six homophilic and five heterophilic graph datasets (eleven widely used benchmarks in total), in both non-overlapping and overlapping settings, demonstrate the effectiveness of the method against nine state-of-the-art baselines. Specifically, FedIIH outperforms the second-best method by an average margin of 5.79% across all heterophilic datasets. |
| Researcher Affiliation | Academia | 1 School of Computer Science and Engineering, Nanjing University of Science and Technology, China; 2 Center for Advanced Intelligence Project, RIKEN, Japan; 3 State Key Laboratory of Complex & Critical Software Environment, Beihang University, China; 4 Engineering Research Center of Trustworthy AI (Ministry of Education), Jinan University, China; 5 Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China |
| Pseudocode | No | The paper describes the methodology using equations and textual explanations, including a graphical model (Figure 2) and a framework comparison (Figure 1), but does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/blgpb/FedIIH |
| Open Datasets | Yes | Extensive experiments on eleven widely used benchmark datasets, including both homophilic and heterophilic graphs, demonstrate the effectiveness of the proposed FedIIH, which outperforms the second-best method by an average margin of 5.79% on the heterophilic graph data. Homophilic datasets: Cora, CiteSeer, PubMed, Amazon-Computer, Amazon-Photo, ogbn-arxiv. Heterophilic datasets: Roman-empire, Amazon-ratings, Minesweeper, Tolokers, Questions. |
| Dataset Splits | No | The paper mentions 'We use both the non-overlapping and overlapping subgraph partitioning settings.' and presents results for '5 Clients', '10 Clients', '20 Clients', '30 Clients', '50 Clients'. This describes how the data is distributed across clients but does not specify the train/test/validation split ratios or methodology for the node classification task within each subgraph. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory details) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions 'τ denotes a hyperparameter for scaling the similarity score' and discusses 'K latent factors'. However, it does not provide concrete values for these or other crucial hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings) used in the main experiments, which are necessary to reproduce the experimental setup. |
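The last row notes that the paper introduces a hyperparameter τ for scaling a similarity score but never reports its value. As a minimal sketch of how such temperature-scaled similarity is typically used to weight client aggregation in federated settings (the function names, the choice `tau=0.5`, and the softmax-style normalization are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def scaled_similarity(u, v, tau=0.5):
    """Cosine similarity between two client representation vectors,
    sharpened by a temperature hyperparameter tau.
    NOTE: tau=0.5 is an assumed value; the paper does not report it."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.exp(cos / tau)

def aggregation_weights(reps, tau=0.5):
    """Normalize pairwise scaled similarities row-wise so that each
    client's aggregation weights over all clients sum to one."""
    sims = np.array([[scaled_similarity(a, b, tau) for b in reps]
                     for a in reps])
    return sims / sims.sum(axis=1, keepdims=True)
```

A smaller τ makes the weighting more peaked (each client aggregates mostly from highly similar clients), while a larger τ approaches uniform averaging; this is why the unreported value matters for reproduction.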