$S^2$FGL: Spatial Spectral Federated Graph Learning
Authors: Zihan Tan, Suyuan Huang, Guancheng Wan, Wenke Huang, He Li, Mang Ye
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on multiple datasets demonstrate the superiority of S2FGL. The paper includes a dedicated "5. Experiments" section, which details experimental setup, results, and ablation studies, further supporting its classification as experimental research. |
| Researcher Affiliation | Academia | All authors are affiliated with "National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China." The corresponding author, Mang Ye, lists an email address (redacted here as "EMAIL") that likewise points to an academic institution. |
| Pseudocode | No | The paper describes the methodology in narrative text and framework illustrations (e.g., Figure 3: Framework Illustration) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/Wonder7racer/S2FGL.git |
| Open Datasets | Yes | We conducted experiments on various datasets to validate the superiority of our framework S2FGL. The homophilic graph datasets include Cora, Citeseer, and Pubmed, while the heterophilic graph datasets comprise Texas, Wisconsin, and Minesweeper. The following provides a description of each dataset. Cora (McCallum et al., 2000) dataset... Citeseer (Giles et al., 1998) dataset... Pubmed (Sen et al., 2008) dataset... Texas and Wisconsin datasets are subsets of the WebKB dataset (Craven et al., 1998)... Minesweeper (Baranovskiy et al., 2023) dataset... |
| Dataset Splits | Yes | For each dataset, we divide the nodes into training, validation, and testing sets with ratios of 60%, 20%, and 20%, respectively. |
| Hardware Specification | No | The supercomputing system at the Supercomputing Center of Wuhan University supported the numerical calculations in this paper. This statement is too general and does not provide specific hardware details like GPU/CPU models, processor types, or memory amounts. |
| Software Dependencies | No | The paper mentions using the ACM-GCN (Luan et al., 2022) as the base model but does not specify any other software dependencies with version numbers, such as Python, PyTorch, or other libraries used for implementation. |
| Experiment Setup | Yes | $\mathcal{L} = \mathcal{L}_{CE} + \lambda_1 \mathcal{L}_{FKD} + \lambda_2 \mathcal{L}_{FGMA}$ (Eq. 13), where $\mathcal{L}_{CE}$ denotes the standard cross-entropy loss for node classification, while $\lambda_1$ and $\lambda_2$ are balancing hyperparameters for the proposed methods NLIR and FGMA. ... For NLIR, $\lambda_1$ is chosen from {100, 50, 10, 1}; for FGMA, $\lambda_2$ from {0.01, 0.05, 0.5, 1}. ... We conduct each experiment five times and report the average accuracy from the last five communication epochs as the final performance. ... We simulate various collaborative scenarios by configuring the number of clients to 10 for the Cora, Citeseer, Pubmed, and Minesweeper datasets, and 3 for the Texas and Wisconsin datasets. |
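The overall objective quoted above (Eq. 13) can be sketched as a simple weighted sum. This is only an illustrative sketch: the paper does not spell out the exact forms of the NLIR and FGMA loss terms here, so `fkd` and `fgma` below stand in as already-computed placeholder values, and the function name `combined_loss` is our own.

```python
def combined_loss(ce: float, fkd: float, fgma: float,
                  lam1: float, lam2: float) -> float:
    """Eq. (13): L = L_CE + lambda1 * L_FKD + lambda2 * L_FGMA.

    ce   -- standard cross-entropy loss for node classification
    fkd  -- loss term for NLIR (placeholder value)
    fgma -- loss term for FGMA (placeholder value)
    lam1, lam2 -- balancing hyperparameters
    """
    return ce + lam1 * fkd + lam2 * fgma


# Hypothetical loss values combined with weights drawn from the grids
# the paper reports (lambda1 in {100, 50, 10, 1}, lambda2 in {0.01, 0.05, 0.5, 1}):
loss = combined_loss(ce=0.7, fkd=0.02, fgma=0.3, lam1=10, lam2=0.5)
```

In a real training loop each term would be a differentiable tensor and the sum would be backpropagated; the scalar version above only illustrates how the two hyperparameters trade off the auxiliary terms against the classification loss.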