Federated Node-Level Clustering Network with Cross-Subgraph Link Mending
Authors: Jingxin Liu, Renda Han, Wenxuan Tu, Haotian Wang, Junlong Wu, Jieren Cheng
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on five graph benchmark datasets demonstrate the effectiveness and superiority of the proposed FedNCN compared to its competitors. |
| Researcher Affiliation | Academia | 1School of Cyberspace Security, Hainan University, Haikou, China 2School of Computer Science and Technology, Hainan University, Haikou, China. Correspondence to: Wenxuan Tu <EMAIL>, Jieren Cheng <EMAIL>. |
| Pseudocode | Yes | Algorithm 1: Training Procedure of FedNCN; Algorithm 2: FedNCN Client Algorithm; Algorithm 3: FedNCN Server Algorithm |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | Specifically, we use CiteSeer (Liu et al., 2023a), PubMed (Jiang et al., 2024), Amazon-Computer, Amazon-Photo (Lin et al., 2021), and Questions (Platonov et al., 2024) as our experimental benchmark datasets. |
| Dataset Splits | Yes | Following the experimental setup from Fed TAD (Zhu et al., 2024), we construct distributed subgraphs by dividing the dataset into 5 clients, 10 clients, and 20 clients, respectively, where each client has a subgraph that is part of a complete graph. |
| Hardware Specification | Yes | All methods are implemented using PyTorch 2.4.0 and a single NVIDIA GeForce RTX 4090 GPU. |
| Software Dependencies | Yes | All methods are implemented using PyTorch 2.4.0 and a single NVIDIA GeForce RTX 4090 GPU. |
| Experiment Setup | Yes | We utilize a four-layer GNN on both the client and the server to obtain node embeddings, with hidden layer dimensions of 500-500-2000-10. Moreover, we use a one-layer MLP to obtain the local clustering signals, which are then uploaded to the server. During model optimization, we adopt the Adam optimizer (Xiao et al., 2024) with a learning rate of 1e-3. The client-server interaction is conducted 20 times, with the local model training for 10 epochs during each interaction. |
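The "Dataset Splits" row describes dividing one complete graph into per-client subgraphs. The paper does not publish its partitioning code, so the sketch below is only an illustration of that idea under simple assumptions (random node assignment, induced subgraphs): cross-subgraph edges are dropped at partition time, which is exactly the kind of lost link that FedNCN's cross-subgraph link mending aims to recover.

```python
import random

def partition_graph(num_nodes, edges, num_clients, seed=0):
    """Randomly assign nodes to clients and keep only intra-client edges.

    Returns per-client subgraphs plus the list of cross-subgraph edges
    that the partitioning drops. Hypothetical helper, not the paper's code.
    """
    rng = random.Random(seed)
    assignment = {n: rng.randrange(num_clients) for n in range(num_nodes)}
    subgraphs = {c: {"nodes": [], "edges": []} for c in range(num_clients)}
    for n, c in assignment.items():
        subgraphs[c]["nodes"].append(n)
    dropped = []  # links severed by the split (candidates for "mending")
    for u, v in edges:
        if assignment[u] == assignment[v]:
            subgraphs[assignment[u]]["edges"].append((u, v))
        else:
            dropped.append((u, v))
    return subgraphs, dropped
```

Every original edge ends up either inside exactly one client's subgraph or in the `dropped` list, so no link is silently duplicated or lost from the accounting.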
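The "Experiment Setup" row fixes a training schedule of 20 client-server interactions with 10 local epochs each at learning rate 1e-3. A minimal sketch of that schedule, using a toy scalar objective per client and FedAvg-style averaging at the server (an assumption for illustration; FedNCN's actual models and aggregation differ):

```python
def fed_rounds(client_targets, rounds=20, local_epochs=10, lr=1e-3):
    """Toy federated schedule: each client locally minimises (w - target)^2
    for `local_epochs` steps, then the server averages client weights.
    Defaults mirror the reported setup (20 interactions x 10 epochs, lr 1e-3).
    """
    global_w = 0.0
    for _ in range(rounds):
        local_ws = []
        for target in client_targets:
            w = global_w  # client starts from the broadcast global model
            for _ in range(local_epochs):
                w -= lr * 2.0 * (w - target)  # gradient step on (w - target)^2
            local_ws.append(w)
        global_w = sum(local_ws) / len(local_ws)  # server-side averaging
    return global_w
```

With all clients sharing the same target, the averaged weight moves monotonically toward it over the 20 rounds; with conflicting targets, it settles at their mean, which is the usual behaviour of plain averaging under this schedule.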