Federated Graph-Level Clustering Network
Authors: Jingxin Liu, Jieren Cheng, Renda Han, Wenxuan Tu, Jiaxin Wang, Xin Peng
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments across multiple non-IID graph datasets have demonstrated the effectiveness and superiority of FedGCN against its competitors. ... Benchmark Datasets ... Baseline Methods ... Implementation Details ... Evaluation Metrics ... Experimental Results |
| Researcher Affiliation | Academia | (1) School of Cyberspace Security, Hainan University, Haikou, China; (2) School of Computer Science and Technology, Hainan University, Haikou, China; (3) Hainan Blockchain Technology Engineering Research Center, Haikou, China; (4) School of Computer, National University of Defense Technology, Changsha, China |
| Pseudocode | Yes | Algorithm 1: Federated Graph-Level Clustering Network |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | To verify the effectiveness of our method, we employ 15 benchmark graph datasets across different domains, including Small Molecules (e.g., MUTAG, BZR, COX2, DHFR, PTC_MR, AIDS, BZR_MD), Bioinformatics (e.g., DD, PROTEINS), Synthetic (e.g., SYNTHETIC), Social Networks (e.g., COLLAB, IMDB-MULTI), and Computer Vision (e.g., Letter-high, Letter-low, Letter-med) (Morris et al. 2020). Table 1 summarizes the detailed information of the above datasets. |
| Dataset Splits | No | The paper describes how non-IID settings are created across clients for clustering, but it does not specify conventional train/validation/test dataset splits. |
| Hardware Specification | Yes | All methods are implemented using PyTorch, and experiments are conducted on a single NVIDIA GeForce RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions 'All methods are implemented using PyTorch' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We employ a three-layer GIN (Xu et al. 2018) to obtain the graph-level structure-oriented embedding, with the hidden layer dimension set to 64 and a batch size of 128 for each local model. During model optimization, we use the Adam optimizer (Kingma and Ba 2014) with a learning rate of 1e-3. The client interacts with the server 10 times, performing 10 training epochs during each interaction. |
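The communication schedule reported above (10 server-client interactions, 10 local epochs each, Adam with a 1e-3 learning rate) can be sketched as a minimal federated loop. This is an illustrative sketch only: the FedAvg-style parameter averaging, the `local_train` placeholder, and all function names below are assumptions, not the paper's actual implementation, which uses a three-layer GIN trained with PyTorch.

```python
# Hedged sketch of the federated training schedule described in the paper.
# ROUNDS, LOCAL_EPOCHS, and LEARNING_RATE come from the paper; everything
# else (FedAvg-style averaging, the toy local update) is assumed for
# illustration.

ROUNDS = 10           # client-server interactions (from the paper)
LOCAL_EPOCHS = 10     # training epochs per interaction (from the paper)
LEARNING_RATE = 1e-3  # Adam learning rate (from the paper)


def average_params(client_params):
    """Element-wise average of the clients' parameter vectors
    (FedAvg-style aggregation; an assumption, not stated in the table)."""
    n = len(client_params)
    return [sum(vals) / n for vals in zip(*client_params)]


def local_train(params, epochs, lr):
    """Placeholder for local optimization. In the paper this would be a
    three-layer GIN (hidden dim 64, batch size 128) trained with Adam;
    here we just shrink parameters so the loop is runnable."""
    for _ in range(epochs):
        params = [p - lr * p for p in params]
    return params


def federated_run(num_clients=3, dim=4):
    """Run ROUNDS interactions: broadcast, train locally, aggregate."""
    global_params = [1.0] * dim
    for _ in range(ROUNDS):
        client_params = [
            local_train(list(global_params), LOCAL_EPOCHS, LEARNING_RATE)
            for _ in range(num_clients)
        ]
        global_params = average_params(client_params)
    return global_params
```

The sketch makes the reported schedule concrete: each round every client performs 10 local epochs before the server aggregates, for 10 rounds total (100 local epochs per client overall).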