Federated Graph Learning with Graphless Clients
Authors: Xingbo Fu, Song Wang, Yushun Dong, Binchi Zhang, Chen Chen, Jundong Li
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments over five real-world datasets to verify the superiority of the proposed FedGLS. In particular, we aim to answer the following questions. RQ1: How does FedGLS perform compared with other state-of-the-art baselines? RQ2: How stable is FedGLS under different local epochs and various graphless client ratios? |
| Researcher Affiliation | Academia | Xingbo Fu, Department of Electrical and Computer Engineering, University of Virginia |
| Pseudocode | Yes | Algorithm 1 The detailed algorithm of FedGLS. Input: global parameters θ, ϕ; learning rates α, β, γ; local epoch E. Output: θ |
| Open Source Code | Yes | Our implementation of FedGLS is available in the supplementary materials. |
| Open Datasets | Yes | We synthesize the distributed graph data based on five common real-world datasets, i.e., Cora (Sen et al., 2008), CiteSeer (Sen et al., 2008), PubMed (Sen et al., 2008), Flickr (Zeng et al., 2020), and ogbn-arxiv (Hu et al., 2020). |
| Dataset Splits | Yes | Following the setting in (Zhang et al., 2021b), we randomly select nodes on each client and use 60% for training, 20% for validation, and the remaining 20% for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. It only mentions general capabilities of client machines (e.g., "clients may not be equipped with powerful machines") which is not a specification of the hardware used for the experiments themselves. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and implementing models like GCN and MLP, but it does not specify any version numbers for these or other key software libraries (e.g., PyTorch, TensorFlow, Python version) that would be needed for replication. |
| Experiment Setup | Yes | The learning rates α and β are set to 0.01 in the GNN model and the feature encoder, and γ is set to 0.001 in the graph learner. The temperature τ in the contrastive loss is set to 0.2. The number of local epochs E is set to 5. The number of rounds is set to 100 for Cora and CiteSeer, 200 for PubMed, 300 for Flickr, and 2,000 for ogbn-arxiv. |
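The per-client 60/20/20 random node split quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration assuming NumPy; the function name `split_client_nodes` and the seeding convention are hypothetical, not taken from the paper's released code.

```python
import numpy as np

def split_client_nodes(num_nodes: int, seed: int = 0):
    """Randomly partition a client's nodes into 60% train, 20% validation,
    and the remaining 20% test, mirroring the split described in the review
    (the setting of Zhang et al., 2021b). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)          # shuffle node indices
    n_train = int(0.6 * num_nodes)
    n_val = int(0.2 * num_nodes)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]          # remainder goes to test
    return train_idx, val_idx, test_idx
```

Each client would call this independently on its own node set, so the splits are local to each participant rather than global across the federation.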
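The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration mapping. The key names below are illustrative assumptions; only the numeric values come from the paper's stated setup.

```python
# Hyperparameters reported for FedGLS (key names are hypothetical).
FEDGLS_HPARAMS = {
    "lr_gnn": 0.01,            # α: learning rate of the GNN model
    "lr_encoder": 0.01,        # β: learning rate of the feature encoder
    "lr_graph_learner": 0.001, # γ: learning rate of the graph learner
    "temperature": 0.2,        # τ in the contrastive loss
    "local_epochs": 5,         # E: local epochs per communication round
    "rounds": {                # communication rounds per dataset
        "Cora": 100,
        "CiteSeer": 100,
        "PubMed": 200,
        "Flickr": 300,
        "ogbn-arxiv": 2000,
    },
}
```

A training script could read the per-dataset round count via `FEDGLS_HPARAMS["rounds"][dataset_name]` while sharing the remaining optimizer settings across datasets.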