Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Federated Multi-view Graph Clustering with Incomplete Attribute Imputation
Authors: Wei Feng, Zeyu Bi, Qianqian Wang, Bo Dong
IJCAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the superiority of our method in Fed MVC tasks with incomplete views. |
| Researcher Affiliation | Academia | ¹College of Information Engineering, Northwest A&F University, Yangling, China; ²School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, China; ³School of Telecommunications Engineering, Xidian University, Xi'an, China; ⁴School of Continuing Education, Xi'an Jiaotong University, Xi'an, China. EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1 FMVC-IAI. Input: the incomplete multi-view data {X^(m)}_{m=1}^{M} distributed across M clients, the number of clusters K, the number of communication rounds R. Output: clustering result Y. 1: Each client C_m initializes a KNN graph. 2: S initializes P, Y, A_g. 3: for not reaching R rounds do 4: Local training on C_m: 5: for each client C_m in parallel do 6: generate Z^(m) from X^(m) with an AE; 7: impute features with the global graph A_g; 8: train locally with the loss in Eq. (8) to obtain H^(m); 9: extract the anchor graph G^(m) from H^(m); 10: update E^(m) using Y and H^(m); 11: upload E^(m) and G^(m) to S; 12: end for. 13: Global fusion on S: 14: S updates P by Eq. (13); 15: S updates Y; 16: S updates A_g using the KNN graphs and G^(m); 17: S distributes P, A_g, and Y to each client C_m; 18: end for |
| Open Source Code | No | The paper does not provide any information or links regarding open-source code availability for the described methodology. |
| Open Datasets | Yes | We conducted experiments on three multi-view datasets: HW [Winn and Jojic, 2005] comprises multi-feature data for digits 0 through 9, with 200 samples for each class. ... Outdoor Scene [Monadjemi et al., 2002] contains 15 scene categories with both indoor and outdoor environments, 4485 images in total. ... Noisy MNIST [Wang et al., 2015] View 1 consists of the original MNIST images... |
| Dataset Splits | No | Data Setting: Each view in the dataset is distributed across different clients, and clients cannot access each other's data. Additionally, following [Feng et al., 2024], we set the missing rate η, randomly select ηN missing samples, and delete half of the view data to simulate missing data with different missing rates in various federated scenarios. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | We conducted experiments to perform parameter analysis on several key parameters involved in the proposed method. The experimental results are shown in Figure 3 and Figure 4. Specifically, we tested the clustering performance on the HW dataset under a missing rate setting of η = 0.9, varying the following parameters: the number of neighbors k used for KNN graph construction, ranging from 5 to 45 with an interval of 5; the maximum number of communication rounds R, ranging from 1 to 15 with an interval of 1; the clustering loss balance coefficient α ∈ {10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 1, 10, 100}; the number of selected anchors r, ranging from 10 to 1000. |
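The round structure quoted in the Pseudocode row (local KNN graphs, graph-based imputation of missing views, server-side fusion) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`knn_graph`, `impute`, `federated_round`) are hypothetical, plain KNN-mean imputation stands in for the paper's autoencoder + anchor-graph machinery (Eqs. 8 and 13), and for simplicity the server here sees masked raw features, which a real federated deployment would avoid.

```python
import numpy as np

def knn_graph(x, k):
    """Row-stochastic KNN adjacency: each row averages its k nearest neighbors."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a sample is not its own neighbor
    idx = np.argsort(d, axis=1)[:, :k]   # k nearest neighbors per row
    a = np.zeros_like(d)
    rows = np.repeat(np.arange(len(x)), k)
    a[rows, idx.ravel()] = 1.0 / k
    return a

def impute(x, mask, a_global):
    """Fill missing rows (mask == False) with the graph-weighted neighbor mean."""
    x_hat = x.copy()
    filled = a_global @ np.where(mask[:, None], x, 0.0)
    x_hat[~mask] = filled[~mask]
    return x_hat

def federated_round(views, masks, k=5):
    """One communication round: server builds a global graph, clients impute with it."""
    # Server: fuse a global graph from the (masked) concatenation of all views --
    # a crude stand-in for the anchor-graph fusion step in the paper.
    concat = np.hstack([np.where(m[:, None], v, 0.0) for v, m in zip(views, masks)])
    a_global = knn_graph(concat, k)
    # Clients: impute their missing samples locally using the shared graph.
    return [impute(v, m, a_global) for v, m in zip(views, masks)], a_global

# Toy usage: two clients, each holding one view, ~30% of samples missing per view.
rng = np.random.default_rng(0)
views = [rng.normal(size=(20, 4)), rng.normal(size=(20, 6))]
masks = [rng.random(20) > 0.3 for _ in views]   # True = observed
imputed, a_global = federated_round(views, masks, k=5)
```

Observed samples are left untouched by `impute`; only missing rows are replaced, mirroring how the algorithm imputes before local training rather than overwriting real data.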