Federated Incomplete Multi-view Clustering with Globally Fused Graph Guidance
Authors: Guoqing Chao, Zhenghao Zhang, Lei Meng, Jie Wen, Dianhui Chu
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate the effectiveness and superiority of FIMCFG. Our code is publicly available at https://github.com/PaddiHunter/FIMCFG. |
| Researcher Affiliation | Academia | ¹School of Computer Science and Technology, Harbin Institute of Technology, Weihai, China; ²School of Software, Shandong University, Jinan, China; ³Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China. Correspondence to: Guoqing Chao <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Optimization algorithm for FIMCFG |
| Open Source Code | Yes | Our code is publicly available at https://github.com/PaddiHunter/FIMCFG. |
| Open Datasets | Yes | Our experiments were conducted on four widely used multi-view datasets. Specifically, Scene-15 (Lazebnik et al., 2006; Fei-Fei & Perona, 2005) consists of 4485 scene images classified into 15 classes, with each sample represented by three views. HandWritten (HW)¹ contains 2000 samples in ten numeric categories, each consisting of six views. Landuse21 (Yang & Newsam, 2010) consists of 2100 satellite images in 21 categories, 100 images per category, represented by three views. 100leaves² consists of 1600 image samples of 100 plants, each represented by three different views. ... ¹ https://archive.ics.uci.edu/dataset/72/multiple+features ² https://archive.ics.uci.edu/ml/datasets/Onehundred+plant+species+leaves+data+set |
| Dataset Splits | No | The paper describes how the incomplete datasets are constructed, including the missing rates (e.g., δ = 0.5, with δ ranging from 0.1 to 0.7) and how data heterogeneity is introduced (via a Dirichlet distribution). However, it does not explicitly provide training, validation, or test splits. For clustering tasks, evaluation is typically performed on the entire dataset, so the traditional train/validation/test splits of supervised learning are not expected to be specified. |
| Hardware Specification | No | The paper does not explicitly mention the hardware specifications (e.g., GPU/CPU models, memory) used to conduct the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA x.x) needed to replicate the experiment. |
| Experiment Setup | Yes | During client training, the total loss function defined by Eq. (10) has two hyperparameters, γ1 and γ2, which trade off the graph reconstruction loss and the content reconstruction loss. We conducted experiments with various settings of the two hyperparameters, ranging from 10⁻³ to 10³, at δ = 0.5, as shown in Figure 3. ... Based on the experimental results, we recommend setting γ1 to 1 and γ2 to 0.1 for optimal performance. |
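To make the reported setup concrete, the weighted combination of losses quoted above can be sketched as follows. This is a minimal illustration only: the function name `client_total_loss` and the assumption that the base objective is a clustering loss plus two weighted reconstruction terms are ours, not the authors' Eq. (10); only the recommended weights γ1 = 1 and γ2 = 0.1 come from the paper.

```python
def client_total_loss(clustering_loss: float,
                      graph_recon_loss: float,
                      content_recon_loss: float,
                      gamma1: float = 1.0,
                      gamma2: float = 0.1) -> float:
    """Hypothetical sketch of a client-side objective with two trade-off
    hyperparameters, in the spirit of the setup described above.

    gamma1 weights the graph reconstruction term and gamma2 the content
    reconstruction term; the defaults follow the paper's recommended
    settings (gamma1 = 1, gamma2 = 0.1).
    """
    return (clustering_loss
            + gamma1 * graph_recon_loss
            + gamma2 * content_recon_loss)


# Example: with per-term losses of 1.0, 2.0, and 3.0, the recommended
# weights yield 1.0 + 1 * 2.0 + 0.1 * 3.0 = 3.3.
loss = client_total_loss(1.0, 2.0, 3.0)
```

In the paper's sensitivity study, both weights were swept over 10⁻³ to 10³; a grid of such sweeps is what Figure 3 reports.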