Multi-view Clustering via Multi-granularity Ensemble
Authors: Jie Yang, Wei Chen, Feng Liu, Peng Zhou, Zhongli Wang, Xinyan Liang, Bingbing Jiang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that MGE consistently outperforms state-of-the-art methods across multiple datasets, validating its effectiveness and superiority in handling heterogeneous views. In this section, we present the experimental studies of the proposed MGE on a synthetic dataset and six real-world datasets. |
| Researcher Affiliation | Academia | (1) University of Technology Sydney, NSW, Australia; (2) The University of Sydney, NSW, Australia; (3) The University of Melbourne, VIC, Australia; (4) Anhui University, Hefei, China; (5) Hangzhou Normal University, Hangzhou, China; (6) Shanxi University, Taiyuan, China |
| Pseudocode | Yes | Algorithm 1 MGE framework. Input: multi-view data X = {X1, ..., Xv}, the cluster number c, and the granularity control parameter λ. 1: for each view Xi (including the fused view Xf) do 2: apply CHC to Xi to generate the multi-granularity label set Li by Eqs. (1)-(4); 3: end for; 4: construct the cluster-wise similarity graph from the label sets Li of all views by Eq. (5); 5: construct the transition probability matrix P by Eq. (6); 6: propagate cluster-wise similarities by Eq. (7); 7: compute the cluster-wise similarity matrix Z by Eq. (8); 8: construct the co-association matrix B by Eq. (9); 9: apply CHC to B to obtain the final clustering label set L by Eqs. (2) and (3). Output: the clustering result L. |
| Open Source Code | No | The paper does not provide any explicit statement or link for open-source code availability for the described methodology. |
| Open Datasets | Yes | In this section, we present the experimental studies of the proposed MGE on a synthetic dataset and six real-world datasets; three views of the synthetic data are shown in Figures 3 (a)-(c), and the detailed information of the real-world multi-view datasets is reported in Table 1. Datasets (classes / data size / feature size per view): 100Leaves (100 / 1600 / 192 = 64+64+64); UCI (10 / 2000 / 356 = 76+216+64); COIL20 (20 / 1440 / 11078 = 1024+3304+6750); Handwritten (10 / 2000 / 316 = 76+240); CMU-PIE (68 / 2856 / 90 = 30+30+30); ORL (40 / 400 / 1689 = 512+59+864+254). |
| Dataset Splits | No | The paper uses various datasets (listed in Table 1) for experiments but does not explicitly describe the training/test/validation splits or other dataset partitioning strategies. |
| Hardware Specification | No | The paper discusses computational complexity and runtime performance in Section 2.3 and Figure 4, but it does not specify the particular hardware (e.g., CPU, GPU models) used for conducting the experiments. |
| Software Dependencies | No | The paper describes the proposed MGE framework and its components but does not provide specific software dependencies or version numbers used for implementation. |
| Experiment Setup | Yes | The MGE framework contains only one hyperparameter λ, which controls the granularity of the clustering labels generated for all views, including the fused view. Figure 7 illustrates the ACC scores of MGE on three datasets when adjusting λ within the range [0.1:0.1:0.9] (i.e., 0.1 to 0.9 in steps of 0.1). It can be observed that setting λ around 0.5 achieves the best average performance across the three datasets. |
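The ensemble stage summarized in the Algorithm 1 excerpt above (build a co-association matrix from the per-view label sets, then cluster it) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: CHC is replaced here by SciPy's average-linkage hierarchical clustering, the similarity propagation of Eqs. (6)-(8) is omitted, and the function names are our own.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def co_association(label_sets):
    """Co-association matrix B: B[i, j] is the fraction of base labelings
    in which samples i and j fall in the same cluster. This plays the role
    of Eq. (9); the full MGE additionally weights cluster pairs by the
    propagated cluster-wise similarity Z, which is omitted here."""
    n = len(label_sets[0])
    B = np.zeros((n, n))
    for labels in label_sets:
        labels = np.asarray(labels)
        B += (labels[:, None] == labels[None, :]).astype(float)
    return B / len(label_sets)

def ensemble_cluster(label_sets, c):
    """Cluster the co-association matrix into c groups with average-linkage
    hierarchical clustering (a stand-in for the paper's CHC step)."""
    B = co_association(label_sets)
    D = squareform(1.0 - B, checks=False)  # distance = 1 - similarity
    tree = linkage(D, method="average")
    return fcluster(tree, t=c, criterion="maxclust")

# Toy example: three base labelings of six samples, e.g. one per view.
label_sets = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
]
print(ensemble_cluster(label_sets, c=2))
```

In this toy run, samples 0-1 and 4-5 co-occur in every base labeling, so the ensemble keeps them together and splits the two groups apart regardless of where the ambiguous middle samples land.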