MASTER: A Multi-granularity Invariant Structure Clustering Scheme for Multi-view Clustering
Authors: Suixue Wang, Shilin Zhang, Qingchen Zhang, Peng Li, Weiliang Huo
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive experiments on 8 real-world datasets show that MASTER achieves state-of-the-art performance compared to 11 baselines. |
| Researcher Affiliation | Academia | Suixue Wang1, Shilin Zhang2, Qingchen Zhang3, Peng Li4,5, and Weiliang Huo1. 1 School of Information and Communication Engineering, Hainan University; 2 College of Intelligence and Computing, Tianjin University; 3 School of Computer Science and Technology, Hainan University; 4 School of Computer Science and Technology, Dalian University of Technology; 5 Key Laboratory of Social Computing and Cognitive Intelligence (Dalian University of Technology), Ministry of Education |
| Pseudocode | Yes | Algorithm 1 MASTER |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to any code repositories. |
| Open Datasets | Yes | Benchmark Datasets: Plenty of experiments are conducted on 8 real-world datasets to validate the performance of MASTER. In detail, MNIST-USPS is a handwritten digit dataset that contains 5000 bi-view samples within 10 classes. BDGP is a Drosophila embryo dataset that contains 2500 bi-view samples within 5 classes. CCV is a video dataset with 6773 tri-view samples belonging to 20 classes. Fashion is an image dataset about products, which contains 10000 tri-view samples within 10 classes. Caltech-2V, Caltech-3V, Caltech-4V, and Caltech-5V are created from the Caltech dataset that consists of RGB images with multiple views. The statistics of the 8 datasets are listed in Table 1. |
| Dataset Splits | No | The paper mentions using several datasets (e.g., MNIST-USPS, BDGP, CCV, Fashion-MV, Caltech variations) but does not specify how these datasets were split into training, validation, or test sets. No percentages, counts, or explicit splitting methodologies are provided. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models (e.g., NVIDIA A100), CPU types, or memory amounts used for running the experiments. It only mentions the architecture and optimization process without hardware specifics. |
| Software Dependencies | No | The paper mentions using the "Adam optimizer" but does not specify its version or any other key software components (like programming languages, libraries, or frameworks) with their respective version numbers. |
| Experiment Setup | Yes | MASTER optimizes the overall networks via the Adam optimizer with a learning rate of 0.0003 across all datasets. In the optimization, the number of training epochs is set to 350. The batch size is set to 256. The values of α, β, and K are determined via the ablation experiments with the grid search strategy on each dataset. γ is set to 1 on all the datasets. |
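The reported setup (Adam, learning rate 0.0003, 350 epochs, batch size 256, γ = 1, with α, β, and K tuned per dataset by grid search) can be sketched as a configuration plus a grid-search loop. This is a minimal stdlib-only sketch; the candidate grids for α, β, and K and the `evaluate` callback are illustrative assumptions, since the paper does not list the candidate values.

```python
from itertools import product

# Training configuration reported in the paper (shared across all 8 datasets).
TRAIN_CFG = {
    "optimizer": "Adam",
    "learning_rate": 3e-4,  # 0.0003 on all datasets
    "epochs": 350,
    "batch_size": 256,
    "gamma": 1.0,           # γ fixed to 1 on all datasets
}

# Hypothetical candidate grids: the paper tunes α, β, and K per dataset via
# grid search but does not report which values were searched.
ALPHA_GRID = [0.01, 0.1, 1.0]
BETA_GRID = [0.01, 0.1, 1.0]
K_GRID = [5, 10, 15]


def grid_search(evaluate):
    """Return the (alpha, beta, K) triple that maximizes `evaluate`.

    `evaluate(alpha, beta, k)` is a user-supplied callback that trains the
    model with those hyperparameters and returns a clustering score
    (e.g. accuracy or NMI on the dataset).
    """
    best, best_score = None, float("-inf")
    for alpha, beta, k in product(ALPHA_GRID, BETA_GRID, K_GRID):
        score = evaluate(alpha, beta, k)
        if score > best_score:
            best, best_score = (alpha, beta, k), score
    return best, best_score
```

In practice `evaluate` would run the full 350-epoch training under `TRAIN_CFG` and score the resulting clustering; the exhaustive `product` loop mirrors the grid-search strategy the paper describes.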