Robust Graph Contrastive Learning for Incomplete Multi-view Clustering
Authors: Deyin Zhuang, Jian Dai, Xingfeng Li, Xi Wu, Yuan Sun, Zhenwen Ren
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the six multi-view datasets demonstrate that our RGCL exhibits superiority and effectiveness compared with 9 state-of-the-art IMVC methods. The source code is available at https://github.com/DYZ163/RGCL.git. |
| Researcher Affiliation | Academia | Deyin Zhuang¹, Jian Dai², Xingfeng Li¹, Xi Wu¹, Yuan Sun³⁴, Zhenwen Ren¹ — ¹Southwest University of Science and Technology, China; ²Southwest Automation Research Institute, China; ³College of Computer Science, Sichuan University, China; ⁴National Key Laboratory of Fundamental Algorithms and Models for Engineering Numerical Simulation, Sichuan University, China. EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology in detail in Section 3, including Multi-view Reconstruction, Noise-robust Graph Contrastive Learning, Cross-view Graph-level Alignment, and Implementation, but it does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/DYZ163/RGCL.git. |
| Open Datasets | Yes | In this section, we evaluate the performance of the proposed method on the six multi-view datasets, including Handwritten [LeCun et al., 1989], COIL20 [Nene et al., 1996], BDGP [Cai et al., 2012], Land Use-21 [Yang and Newsam, 2010], ALOI-100 [Geusebroek et al., 2005], and AWA [Romera-Paredes and Torr, 2015]. |
| Dataset Splits | No | To evaluate the performance for incomplete multi-view data, we randomly set the instances with a certain ratio (i.e., [0.1, 0.3, 0.5, 0.7]) as the missing pairs. This describes how incompleteness is introduced, but not explicit training/validation/test splits for the datasets themselves. |
| Hardware Specification | Yes | For all experiments, we employ a Linux platform equipped with an NVIDIA RTX 4090 GPU and 32GB of memory |
| Software Dependencies | Yes | using PyTorch version 2.3.0. |
| Experiment Setup | Yes | To be specific, the view-specific encoder and decoder layers are configured with dimensions of (0.8dv, 0.8dv, 1500, C) and (C, 1500, 0.8dv, 0.8dv, dv), respectively. ... We set the temperature parameters to σ = 0.1 and θ = 0.05. ... the optimal values of λ, α, and β, i.e., λ = 0.5, α = 0.005 or 0.01, and β = 0.005 or 0.01. |
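The incompleteness protocol quoted under "Dataset Splits" (randomly marking a ratio of instances as missing pairs) can be sketched as follows. This is a minimal, hypothetical reconstruction — the paper's excerpt does not specify how dropped views are chosen per instance, so the rule that each masked instance keeps at least one observed view is an assumption:

```python
import numpy as np

def make_missing_mask(n_samples, n_views, missing_ratio, seed=0):
    """Hypothetical sketch of the masking protocol described in the paper:
    randomly select `missing_ratio` of the instances and drop some of their
    views. Assumption: every instance keeps at least one observed view.
    Returns a boolean (n_samples, n_views) mask; True = view observed."""
    rng = np.random.default_rng(seed)
    mask = np.ones((n_samples, n_views), dtype=bool)
    n_missing = int(n_samples * missing_ratio)
    missing_idx = rng.choice(n_samples, size=n_missing, replace=False)
    for i in missing_idx:
        # Drop a random proper, non-empty subset of views (1 .. n_views-1),
        # so the instance is incomplete but never fully unobserved.
        n_drop = int(rng.integers(1, n_views))
        drop = rng.choice(n_views, size=n_drop, replace=False)
        mask[i, drop] = False
    return mask

mask = make_missing_mask(1000, 2, 0.5)
```

With two views, dropping "a proper non-empty subset" always drops exactly one view, which matches the common paired-incompleteness setting at ratios 0.1–0.7.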
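The encoder/decoder dimensions quoted under "Experiment Setup" can be rendered as a small PyTorch sketch, where dv is the view's input dimensionality and C the number of clusters. The excerpt gives only layer widths; the activation choice (ReLU) and the use of plain fully connected layers are assumptions:

```python
import torch
import torch.nn as nn

def build_view_autoencoder(dv, C):
    """Sketch of the view-specific autoencoder sizes quoted from the paper:
    encoder widths (0.8dv, 0.8dv, 1500, C), decoder (C, 1500, 0.8dv, 0.8dv, dv).
    ReLU activations between layers are an assumption, not stated in the excerpt."""
    h = int(0.8 * dv)
    encoder = nn.Sequential(
        nn.Linear(dv, h), nn.ReLU(),
        nn.Linear(h, h), nn.ReLU(),
        nn.Linear(h, 1500), nn.ReLU(),
        nn.Linear(1500, C),
    )
    decoder = nn.Sequential(
        nn.Linear(C, 1500), nn.ReLU(),
        nn.Linear(1500, h), nn.ReLU(),
        nn.Linear(h, h), nn.ReLU(),
        nn.Linear(h, dv),
    )
    return encoder, decoder

enc, dec = build_view_autoencoder(dv=240, C=10)
z = enc(torch.randn(4, 240))       # latent of shape (4, 10)
x_hat = dec(z)                     # reconstruction of shape (4, 240)
```

One such encoder/decoder pair would be instantiated per view, since each view has its own dimensionality dv.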