Graph Embedded Contrastive Learning for Multi-View Clustering

Authors: Hongqing He, Jie Xu, Guoqiu Wen, Yazhou Ren, Na Zhao, Xiaofeng Zhu

IJCAI 2025

Reproducibility Variable — Result — LLM Response
Research Type: Experimental. Evidence: "Finally, extensive experiments demonstrate that our method achieves superior performance on both MVC and MVGC tasks." ... "3 Experiments. 3.1 Experimental setup. In this subsection, we briefly present the datasets, comparison methods, and evaluation protocol." ... "Evaluation. We leverage three metrics for comprehensive evaluation, i.e., clustering accuracy (ACC), normalized mutual information (NMI), and adjusted rand index (ARI), and report the mean results with standard deviation of 10 runs."
Researcher Affiliation: Academia. Evidence: "Hongqing He 1,3, Jie Xu 2, Guoqiu Wen 1,3, Yazhou Ren 4, Na Zhao 2, Xiaofeng Zhu 4,5. 1 Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin 541004, China; 2 Singapore University of Technology and Design, Singapore 487372, Singapore; 3 Guangxi Key Lab of Multi-source Information Mining & Security, Guilin 541004, China; 4 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; 5 Hainan University, Haikou 570228, China. Corresponding authors: Jie Xu (EMAIL) and Guoqiu Wen (EMAIL)." All listed affiliations are academic institutions (universities or public research labs).
Pseudocode: No. The paper describes the method using equations and figures (e.g., Figure 1 for the GMVC framework and equations for the objective functions), but no explicit "Pseudocode" or "Algorithm" block is provided.
Open Source Code: Yes. Evidence: "Corresponding authors: Jie Xu (EMAIL) and Guoqiu Wen (EMAIL). Code is available at https://github.com/SubmissionsIn/GMVC."
Open Datasets: Yes. Evidence: "Datasets. We conduct experiments on 8 public benchmarks, including 4 multi-view datasets, i.e., DHA [Lin et al., 2012], NGs [Hussain et al., 2010], WebKB [Sun et al., 2007], Caltech [Fei-Fei et al., 2004], and 4 multi-graph datasets, i.e., ACM [Jin et al., 2021], IMDB [Jin et al., 2021], Texas [Ling et al., 2023], Chameleon [Ling et al., 2023]."
Dataset Splits: No. The main text states: "More information can be found in the supplementary materials of our GMVC code." and "report the mean results with standard deviation of 10 runs." However, it does not provide specific percentages, sample counts, or an explicit methodology for train/validation/test splits in the main paper.
Hardware Specification: No. The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor speeds, or memory amounts) used for running its experiments.
Software Dependencies: No. The paper does not explicitly state software dependencies with version numbers, such as the Python version, deep learning framework versions (e.g., PyTorch, TensorFlow), or other library versions used for the implementation.
Experiment Setup: Yes. Evidence: "In our GMVC framework, we employ the non-negative parameters n and γ to control the GGC loss as shown in Eq. (16), where n selects the top n elements in each row of A to identify the additional positive sample pairs, and γ balances the contrastive losses between the original and additional positive sample pairs." ... "In our experiments, we set n = 4 for all multi-view datasets and n = 6 for all multi-graph datasets." ... "In our experiments, γ is set within the range [10^-3, 10^-1]." ... "Evaluation. We leverage three metrics for comprehensive evaluation, i.e., clustering accuracy (ACC), normalized mutual information (NMI), and adjusted rand index (ARI), and report the mean results with standard deviation of 10 runs."
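The top-n selection described in the Experiment Setup excerpt can be sketched as follows. This is a minimal illustration, assuming A is a dense symmetric affinity matrix over samples; `select_additional_positives` is a hypothetical helper written for this report, not code from the GMVC repository.

```python
import numpy as np

def select_additional_positives(A: np.ndarray, n: int) -> np.ndarray:
    """Return a boolean mask where mask[i, j] is True iff j is among the
    top-n entries of row i of the affinity matrix A (excluding i itself).
    Illustrative sketch of the top-n positive-pair selection described above."""
    A = A.astype(float).copy()
    np.fill_diagonal(A, -np.inf)  # a sample is never its own "additional" positive
    # Indices of the n largest entries per row (argpartition avoids a full sort).
    top_idx = np.argpartition(-A, n - 1, axis=1)[:, :n]
    mask = np.zeros_like(A, dtype=bool)
    rows = np.arange(A.shape[0])[:, None]
    mask[rows, top_idx] = True
    return mask

# Toy affinity matrix over 5 samples.
rng = np.random.default_rng(0)
S = rng.random((5, 5))
S = (S + S.T) / 2                      # symmetrize
pos_mask = select_additional_positives(S, n=2)
print(pos_mask.sum(axis=1))            # each row selects exactly n=2 positives
```

In GMVC, pairs flagged by such a mask would be treated as extra positives in the contrastive loss, with γ weighting their contribution against the original positives.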
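The evaluation protocol (ACC, NMI, and ARI, averaged over repeated runs) can be reproduced with standard tooling. The sketch below uses scikit-learn for NMI/ARI and the Hungarian algorithm (via SciPy) for clustering accuracy, which is the usual convention; the paper does not specify its exact implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred) -> float:
    """ACC: fraction of correct assignments under the optimal one-to-one
    matching between predicted cluster labels and ground-truth labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    # Contingency table: counts of (true label, predicted label) pairs.
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    row, col = linear_sum_assignment(-cost)  # negate to maximize matches
    return cost[row, col].sum() / len(y_true)

# A predicted partition identical to the truth up to a label permutation
# should score 1.0 on all three metrics.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]
acc = clustering_accuracy(y_true, y_pred)
nmi = normalized_mutual_info_score(y_true, y_pred)
ari = adjusted_rand_score(y_true, y_pred)
print(acc, nmi, ari)  # 1.0 1.0 1.0
```

To match the paper's protocol one would run the clustering 10 times with different seeds and report the mean and standard deviation of each metric.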