Strategy-Architecture Synergy: A Multi-View Graph Contrastive Paradigm for Consistent Representations

Authors: Shuman Zhuang, Zhihao Wu, Yuhong Chen, Zihan Fang, Jiali Yin, Ximeng Liu

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on downstream tasks, including node classification and clustering, validate the superiority of our proposed model."
Researcher Affiliation | Academia | (1) College of Computer and Data Science, Fuzhou University, Fuzhou, China; (2) College of Computer Science and Technology, Zhejiang University, Hangzhou, China; (3) Institute of Artificial Intelligence, Xiamen University, Xiamen, China
Pseudocode | Yes | "The algorithm procedure of CAMEL is given in Appendix A."
Open Source Code | No | No explicit statement of, or link to, an open-source code release is provided in the paper.
Open Datasets | Yes | "To assess the performance of CAMEL, we conduct experiments on three types of datasets, including four multi-relational datasets (ACM, DBLP, IMDB, YELP), two multi-attribute datasets (COIL20, Noisy MNIST), and two multi-modality datasets (Iaprtc12, NUS-Wide)."
Dataset Splits | Yes | "Classification results for four multi-relational graphs, with a training ratio of 20%, are detailed in Table 1. We run six multi-view classification methods with a training rate of 10%, and the results are presented in Table 2."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not list the specific software dependencies (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | No | The paper states that "The implementation details and parameter settings are introduced in Appendix D." and discusses parameter sensitivity in Section 4.5, but concrete hyperparameter values and training configurations are not given in the main text.