Beyond Homophily: Graph Contrastive Learning with Macro-Micro Message Passing

Authors: Yiyuan Chen, Donghai Guan, Weiwei Yuan, Tianzi Zang

AAAI 2025

Each entry below lists a reproducibility variable, its assessed result, and the LLM response supporting that assessment.
Research Type: Experimental. Experiments demonstrate that M3P-GCL outperforms both supervised and unsupervised baselines on the node classification task across datasets with different levels of homophily.
Researcher Affiliation: Academia. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics.
Pseudocode: No. The paper describes its methods (the APS-VE and ASP strategies) in detail in the 'Proposed Method' section using mathematical formulas, but does not present them in a clearly labeled pseudocode or algorithm block. Figure 3 illustrates the ASP strategy with a diagram, not pseudocode.
Open Source Code: No. The paper neither contains an explicit statement about making the source code available nor provides a link to a code repository.
Open Datasets: Yes. To evaluate the performance of different methods, we use seven public real-world datasets with varying levels of homophily: homophilous datasets Cora, CiteSeer, and PubMed (Yang, Cohen, and Salakhutdinov 2016), and non-homophilous datasets Cornell, Texas, Wisconsin, and Actor (Pei et al. 2020).
Dataset Splits: Yes. We use the fixed splits for the three homophilous datasets as established by Yang, Cohen, and Salakhutdinov (2016), and for the four non-homophilous datasets, we adhere to the splits defined by Pei et al. (2020).
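The fixed Planetoid splits of Yang, Cohen, and Salakhutdinov (2016) are conventionally stored as boolean node masks: 20 labeled nodes per class for training, 500 nodes for validation, and 1000 nodes for testing. The sketch below (not the authors' code; the function name and mask-building order are illustrative assumptions) shows how such masks are typically constructed:

```python
def public_split_masks(labels, num_classes, num_val=500, num_test=1000):
    """Build train/val/test boolean masks in the Planetoid convention:
    20 labeled nodes per class for training, the next num_val nodes
    for validation, and the last num_test nodes for testing."""
    n = len(labels)

    # Training mask: the first 20 nodes of each class, in node order.
    train_mask = [False] * n
    per_class = {c: 0 for c in range(num_classes)}
    for i, y in enumerate(labels):
        if per_class[y] < 20:
            train_mask[i] = True
            per_class[y] += 1

    # Validation mask: the first num_val nodes not used for training.
    val_mask = [False] * n
    taken = 0
    for i in range(n):
        if not train_mask[i] and taken < num_val:
            val_mask[i] = True
            taken += 1

    # Test mask: the last num_test nodes.
    test_mask = [i >= n - num_test for i in range(n)]
    return train_mask, val_mask, test_mask
```

For Cora (2708 nodes, 7 classes) this convention yields 140 training, 500 validation, and 1000 test nodes; PyTorch Geometric exposes the same information as `data.train_mask`, `data.val_mask`, and `data.test_mask`.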
Hardware Specification: Yes. OOM indicates Out-Of-Memory on a 24 GB GPU.
Software Dependencies: No. We implement our proposed framework and baselines using PyTorch (Ansel et al. 2024) and PyTorch Geometric (Fey and Lenssen 2019) with an Adam optimizer (Kingma and Ba 2014). Specific version numbers for PyTorch and PyTorch Geometric are not provided, only citations to the frameworks themselves.
Experiment Setup: Yes. The hyperparameters we tune include: (1) learning rate lr ∈ {1e-2, 1e-3, 1e-4}, (2) number of attribute nearest neighbors k ∈ {5, 10, 30, 50, 70}, (3) the role weighting factor of self-loops ω ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0}, (4) global hop g ∈ {1, 3, 5, 10, 20, 30} for the priority view, and (5) temperature parameter τ ∈ {0.2, 0.5, 0.8, 1.0, 1.6, 2.0, 6.0}. We set a patience of 20 epochs and a maximum of 500 epochs for early stopping.
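The search grid and early-stopping rule above can be sketched as follows. The hyperparameter values come from the paper, but the dictionary keys, helper names, and stopping logic are illustrative assumptions, not the authors' implementation:

```python
import itertools

# Search space reported in the paper (keys are illustrative names).
search_space = {
    "lr":    [1e-2, 1e-3, 1e-4],                   # learning rate
    "k":     [5, 10, 30, 50, 70],                  # attribute nearest neighbors
    "omega": [0.0, 0.2, 0.4, 0.6, 0.8, 1.0],       # self-loop role weighting
    "g":     [1, 3, 5, 10, 20, 30],                # global hop for priority view
    "tau":   [0.2, 0.5, 0.8, 1.0, 1.6, 2.0, 6.0],  # temperature
}

def grid(space):
    """Yield every hyperparameter combination as a dict."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

class EarlyStopping:
    """Stop after `patience` epochs without validation improvement,
    capped at `max_epochs` (paper: patience 20, maximum 500 epochs)."""

    def __init__(self, patience=20, max_epochs=500):
        self.patience, self.max_epochs = patience, max_epochs
        self.best, self.bad_epochs, self.epoch = float("-inf"), 0, 0

    def step(self, val_score):
        """Record one epoch's validation score; return True to keep training."""
        self.epoch += 1
        if val_score > self.best:
            self.best, self.bad_epochs = val_score, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs < self.patience and self.epoch < self.max_epochs
```

With these grids, an exhaustive sweep covers 3 × 5 × 6 × 6 × 7 = 3780 configurations, which is why patience-based early stopping matters for keeping each run short.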