Edge Contrastive Learning: An Augmentation-Free Graph Contrastive Learning Model
Authors: Yujun Li, Hongyuan Zhang, Yuan Yuan
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that, compared with recent state-of-the-art GCL methods and even some supervised GNNs, AFECL achieves SOTA performance on link prediction and on semi-supervised node classification with extremely scarce labels. |
| Researcher Affiliation | Collaboration | 1. School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, P.R. China; 2. Institute of Artificial Intelligence (TeleAI), China Telecom, P.R. China |
| Pseudocode | Yes | Algorithm 1: The pseudo-code for the proposed AFECL |
| Open Source Code | Yes | Code: https://github.com/YujunLi361/AFECL |
| Open Datasets | Yes | In our experiments, there are eight benchmark datasets of node classification in total, which have been widely used in previous GCL methods. The homophilic graph datasets comprise three citation networks, Cora, Citeseer, and Pubmed (Sen et al. 2008), a co-author network, Coauthor CS (Shchur et al. 2018), and a co-purchase network, Amazon-Photo (Shchur et al. 2018). For heterophilic graphs, we adopt Actor, Chameleon, and a larger graph, Penn94. |
| Dataset Splits | Yes | In scenarios with extremely limited labels, where the number of training nodes per class c is selected from {1, 2, 3, 4}, we conducted experiments following (Shen et al. 2023). Besides, we also followed (Li, Han, and Wu 2018; Li et al. 2019) in refraining from using a validation set with additional labels for model selection. To further verify whether the proposed method works, we conducted experiments with relatively sufficient labels; specifically, we randomly select 20 training nodes per class. For citation networks, we followed (Yang, Cohen, and Salakhudinov 2016), which selects 500 nodes per class for validation and the rest of the nodes for testing. For other graph networks, we followed (Liu, Gao, and Ji 2020), which selects 30 nodes per class for validation and the rest of the nodes for testing. For the large graph dataset Penn94, we follow (Yang and Mirzasoleiman 2024) to verify the scalability of AFECL. A further detailed introduction can be found in Appendix A. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | Yes | The proposed model was implemented using PyTorch 1.13.1 (Paszke et al. 2019) and Deep Graph Library 1.1.2 (Wang et al. 2019), and trained with the Adam optimizer on all datasets. |
| Experiment Setup | No | The detailed hyperparameters, along with a more detailed hyperparameter analysis, are in Appendix A; the main text does not specify them. |
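The Dataset Splits row describes an extremely-limited-label protocol: pick c training nodes per class (c ∈ {1, 2, 3, 4}), optionally reserve a fixed number of validation nodes per class, and test on the rest. A minimal sketch of such a per-class split, assuming a NumPy label array (the function name `few_label_split` and its signature are hypothetical, not from the paper's code):

```python
import numpy as np

def few_label_split(labels, c, val_per_class=0, seed=0):
    """Randomly select c training nodes per class, optionally
    val_per_class validation nodes per class; all remaining
    nodes form the test set. Returns sorted index lists."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train, val = [], []
    for cls in np.unique(labels):
        # shuffle the indices of this class, then slice off splits
        idx = rng.permutation(np.flatnonzero(labels == cls))
        train.extend(idx[:c].tolist())
        val.extend(idx[c:c + val_per_class].tolist())
    test = sorted(set(range(len(labels))) - set(train) - set(val))
    return sorted(train), sorted(val), test
```

With `val_per_class=0` this matches the no-validation-set setting the paper adopts from (Li, Han, and Wu 2018; Li et al. 2019); setting c=20 and a nonzero `val_per_class` gives the relatively-sufficient-label setting.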