Universal Graph Self-Contrastive Learning

Authors: Liang Yang, Yukun Cai, Hui Ning, Jiaming Zhuo, Di Jin, Ziyi Ma, Yuanfang Guo, Chuan Wang, Zhen Wang

IJCAI 2025

Reproducibility assessment (each variable is listed with its result and the supporting LLM response):
Research Type: Experimental
LLM response: "In this section, to begin with, the proposed framework GRASS is validated by empirically evaluating its performance on the node classification task. Next, an in-depth understanding of the efficacy of this framework is provided through several experiment analyses. ... The comparison of accuracy between GRASS and the baselines on six homophilic graphs is shown in Table 2."
Researcher Affiliation: Academia
LLM response: "Liang Yang1, Yukun Cai1, Hui Ning1, Jiaming Zhuo1, Di Jin2, Ziyi Ma1, Yuanfang Guo3, Chuan Wang4 and Zhen Wang5. 1Hebei Province Key Laboratory of Big Data Calculation, School of Artificial Intelligence, Hebei University of Technology, Tianjin, China; 2College of Intelligence and Computing, Tianjin University, Tianjin, China; 3School of Computer Science and Engineering, Beihang University, Beijing, China; 4School of Computer Science and Technology, Beijing Jiaotong University, Beijing, China; 5School of Artificial Intelligence, Optics and Electronics (iOPEN), School of Cybersecurity, Northwestern Polytechnical University, Xi'an, China. EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL"
Pseudocode: No
LLM response: The paper describes the methodology using mathematical equations and textual explanations (Section 4: Methodology) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: No
LLM response: The paper does not contain any statement about open-sourcing the code or provide a link to a code repository.
Open Datasets: Yes
LLM response: "Experiments are conducted on twelve widely used benchmark datasets with various homophily. The homophilic graph datasets include Cora, CiteSeer, PubMed, Wiki-CS, Amazon-Computers (abbreviated as Computers), and Amazon-Photo (abbreviated as Photo). The heterophilic graph datasets include Chameleon, Squirrel, Actor, Cornell, Texas, and Wisconsin. The statistics of these datasets are summarised in Table 1. Cora, CiteSeer, and PubMed [Sen et al., 2008] are three citation network datasets... Wiki-CS [Mernyei and Cangea, 2020] is a hyperlink network... Computers and Photo [Shchur et al., 2018] are co-purchase networks... Chameleon and Squirrel [Pei et al., 2020] are two Wikipedia networks... Actor [Pei et al., 2020] is an actor co-occurrence network... Cornell, Texas, and Wisconsin [Pei et al., 2020] are networks of web pages..."
Dataset Splits: Yes
LLM response: "For homophilic graphs, all nodes are randomly divided into three parts: 10% of nodes for training, 10% for validation, and the remaining 80% for testing. The performance on heterophilic graph datasets is evaluated on the commonly used 48%/32%/20% training/validation/testing splits."
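The random 10%/10%/80% homophilic-graph split described above can be sketched in a few lines. This is a minimal illustration only: the function name, the fixed seed, and the use of plain Python (rather than the authors' actual code, which is not released) are all assumptions.

```python
import random

def split_nodes(num_nodes, train_frac=0.10, val_frac=0.10, seed=0):
    """Randomly partition node indices into train/val/test sets.

    Sketch of the 10%/10%/80% split the paper reports for
    homophilic graphs; names and seed are illustrative.
    """
    rng = random.Random(seed)
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(num_nodes * train_frac)
    n_val = int(num_nodes * val_frac)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

# Cora has 2,708 nodes (Table 1 of the paper reports dataset statistics).
train, val, test = split_nodes(2708)
print(len(train), len(val), len(test))  # 270 270 2168
```

The same helper with fractions 0.48 and 0.32 would reproduce the 48%/32%/20% heterophilic-graph splits.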
Hardware Specification: Yes
LLM response: "The experiments are performed on NVIDIA GeForce RTX 3090 (24GB) GPU cards."
Software Dependencies: No
LLM response: The paper mentions using the Adam optimizer but does not specify any software names or versions for libraries, frameworks, or programming languages.
Experiment Setup: Yes
LLM response: "The training epoch is 200 with full-batch training. For hyperparameter settings, the learning rates are tuned in the range {0.1, 0.05, 0.01, 0.005, 0.001}. Besides, the weight decay is tuned from {0.0, 0.001, 0.005, 0.01, 0.1}. Finally, the representation dimension is tuned in the range {256, 512, 1024, 2048, 4096}."
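The reported search ranges imply at most a 5 x 5 x 5 space of configurations. The sketch below enumerates that space; note that exhaustive grid search is an assumption, since the paper says only that each hyperparameter is "tuned in the range" without stating the tuning procedure.

```python
from itertools import product

# Search ranges as reported in the paper's experiment setup.
learning_rates = [0.1, 0.05, 0.01, 0.005, 0.001]
weight_decays = [0.0, 0.001, 0.005, 0.01, 0.1]
dims = [256, 512, 1024, 2048, 4096]

# Full Cartesian product of (learning rate, weight decay, dimension).
grid = list(product(learning_rates, weight_decays, dims))
print(len(grid))  # 125 candidate configurations
```

In practice the authors may have tuned each hyperparameter independently or searched a subset; this enumeration only shows the size of the stated search space.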