Negative Metric Learning for Graphs

Authors: Yiyang Zhao, Chengpei Wu, Lilin Zhang, Ning Yang

IJCAI 2025

Reproducibility Assessment (each entry lists Variable: Result, with the LLM response indented below)
Research Type: Experimental
    The extensive experiments conducted on widely used benchmarks verify the superiority of the proposed method.
Researcher Affiliation: Academia
    Sichuan University
Pseudocode: Yes
    Algorithm 1: Training process of our proposed NML-GCL
    Input: a graph G; an encoder E; a negative metric network M; the number of training epochs TE for outer minimization; the number of iterations TM for inner minimization
    Output: the optimal encoder E
     1: Initialize parameters of E and M;
     2: for i = 1, 2, ..., TE do
     3:     Randomly generate contrastive views G^U and G^V from G;
     4:     Generate node embeddings U and V by E;
     5:     // Training negative metric network
     6:     Freeze parameters of E;
     7:     for j = 1, 2, ..., TM do
     8:         Update M according to Equation (7);
     9:     end for
    10:     // Training encoder
    11:     Freeze parameters of M;
    12:     Update E according to Equation (7);
    13: end for
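The alternating min-min structure of Algorithm 1 can be sketched in PyTorch. Everything below is a hypothetical stand-in: the tiny linear `encoder` and MLP `metric_net` replace the paper's two-layer GCN and NMN, and `loss_fn` is a placeholder since the real objective is the paper's Equation (7), which this report does not reproduce. Only the freeze/update schedule mirrors the algorithm.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-ins for the paper's encoder E and negative metric network M.
encoder = nn.Linear(8, 4)
metric_net = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))

opt_E = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_M = torch.optim.Adam(metric_net.parameters(), lr=1e-3)

def loss_fn(u, v):
    # Placeholder objective; the paper optimizes Equation (7) instead.
    pair = torch.cat([u, v], dim=-1)
    return metric_net(pair).mean() + (u - v).pow(2).mean()

T_E, T_M = 3, 2            # TE outer epochs, TM inner iterations
x = torch.randn(16, 8)     # stand-in node features

for i in range(T_E):
    # Steps 3-4: generate two contrastive views (feature noise stands in for
    # graph augmentation here) and embed them with E.
    u = encoder(x + 0.1 * torch.randn_like(x))
    v = encoder(x + 0.1 * torch.randn_like(x))

    # Steps 5-9: inner minimization, training M with E frozen.
    for p in encoder.parameters():
        p.requires_grad_(False)
    for j in range(T_M):
        opt_M.zero_grad()
        loss_fn(u.detach(), v.detach()).backward()
        opt_M.step()
    for p in encoder.parameters():
        p.requires_grad_(True)

    # Steps 10-12: outer minimization, training E with M frozen.
    for p in metric_net.parameters():
        p.requires_grad_(False)
    opt_E.zero_grad()
    loss = loss_fn(u, v)
    loss.backward()
    opt_E.step()
    for p in metric_net.parameters():
        p.requires_grad_(True)
```

Detaching `u` and `v` in the inner loop keeps gradients from reaching the frozen encoder, while the outer step reuses the same views so the encoder is updated against the freshly trained metric network.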
Open Source Code: No
    The paper does not contain any explicit statement about open-sourcing the code or provide a link to a code repository.
Open Datasets: Yes
    We conduct experiments on six publicly available and widely used benchmark datasets, including three citation networks (Cora, CiteSeer, PubMed) [Yang et al., 2016], two Amazon co-purchase networks (Photo, Computers) [Shchur et al., 2018], and one Wikipedia-based network, Wiki-CS [Mernyei and Cangea, 2020].
Dataset Splits: Yes
    We follow the public splits on Cora, CiteSeer, and PubMed, and a 1:1:8 training/validation/testing split on the other datasets.
Hardware Specification: Yes
    OOM means out of memory on a 24GB GPU.
Software Dependencies: No
    The paper mentions software components such as GCN, MLP, and the Adam optimizer, but does not provide specific version numbers for any software dependencies.
Experiment Setup: Yes
    In NML-GCL, the encoder E is implemented as a two-layer GCN with embedding dimensionality d = 512, and the NMN is implemented as an MLP with two hidden layers, each of which consists of 512 neurons. We apply the Adam optimizer for all the GCL methods and the classifier. ... The detailed hyperparameter settings are shown in Table ?? in Appendix ??.
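A minimal sketch of that architecture description is below, assuming a simplified dense GCN layer (degree normalization omitted) and a scalar output head for the NMN, which the excerpt does not specify. The demo shrinks d from the paper's 512 to keep it light.

```python
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    """Simplified GCN layer computing A @ X @ W (adjacency normalization omitted)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return adj @ self.lin(x)

class Encoder(nn.Module):
    """Two-layer GCN encoder; the paper sets embedding dimensionality d = 512."""
    def __init__(self, in_dim, d):
        super().__init__()
        self.gc1 = DenseGCNLayer(in_dim, d)
        self.gc2 = DenseGCNLayer(d, d)

    def forward(self, x, adj):
        return self.gc2(torch.relu(self.gc1(x, adj)), adj)

def make_nmn(in_dim, hidden=512, out_dim=1):
    """MLP with two hidden layers of `hidden` neurons each, per the paper;
    the scalar output head is an assumption."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

# Smoke test on a tiny random graph (d shrunk from 512 to 32 for the demo).
x = torch.randn(5, 16)          # 5 nodes, 16 input features each
adj = torch.eye(5)              # placeholder adjacency (self-loops only)
emb = Encoder(16, d=32)(x, adj)
score = make_nmn(2 * 32, hidden=64)(torch.cat([emb, emb], dim=-1))
```

Feeding the NMN a concatenated pair of embeddings (hence the `2 * 32` input width) is one plausible reading of a learned metric over node pairs; the paper's exact input construction is not stated in this excerpt.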