N2GON: Neural Networks for Graph-of-Net with Position Awareness
Authors: Yejiang Wang, Yuhai Zhao, Zhengkui Wang, Wen Shan, Ling Li, Qian Li, Miaomiao Huang, Meixia Wang, Shirui Pan, Xingwei Wang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct an extensive experimental evaluation of N2GON, examining its performance across a wide range of datasets from various domains... The results show that our model significantly outperforms SOTA baselines. [...] Table 1 displays the node classification results comparing our N2GON algorithm with various established graph representation learning algorithms. [...] Ablation Study. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Northeastern University, China 2Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, China 3Info Comm Technology Cluster, Singapore Institute of Technology, SIT X NVIDIA AI Centre, Singapore 4Singapore University of Social Sciences, Singapore 5Shanxi University, China 6Shandong University, China 7Griffith University, Australia. |
| Pseudocode | Yes | Algorithm 1 N2GON. Input: GON G_N; F1, F2: the backbone encoders. While not converged do: sample a full batch of node-graphs {G_i}_{i=1}^N from G_N; encode {G_i}_{i=1}^N by F1 (Eq. 2) to get {h_{G_i}}_{i=1}^N; update {h_{G_i}}_{i=1}^N by F2 (Eq. 3) to get {v_{G_i}}_{i=1}^N; generate constraint net C (Eq. 4) and Π (Eq. 6); calculate the constraint loss L_con (Eq. 7) and the NLL loss, and optimize the encoders F1, F2. Output: the trained model F1, F2. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of its own source code, nor does it include a link to a code repository for the methodology described. |
| Open Datasets | Yes | To evaluate the effectiveness of N2GON across a spectrum of datasets, we conducted a comprehensive analysis using 9 benchmark network datasets. [...] three standard homogeneous citation datasets, as mentioned in (Kipf & Welling, 2017), as well as six well-known heterogeneous datasets, referenced in (Pei et al., 2020). [...] biomedical datasets, which include 7 datasets from diverse domains: Drug-Target Interaction (DTI) with datasets DAVIS and KIBA (Davis et al., 2011; Tang et al., 2014), Drug-Drug Interaction (DDI) with Twosides (Tatonetti et al., 2012), Protein-Protein Interaction (PPI) with HuRI (Luck et al., 2020), Peptide-MHC Binding Prediction (PEPMHC) with MHC-I (Nielsen & Andreatta, 2016), MicroRNA-Target Interaction (MTI) with miRTarBase (Chou et al., 2018), and TCR-Epitope Binding Affinity (TCR) with Weber (Weber et al., 2021). |
| Dataset Splits | Yes | The edge splits for training, validation, and testing datasets were uniformly distributed across all methods using an 80/10/10 ratio. |
| Hardware Specification | Yes | Our experimental setup consisted of a server equipped with two NVIDIA A6000 GPUs running Ubuntu 20.04. |
| Software Dependencies | No | In this study, we utilized PyTorch to implement our methodology. Our experimental setup consisted of a server equipped with two NVIDIA A6000 GPUs running Ubuntu 20.04. The paper mentions PyTorch and Ubuntu but does not specify their version numbers. |
| Experiment Setup | Yes | We configured the hidden dimension of N2GON as 32 for the nine datasets. We tune the hops k from {1, 2, ..., 6}. We determined the layer counts for Encoder I and Encoder II, denoted as L1 and L2, by selecting from the set {1, 2, 3}. The selection of the probability parameter α ranged from {0.1, ..., 0.6}, while the parameter temperature τ was chosen from a range of {0.1, ..., 0.5}. For the training phase, the Adam optimizer (Kingma & Ba, 2014) was utilized. |
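The training loop quoted in the Pseudocode row can be sketched in PyTorch, matching the reported setup (hidden dimension 32, Adam optimizer). This is a minimal sketch under loose assumptions: the two `nn.Linear` encoders, the `constraint_loss` surrogate, and all names here are placeholders, since the paper's Eqs. (2)-(7) (the actual encoders, constraint net C, assignment Π, and loss L_con) are not reproduced in this report.

```python
# Hedged sketch of Algorithm 1 (N2GON training loop). Encoder architectures
# and the constraint loss are stand-ins, NOT the paper's actual Eqs. (2)-(7).
import torch
import torch.nn as nn

class N2GONSketch(nn.Module):
    def __init__(self, in_dim, hidden_dim=32):  # hidden dim 32 per the paper
        super().__init__()
        self.encoder1 = nn.Linear(in_dim, hidden_dim)      # stand-in for Encoder I (Eq. 2)
        self.encoder2 = nn.Linear(hidden_dim, hidden_dim)  # stand-in for Encoder II (Eq. 3)

    def forward(self, node_graph_feats):
        h = torch.relu(self.encoder1(node_graph_feats))  # {h_Gi}
        v = self.encoder2(h)                             # {v_Gi}
        return h, v

def constraint_loss(v):
    # Placeholder for the constraint net C (Eq. 4), Π (Eq. 6), and
    # L_con (Eq. 7); here, a simple smoothness surrogate over embeddings.
    return ((v[1:] - v[:-1]) ** 2).mean()

def train_step(model, classifier, optimizer, feats, labels):
    _, v = model(feats)
    nll = nn.functional.cross_entropy(classifier(v), labels)  # NLL term
    loss = nll + constraint_loss(v)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: 8 node-graphs with 16-dim features, 4 classes.
torch.manual_seed(0)
model = N2GONSketch(in_dim=16)
classifier = nn.Linear(32, 4)
opt = torch.optim.Adam(
    list(model.parameters()) + list(classifier.parameters()), lr=1e-3
)
feats = torch.randn(8, 16)
labels = torch.randint(0, 4, (8,))
loss0 = train_step(model, classifier, opt, feats, labels)
```

In the full method, each `train_step` would consume a full batch of sampled node-graphs and the constraint term would be built from the generated constraint net, as the pseudocode describes.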