Implicit Subgraph Neural Network

Authors: Yongjian Zhong, Liao Zhu, Hieu Vu, Bijaya Adhikari

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our approach on real-world networks against state-of-the-art baselines, demonstrating its effectiveness and superiority. Our code is available at https://github.com/MLonGraph/ISNN. 4. Experiments. Dataset: We assess our model's performance and scalability by benchmarking it against various subgraph classification baselines on four real-world datasets. The results are presented in Tables 2 and 3, demonstrating that our model outperforms the baselines across almost all datasets and metrics.
Researcher Affiliation | Academia | Department of Computer Science, University of Iowa, Iowa City, USA. Correspondence to: Yongjian Zhong <EMAIL>, Bijaya Adhikari <EMAIL>.
Pseudocode | Yes | Algorithm 1: ISNN Training Algorithm
1: Input: Graph Ĝ = (V ∪ V_s, E ∪ E_s ∪ E_ns, X̂), learning rate η, hyperparameter γ
2: Z_1 ← 0
3: for i = 1, ..., T do
4:   Ẑ^i_0 ← Z_i
5:   for j = 1, ..., K do
6:     Ẑ^i_j ← f(Ẑ^i_{j−1}, Ĝ; ·)
7:   end for
8:   g := g(x_i, Z_i) − g_K(x_i, Ẑ^i_K)
9:   F_γ := F(x_i; Z_i) + γ g
10:  (x_{i+1}, Z_{i+1}) ← Proj((x_i, Z_i) − η ∇F_γ)
11: end for
12: Return: (x_T, Z_T)
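The core of the excerpted pseudocode, lines 4-7, is a K-step inner fixed-point iteration Ẑ ← f(Ẑ, Ĝ), the defining computation of an implicit GNN. The sketch below illustrates only that inner loop on a toy graph; the map `fixed_point_layer` is an illustrative contractive stand-in, not the paper's actual f, and the graph, weights, and constants are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_point_layer(Z, A, W, X):
    """Stand-in for the map f(Z, G) in Algorithm 1: one graph-propagation
    step. tanh bounds the output and the small-norm W keeps the map a
    contraction, so repeated application converges to a fixed point."""
    return np.tanh(A @ Z @ W + X)

def inner_loop(A, W, X, K):
    """Lines 4-7 of Algorithm 1: K fixed-point steps starting from Z = 0."""
    Z = np.zeros_like(X)
    for _ in range(K):
        Z = fixed_point_layer(Z, A, W, X)
    return Z

# Toy input: row-normalized adjacency of a 6-node cycle, random features.
n, d = 6, 4
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
A /= A.sum(axis=1, keepdims=True)
W = 0.1 * rng.standard_normal((d, d))  # small spectral norm -> contraction
X = rng.standard_normal((n, d))

Z5 = inner_loop(A, W, X, K=5)
Z50 = inner_loop(A, W, X, K=50)

# Residual ||Z - f(Z)|| measures how close the iterate is to the fixed
# point; lines 8-9 of Algorithm 1 penalize such a gap with weight gamma.
res5 = np.linalg.norm(Z5 - fixed_point_layer(Z5, A, W, X))
res50 = np.linalg.norm(Z50 - fixed_point_layer(Z50, A, W, X))
print(f"fixed-point residual at K=5:  {res5:.2e}")
print(f"fixed-point residual at K=50: {res50:.2e}")
```

Running more inner steps drives the residual toward zero, which is why the penalty term in line 9 (weighted by γ) can trade off exactness of the fixed point against compute.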
Open Source Code | Yes | Our code is available at https://github.com/MLonGraph/ISNN
Open Datasets | Yes | Dataset: We assess our model's performance and scalability by benchmarking it against various subgraph classification baselines on four real-world datasets. Following the experimental setup of SubGNN (Alsentzer et al., 2020), we evaluate our approach on PPI-BP, HPO-METAB, HPO-NEURO, and EM-USER. The statistics of the datasets are summarized in Table 1.
Dataset Splits | No | The paper mentions using specific datasets (PPI-BP, HPO-METAB, HPO-NEURO, EM-USER) and following the experimental setup of SubGNN (Alsentzer et al., 2020). However, it does not explicitly state the training, validation, or test split percentages or sample counts in the main text.
Hardware Specification | Yes | We conducted experiments on an AMD EPYC 7763 64-core processor with 2 TB of memory and on 8 NVIDIA A30 GPUs.
Software Dependencies | No | The paper mentions employing a 2-layer GNN and a 2-layer MLP for classification, but it does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers.
Experiment Setup | Yes | Hyperparameter: For all datasets, we employ a 2-layer GNN and a 2-layer MLP for classification, with a fixed hidden dimension of 64, consistent with the SubGNN settings. For our method, we vary γ over {0.0001, 0.001, 0.01} and the number of inner-loop iterations K over {1, 2, 3, 4, 5}. Configuration: We rerun each experiment 10 times (maximum of 1500 epochs) and report the average performance.
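The reported sweep (γ over three values, K over five, 10 reruns per configuration, up to 1500 epochs each) can be sketched as a small grid search. The grid values come from the quoted setup; `run_experiment` is a hypothetical placeholder for the actual training routine, which the excerpt does not specify.

```python
from itertools import product

# Grid values quoted from the paper's setup.
GAMMAS = [1e-4, 1e-3, 1e-2]    # penalty weight gamma
INNER_STEPS = [1, 2, 3, 4, 5]  # inner-loop iterations K
N_RERUNS = 10                  # each configuration rerun 10 times
MAX_EPOCHS = 1500              # cap on epochs per run

def run_experiment(gamma, K, seed, max_epochs=MAX_EPOCHS):
    """Hypothetical stand-in: train ISNN with (gamma, K) and return a
    validation score. Substitute the real training/evaluation loop."""
    return 0.0

def sweep():
    """Evaluate every (gamma, K) pair, averaging scores over reruns,
    mirroring the paper's 'rerun each experiment 10 times and report
    the average performance' protocol."""
    results = {}
    for gamma, K in product(GAMMAS, INNER_STEPS):
        scores = [run_experiment(gamma, K, seed) for seed in range(N_RERUNS)]
        results[(gamma, K)] = sum(scores) / N_RERUNS
    return results

results = sweep()
print(f"{len(results)} configurations evaluated")  # 3 gammas x 5 K values
```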