Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Personalized Layer Selection for Graph Neural Networks

Authors: Kartik Sharma, Vineeth Rakesh, Yingtong Dou, Srijan Kumar, Mahashweta Das

TMLR 2025 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Results on 10 datasets and 3 different GNNs show that we significantly improve the node classification accuracy of GNNs in a plug-and-play manner. We also find that using variable layers for prediction enables GNNs to be deeper and more robust to poisoning attacks." |
| Researcher Affiliation | Collaboration | Kartik Sharma (Georgia Institute of Technology); Vineeth Rakesh (Visa Research); Yingtong Dou (Visa Research); Srijan Kumar (Georgia Institute of Technology); Mahashweta Das (Visa Research) |
| Pseudocode | Yes | "Algorithm 1 describes the training steps in more detail." |
| Open Source Code | No | "Code will be open-sourced after publication." |
| Open Datasets | Yes | "We consider 4 standard homophilic co-citation network datasets: Cora, Citeseer, Pubmed (Kipf & Welling, 2016), and ogbn-arxiv (ogba) (Hu et al., 2020), where each node represents a paper that is classified based on its topic area. We also used 6 heterophilic datasets: Actor, Chameleon, Squirrel, Cornell, Wisconsin, Texas (Pei et al., 2020)." |
| Dataset Splits | Yes | "Following Pei et al. (2020), we evaluate the models on 10 different random train-val-test splits for all the datasets except ogba, where we used the standard OGB split." |
| Hardware Specification | Yes | "All the experiments were conducted on Python 3.8.12 on an Ubuntu 18.04 PC with an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz processor, 512 GB RAM, and Tesla V100-SXM2 32 GB GPUs." |
| Software Dependencies | No | The paper mentions using GNNs from pytorch-geometric.readthedocs.io but does not specify the version of PyTorch Geometric or PyTorch itself. |
| Experiment Setup | Yes | "All the models were trained using an Adam optimizer for 500 epochs with the initial learning rate tuned between {0.01, 0.001}. The best-trained model was chosen using the validation accuracy and, in the case of multiple splits, the mean validation accuracy across splits." |
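The model-selection rule quoted in the Experiment Setup row (try each candidate learning rate, train once per random split, keep the rate with the highest mean validation accuracy across splits) can be sketched as below. This is a minimal illustration, not the authors' code: `train_and_validate` is a hypothetical stand-in for a full 500-epoch Adam training run on one split, and the synthetic accuracies are placeholders.

```python
import random

def train_and_validate(lr, split_seed):
    # Hypothetical stand-in for training a GNN with Adam for 500 epochs on
    # one train-val-test split; returns a synthetic validation accuracy.
    rng = random.Random(split_seed * 1000 + int(lr * 1000))
    return 0.75 + 0.1 * rng.random()

def select_best_lr(lrs=(0.01, 0.001), n_splits=10):
    # Mirror the quoted rule: with multiple splits, rank configurations by
    # the mean validation accuracy across all splits.
    mean_acc = {
        lr: sum(train_and_validate(lr, s) for s in range(n_splits)) / n_splits
        for lr in lrs
    }
    best = max(mean_acc, key=mean_acc.get)
    return best, mean_acc

best_lr, accs = select_best_lr()
print(best_lr, {lr: round(a, 3) for lr, a in accs.items()})
```

With a single fixed split the same selection reduces to comparing the two validation accuracies directly, as the quote states.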