Non-backtracking Graph Neural Networks

Authors: Seonghyun Park, Narae Ryu, Gahee Kim, Dongyeop Woo, Se-Young Yun, Sungsoo Ahn

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Furthermore, we empirically verify the effectiveness of our NBA-GNN on the long-range graph benchmark and transductive node classification problems. Finally, we empirically evaluate our NBA-GNN on the long-range graph benchmark (Dwivedi et al., 2022) and transductive node classification problems (Sen et al., 2008; Pei et al., 2019).
Researcher Affiliation | Academia | 1 POSTECH, 2 KAIST
Pseudocode | No | The paper describes the method using mathematical equations and provides an 'Implementation' section with an example of message-passing updates (Equation 5) for NBA-GCN, but it does not include a distinct block labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | The code is available at https://github.com/seonghyun26/nba-gnn
Open Datasets | Yes | Finally, we empirically evaluate our NBA-GNN on the long-range graph benchmark (Dwivedi et al., 2022) and transductive node classification problems (Sen et al., 2008; Pei et al., 2019). We validate our method using three datasets from the LRGB benchmark: Peptides-func (graph classification), Peptides-struct (graph regression), and Pascal VOC-SP (node classification). To validate the effectiveness of non-backtracking in transductive node classification tasks, we conduct experiments on three citation networks (Cora, CiteSeer, and Pubmed) (Sen et al., 2008) and three heterophilic datasets (Texas, Wisconsin, and Cornell) (Pei et al., 2019).
Dataset Splits | Yes | For the citation networks, we employed the dataset splitting procedure outlined in Yang et al. (2016). In contrast, for the heterophilic datasets, we randomly divided the nodes of each class into training (60%), validation (20%), and testing (20%) sets. Table 10: Statistics of datasets in LRGB: Pascal VOC-SP [...] Splits 75/12.5/12.5; Peptides-func [...] Splits 70/15/15; Peptides-struct [...] Splits 70/15/15.
Hardware Specification | Yes | All experiments were conducted on a single RTX 3090.
Software Dependencies | No | The paper mentions using an AdamW optimizer (Loshchilov & Hutter, 2018) but does not provide version numbers for any software, libraries, or programming languages.
Experiment Setup | Yes | We use an AdamW optimizer (Loshchilov & Hutter, 2018) with lr decay=0.1, min lr=1e-5, momentum=0.9, and base learning rate lr=0.001 (0.0005 for Pascal VOC-SP). We use a cosine scheduler with reduce factor=0.5 and schedule patience=10, with 50 warm-up epochs. We searched layers 6 to 12 for Pascal VOC-SP, and layers 5 to 20 for Peptides-func and Peptides-struct. The hidden dimension was chosen as the maximum number within the parameter budget. Dropout was searched from 0.0 to 0.8 in steps of 0.1 for Pascal VOC-SP, and from 0.1 to 0.4 in steps of 0.1 for Peptides-func and Peptides-struct. We used a batch size of 30 for Pascal VOC-SP (limited by GPU memory), and 200 for Peptides-func and Peptides-struct. The training duration spanned 1,000 epochs for citation networks and 100 epochs for heterophilic datasets. The model's hidden dimension and dropout ratio were set to 512 and 0.2, respectively, consistent across all datasets, after fine-tuning these hyperparameters on the Cora dataset. Additionally, we conducted optimization for the number of convolutional layers within the set {1, 2, 3, 4, 5}.
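Although the paper provides no pseudocode block, the non-backtracking principle behind its message passing can be illustrated with a toy sketch. Everything below (the function name, scalar edge messages, plain summation) is a hypothetical simplification for intuition only, not the paper's NBA-GCN update from Equation 5: messages live on directed edges, and the update for edge (u, v) aggregates incoming messages while excluding the reverse edge (v, u).

```python
def nba_update(h, edges):
    """One toy non-backtracking message-passing step (illustrative only).

    `h` maps each directed edge (u, v) to a scalar message. The new
    message on (u, v) sums incoming messages h[(w, u)] over edges
    (w, u) with w != v, so a message never immediately backtracks
    along the edge it arrived on. The real NBA-GCN update also has
    learned weights, nonlinearities, and node features.
    """
    new_h = {}
    for (u, v) in edges:
        # Aggregate over edges ending at u, excluding the reverse edge (v, u).
        new_h[(u, v)] = sum(h[(w, x)] for (w, x) in edges if x == u and w != v)
    return new_h


# Path graph 1-2-3 with both edge directions, all messages initialized to 1.
edges = [(1, 2), (2, 1), (2, 3), (3, 2)]
h = {e: 1.0 for e in edges}
out = nba_update(h, edges)
```

On this path graph, edge (1, 2) receives nothing, since the only edge into node 1 is the excluded reverse edge (2, 1); edge (2, 3) receives only the message from (1, 2).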
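The per-class 60/20/20 random split reported for the heterophilic datasets could be sketched as follows; the function name, seed handling, and plain-Python label representation are assumptions for illustration, not the paper's code.

```python
import random

def per_class_split(labels, train_frac=0.6, val_frac=0.2, seed=0):
    """Randomly split node indices of each class into train/val/test sets.

    Hypothetical sketch of the 60/20/20 per-class split described in the
    paper; `labels` is a list where labels[i] is the class of node i.
    """
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)

    splits = {"train": [], "val": [], "test": []}
    for nodes in by_class.values():
        rng.shuffle(nodes)
        n_tr = int(len(nodes) * train_frac)
        n_va = int(len(nodes) * val_frac)
        splits["train"] += nodes[:n_tr]
        splits["val"] += nodes[n_tr:n_tr + n_va]
        # Remaining nodes (about 20%) go to the test set.
        splits["test"] += nodes[n_tr + n_va:]
    return splits
```

Splitting within each class, rather than over all nodes at once, keeps the class proportions of the three sets roughly equal (a stratified split).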
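The reported learning-rate schedule (base lr=0.001, min lr=1e-5, 50 warm-up epochs, cosine decay) could look roughly like the sketch below. The total epoch count and the linear warm-up shape are assumptions, and the reported reduce factor/patience values (which belong to a plateau-style scheduler) are not modeled here.

```python
import math

def cosine_lr(epoch, base_lr=1e-3, min_lr=1e-5, warmup=50, total=250):
    """Cosine learning-rate schedule with linear warm-up (sketch).

    Hypothetical reconstruction of the schedule described in the paper;
    `total` is an assumed number of training epochs, not a reported value.
    """
    if epoch < warmup:
        # Linear warm-up from base_lr/warmup up to base_lr.
        return base_lr * (epoch + 1) / warmup
    # Cosine decay from base_lr down to min_lr over the remaining epochs.
    progress = (epoch - warmup) / max(1, total - warmup)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

The warm-up phase ends exactly at the base rate, and the cosine phase approaches (but never drops below) the minimum rate at the final epoch.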