Edge Prompt Tuning for Graph Neural Networks

Authors: Xingbo Fu, Yinhan He, Jundong Li

ICLR 2025

Reproducibility Variable Result LLM Response
Research Type: Experimental. We provide comprehensive theoretical analyses of our method regarding its capability of handling node classification and graph classification as downstream tasks. Extensive experiments on ten graph datasets under four pre-training strategies demonstrate the superiority of our proposed method against six baselines. Our code is available at https://github.com/xbfu/EdgePrompt.
Researcher Affiliation: Academia. Xingbo Fu, University of Virginia, EMAIL; Yinhan He, University of Virginia, EMAIL; Jundong Li, University of Virginia, EMAIL.
Pseudocode: No. The paper describes mathematical formulations and methods in paragraph text and equations (e.g., Equations 1 and 2) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. Our code is available at https://github.com/xbfu/EdgePrompt.
Open Datasets: Yes. We evaluate the effectiveness of our proposed method on node classification over five public graph datasets, including Cora (Yang et al., 2016), CiteSeer (Yang et al., 2016), PubMed (Yang et al., 2016), ogbn-arxiv (Hu et al., 2020a), and Flickr (Zeng et al., 2020). In addition, we adopt five graph datasets from TUDataset (Morris et al., 2020), including ENZYMES, DD, NCI1, NCI109, and Mutagenicity, to conduct experiments for graph classification.
Dataset Splits: Yes. We use the 5-shot setting for node classification tasks and the 50-shot setting for graph classification tasks. We conduct experiments five times with different random seeds and report the average results in our experiments.
Hardware Specification: No. The paper specifies GNN models (2-layer GCN, 5-layer GIN) and general training parameters, but no specific hardware components (e.g., GPU models, CPU types, memory) are mentioned for running the experiments.
Software Dependencies: No. The paper mentions using an Adam optimizer, but no specific software library names with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x, Python 3.x) are provided to replicate the software environment.
Experiment Setup: Yes. We use a 2-layer GCN (Kipf & Welling, 2017) as the backbone for node classification tasks and a 5-layer GIN (Xu et al., 2019) as the backbone for graph classification tasks. The size of hidden layers is set as 128. The classifier adopted for downstream tasks is linear probes for all the methods. We use an Adam optimizer (Kingma & Ba, 2015) with learning rates 0.001 for all the methods. The batch size is set as 32. The number of epochs is set to 200 for graph prompt tuning. The default number of anchor prompts at each GNN layer is 10 for node classification tasks and 5 for graph classification tasks.
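The split protocol reported above (5-shot node classification, 50-shot graph classification, five repetitions with different random seeds) can be sketched in plain Python. This is a generic k-shot sampler written to illustrate the protocol; the function name and label layout are assumptions, not the authors' released split code.

```python
import random
from collections import defaultdict

def k_shot_split(labels, k, seed):
    """Sample k training examples per class; the rest form the eval pool.

    `labels` maps example index -> class label. Illustrative sketch only,
    not the authors' code.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in labels.items():
        by_class[y].append(idx)
    train = []
    for _, idxs in sorted(by_class.items()):
        train.extend(rng.sample(idxs, k))
    train_set = set(train)
    eval_pool = [i for i in labels if i not in train_set]
    return train, eval_pool

# Five repetitions with different seeds, matching the paper's protocol.
labels = {i: i % 3 for i in range(60)}  # toy 3-class labeling
splits = [k_shot_split(labels, k=5, seed=s) for s in range(5)]
```

With 3 classes, each split holds 5 examples per class in the training set and leaves the remaining examples for evaluation; averaging over the five seeds mirrors the reported evaluation procedure.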
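The reported hyperparameters (2-layer GCN or 5-layer GIN backbone, hidden size 128, Adam with learning rate 0.001, batch size 32, 200 prompt-tuning epochs, and 10 or 5 anchor prompts per layer) can be gathered into a small config helper for replication. The class and field names here are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class EdgePromptConfig:
    """Hyperparameters as reported in the paper; field names are assumed."""
    task: str                 # "node" or "graph" classification
    backbone: str             # GCN for node tasks, GIN for graph tasks
    num_layers: int
    anchor_prompts: int       # anchor prompts per GNN layer
    hidden_dim: int = 128
    lr: float = 0.001         # Adam learning rate
    batch_size: int = 32
    epochs: int = 200         # graph prompt tuning epochs

def default_config(task: str) -> EdgePromptConfig:
    """Return the paper's reported defaults for a given task type."""
    if task == "node":
        return EdgePromptConfig(task, "GCN", num_layers=2, anchor_prompts=10)
    if task == "graph":
        return EdgePromptConfig(task, "GIN", num_layers=5, anchor_prompts=5)
    raise ValueError(f"unknown task: {task}")
```

A replication script could build `default_config("node")` or `default_config("graph")` and pass the fields to its model and optimizer constructors.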