Unlocking the Potential of Black-box Pre-trained GNNs for Graph Few-shot Learning

Authors: Qiannan Zhang, Shichao Pei, Yuan Fang, Xiangliang Zhang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real-world datasets for few-shot node classification validate the effectiveness of our proposed method in the black-box setting.
Researcher Affiliation | Academia | 1Cornell University, USA; 2University of Massachusetts Boston, USA; 3Singapore Management University, Singapore; 4University of Notre Dame, USA. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the proposed model and optimization steps using mathematical equations and textual explanations, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Implementation can be found at https://github.com/repograph/metabp.
Open Datasets | Yes | We leverage four real-world graph datasets for experimental evaluation following previous works (Zhou et al. 2019; Wu et al. 2022), including Cora (Yang, Cohen, and Salakhutdinov 2016), Amazon Computers (Zhang et al. 2022b), Cora-full (Bojchevski and Günnemann 2018), and OGBN-arxiv (Hu et al. 2020a).
Dataset Splits | Yes | For dataset splitting (train/val/test), we used ratios of 3/2/2 for Cora, 4/3/3 for Computers, 25/20/25 for Cora-Full, and 20/10/10 for OGBN-Arxiv.
Hardware Specification | Yes | We implement Meta-BP in PyTorch with an NVIDIA Tesla V100 GPU and use a two-layer DGI of 256 hidden units as the black-box pre-trained GNN.
Software Dependencies | No | The paper mentions PyTorch as a software framework but does not provide a specific version number. No other software dependencies with version numbers are listed.
Experiment Setup | Yes | Dimensions of the learnable transformation layer in GML upon node representations are determined via a grid search over {4, 8, 32, 64, 128}. The neural estimator is established as a two-layer MLP with 64 units. β is 1.0 for the information bottleneck regularization and α is 0.1 for meta-optimization. Learning rates of all models are searched from {0.01, 0.005, 0.001, 0.0005, 0.0001}. MAML-based approaches including Meta-BP adopt two fast updates with a step size of 0.05, except that on Amazon Computers it applies 0.01 as the step size.
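The reported experiment setup can be gathered into a single configuration sketch. This is illustrative only: the key names, helper functions, and structure below are assumptions, not taken from the authors' released code, though the values match the quoted setup.

```python
# Illustrative collection of the hyperparameters quoted above.
# Names (CONFIG, fast_lr_for, search_space) are assumptions, not the
# authors' actual identifiers; values follow the reported setup.
from itertools import product

CONFIG = {
    "transform_dim_grid": [4, 8, 32, 64, 128],   # GML transformation layer, grid-searched
    "estimator_hidden_units": 64,                # two-layer MLP neural estimator
    "beta_ib": 1.0,                              # information-bottleneck weight (beta)
    "alpha_meta": 0.1,                           # meta-optimization weight (alpha)
    "lr_grid": [0.01, 0.005, 0.001, 0.0005, 0.0001],
    "num_fast_updates": 2,                       # MAML inner-loop steps
    "fast_lr": {"default": 0.05, "Amazon Computers": 0.01},
}

def fast_lr_for(dataset: str) -> float:
    """Inner-loop step size: 0.01 on Amazon Computers, 0.05 elsewhere."""
    return CONFIG["fast_lr"].get(dataset, CONFIG["fast_lr"]["default"])

def search_space():
    """All (transform_dim, learning_rate) pairs covered by the grid search."""
    return list(product(CONFIG["transform_dim_grid"], CONFIG["lr_grid"]))
```

Such a sketch makes the grid-search footprint explicit: 5 transformation dimensions × 5 learning rates gives 25 candidate settings per dataset, with only the MAML inner-loop step size varying by dataset.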