Training-free Graph Neural Networks and the Power of Labels as Features
Authors: Ryoma Sato
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experiments, we confirm that TFGNNs outperform existing GNNs in the training-free setting and converge with much fewer training iterations than traditional GNNs. |
| Researcher Affiliation | Academia | Ryoma Sato, National Institute of Informatics |
| Pseudocode | No | The paper defines the TFGNN architecture and its initialization using mathematical equations (19-29) rather than a separate pseudocode or algorithm block. |
| Open Source Code | Yes | Reproducibility: Our code is available at https://github.com/joisino/laf. |
| Open Datasets | Yes | We use the Planetoid datasets (Cora, CiteSeer, PubMed) [54], Coauthor datasets, and Amazon datasets [42] in the experiments. |
| Dataset Splits | Yes | We use 20 nodes per class for training, 500 nodes for validation, and the rest for testing in the Planetoid datasets following Kipf et al. [20], and use 20 nodes per class for training, 30 nodes per class for validation, and the rest for testing in the Coauthor and Amazon datasets following Shchur et al. [42]. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper mentions using AdamW for training but does not provide specific version numbers for any software libraries or frameworks used in the experiments. |
| Experiment Setup | Yes | We use three layered models with the hidden dimension 32 unless otherwise specified. We train all the models with AdamW [25] with learning rate 0.0001 and weight decay 0.01. |
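The split protocol and hyperparameters reported in the table above are concrete enough to sketch in code. Below is a minimal, illustrative reproduction of the Planetoid-style split (20 labeled nodes per class for training, 500 nodes for validation, the rest for testing) together with the stated training configuration. The function name, seeding, and `config` dictionary are assumptions for illustration, not artifacts from the paper or its repository.

```python
import random
from collections import defaultdict

# Hyperparameters as reported in the paper's experiment setup
# (the dictionary keys are our own naming, not the paper's):
config = {
    "num_layers": 3,
    "hidden_dim": 32,
    "optimizer": "AdamW",
    "learning_rate": 1e-4,
    "weight_decay": 0.01,
}

def planetoid_style_split(labels, train_per_class=20, num_val=500, seed=0):
    """Split node indices following the Planetoid convention described
    in the paper: `train_per_class` labeled nodes per class for training,
    `num_val` nodes for validation, and the rest for testing.

    `labels` is a list of integer class labels indexed by node id.
    The seed is an assumption; the paper does not specify one here.
    """
    rng = random.Random(seed)

    # Group node ids by class so we can sample per-class training nodes.
    by_class = defaultdict(list)
    for node, y in enumerate(labels):
        by_class[y].append(node)

    train = []
    for nodes in by_class.values():
        rng.shuffle(nodes)
        train.extend(nodes[:train_per_class])

    # Validation and test are drawn from the remaining nodes.
    train_set = set(train)
    remaining = [n for n in range(len(labels)) if n not in train_set]
    rng.shuffle(remaining)
    val = remaining[:num_val]
    test = remaining[num_val:]
    return train, val, test
```

For the Coauthor and Amazon datasets, the same function would be called with a per-class validation budget instead (30 nodes per class), which would require grouping the validation sample by class as well.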