Rethinking Graph Neural Networks From A Geometric Perspective Of Node Features
Authors: Feng Ji, Yanan Zhao, Kai Zhao, Hanyang Meng, Jielong Yang, Wee Peng Tay
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | From Section 5 (Experiments): "The results are shown in Table 2. Among 48 comparisons, the above tricks significantly (in terms of p-value) improve the performance in 34 instances, and the improvement is insignificant in 14 instances, mainly for Chameleon, Squirrel, and Actor datasets." |
| Researcher Affiliation | Academia | 1School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 2School of Internet of Things Engineering, Jiangnan University, Wuxi, China |
| Pseudocode | No | The paper describes methods using mathematical formulas and textual explanations rather than structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code to reproduce our results can be found at https://github.com/YananZhao0630/M-AE-M-AEN (base model: ACM-GCN). |
| Open Datasets | Yes | The following datasets are used and studied at various places in the paper (including the appendices): Cora, Citeseer, PubMed, Ogbn-arxiv, Texas, Cornell, Wisconsin, Chameleon, Squirrel, Actor, Penn94, arXiv-year, and genius (see Lim et al. (2021)). |
| Dataset Splits | Yes | Table 6: Dataset statistics... Data splits: standard, 48%/32%/20%, 50%/25%/25% |
| Hardware Specification | Yes | Experiments are performed on a workstation with a single NVIDIA GeForce RTX 3090 GPU and 24GB memory. |
| Software Dependencies | No | The paper mentions several GNN models (GCN, GAT, ACM-GCN, GraphCON, CDE, GloGNN) and states that experiments are performed on a workstation, but does not specify software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | D.2 HYPERPARAMETERS There are two hyperparameters used in the tricks in Section 5: η for the edge addition probability and E for the number of epochs in training. They are tuned according to the following general procedure. For η, we will first consider η = 1, 0.5, 0.2, 0.1, 0.001, 0.0001 and then fine-tune around one of these values using the validation set... For the number of epochs, let E0 be the number of epochs of the base model (E0 = 1000 for CDE and GloGNN, and E0 = 200 for other base models). We consider E = E0/20, E0/10, E0/4, E0/2 and then fine-tune around one of these values. |
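The coarse-then-fine tuning procedure quoted above can be sketched as follows. This is a hypothetical illustration, not the authors' code: `validation_score` stands in for training the base model with a candidate value and evaluating on the validation set, and the neighbour-probing rule for the fine-tuning step is an assumption, since the paper only says "fine-tune around one of these values".

```python
def coarse_then_fine(candidates, validation_score, refine_factor=0.5):
    """Pick the best coarse candidate by validation score, then probe
    a few nearby values around it (the 'fine-tune' step)."""
    best = max(candidates, key=validation_score)
    # Probe neighbours at +/- refine_factor around the coarse winner.
    neighbours = [best * (1 - refine_factor), best, best * (1 + refine_factor)]
    return max(neighbours, key=validation_score)

# Coarse grids taken from the paper's D.2 description.
eta_grid = [1, 0.5, 0.2, 0.1, 0.001, 0.0001]
E0 = 200  # base-model epochs (1000 for CDE and GloGNN)
epoch_grid = [E0 // 20, E0 // 10, E0 // 4, E0 // 2]  # E0/20, E0/10, E0/4, E0/2

# Toy stand-in score with a unique optimum, just to make the sketch runnable.
score = lambda x: -abs(x - 0.12)
best_eta = coarse_then_fine(eta_grid, score)
```

In practice each call to `validation_score` would be a full training run, so the coarse grid keeps the number of runs small before the local refinement.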