SA-MLP: Distilling Graph Knowledge from GNNs into Structure-Aware MLP
Authors: Jie Chen, Mingyuan Bai, Shouzhen Chen, Junbin Gao, Junping Zhang, Jian Pu
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on eight benchmark datasets under both transductive and online settings show that our SA-MLP can consistently achieve similar or even better results than teacher GNNs while maintaining inference speed as fast as MLPs. |
| Researcher Affiliation | Academia | Jie Chen (Fudan University), Mingyuan Bai (RIKEN AIP), Shouzhen Chen (Fudan University), Junbin Gao (University of Sydney), Junping Zhang (Fudan University), Jian Pu (Fudan University) |
| Pseudocode | No | The paper describes the model and methods using mathematical equations and textual descriptions, but no clearly labeled "Pseudocode" or "Algorithm" blocks are present. |
| Open Source Code | Yes | Our code is available at https://github.com/JC-202/SA-MLP. |
| Open Datasets | Yes | To evaluate the performance of the proposed SA-MLP, we consider eight public benchmark datasets, including three citation datasets Sen et al. (2008) (Cora, Citeseer, Pubmed), two larger OGB datasets Hu et al. (2020) (Arxiv, Products), and three heterophily datasets (Chameleon, Squirrel, Arxiv-year) Pei et al. (2020); Lim et al. (2021) |
| Dataset Splits | Yes | We used the standard public splits of OGB datasets, and ten frequently used fully supervised splits (48%/32%/20% of nodes per class for train/validation/test) provided by Pei et al. (2020); Zhu et al. (2020) of other datasets for a fair comparison and reproduction. |
| Hardware Specification | No | The paper discusses inference time comparisons but does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments or for training the models. |
| Software Dependencies | No | The paper mentions software like PyTorch and optimizers like Adam, but does not specify any version numbers for these software components or libraries. |
| Experiment Setup | Yes | Following the standard setting Hu et al. (2020); Bo et al. (2021), we fix the hidden dimension of SA-MLP as 128 for all datasets except 64 for Products. The number of layers in SA-MLP is 2 for all datasets for inference efficiency. We use Adam Kingma & Ba (2014) for optimization, Layer Norm Ba et al. (2016), and tune other hyper-parameters, including dropout rate from [0, 0.2, 0.5], learning rate from [0.01, 0.005, 0.05], weight decay from [0, 5e-4, 5e-5], δ from [0, 0.2, 0.5], and λ from [0.5, 0.8, 1] for distillation via validation sets of each dataset. |
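The hyper-parameter grid quoted above is concrete enough to reproduce the search loop. A minimal sketch, assuming a plain exhaustive grid enumerated over the reported value lists (the `grid_configs` helper and key names are illustrative, not from the paper's code):

```python
from itertools import product

# Hyper-parameter grid quoted from the paper's experiment setup.
# Key names (e.g. "delta", "lam") are illustrative labels for the
# paper's δ and λ, not identifiers from the authors' repository.
GRID = {
    "dropout": [0, 0.2, 0.5],
    "lr": [0.01, 0.005, 0.05],
    "weight_decay": [0, 5e-4, 5e-5],
    "delta": [0, 0.2, 0.5],
    "lam": [0.5, 0.8, 1],
}


def grid_configs(grid):
    """Enumerate every combination in the grid for validation-set tuning."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))


configs = list(grid_configs(GRID))
# 3 * 3 * 3 * 3 * 3 = 243 candidate configurations per dataset,
# each scored on that dataset's validation split.
```

Each configuration would be paired with the fixed settings from the quote (hidden dimension 128, or 64 for Products; 2 layers; Adam; Layer Norm) and the best one selected on the validation split.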