Sparse Decomposition of Graph Neural Networks

Authors: Yaochen Hu, Mai Zeng, Ge Zhang, Pavel Rumiantsev, Liheng Ma, Yingxue Zhang, Mark Coates

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup, achieving significant accuracy gains with comparable inference times for both node classification and spatio-temporal forecasting tasks.
Researcher Affiliation | Collaboration | Yaochen Hu (EMAIL), Huawei Noah's Ark Lab, Montreal, Canada; Mai Zeng (EMAIL), McGill University & Mila & ILLS, Montreal, Canada; Ge Zhang (EMAIL), Huawei Noah's Ark Lab, Toronto, Canada
Pseudocode | Yes | Algorithm 1: SDGNN Computation
Open Source Code | No | The paper does not provide an explicit statement about releasing the code or a direct link to a code repository for the methodology described in this paper.
Open Datasets | Yes | Following Zhang et al. (2021) and Tian et al. (2023), we conduct experiments on five widely used benchmark datasets from Shchur et al. (2018), namely, Cora, Citeseer, Pubmed, Computer and Photo. We also examine performance on two large-scale datasets, Arxiv and Products, from the OGB benchmarks (Hu et al., 2020).
Dataset Splits | Yes | For the 5 small datasets (Cora, Citeseer, Pubmed, Computer and Photo), we randomly split the nodes with a 6:2:2 ratio into training, validation and testing sets. Experiments are conducted using 10 random seeds, as in Pei et al. (2020). For Arxiv and Products, we follow the fixed predefined data splits specified in Hu et al. (2020), run the experiments 10 times, and report the mean and standard deviation.
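The 6:2:2 random node split repeated over 10 seeds described above could be reproduced along these lines (a minimal sketch using NumPy; the function name, ratio argument, and the Cora node count used in the usage line are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np

def split_nodes(num_nodes, seed, ratios=(0.6, 0.2, 0.2)):
    """Randomly partition node indices into train/val/test sets at a 6:2:2 ratio."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)        # seeded random order of node indices
    n_train = int(ratios[0] * num_nodes)
    n_val = int(ratios[1] * num_nodes)
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:]
    return train, val, test

# Repeat over 10 random seeds, as in the protocol above; 2708 nodes is
# the commonly reported size of Cora, used here only for illustration.
splits = [split_nodes(2708, seed) for seed in range(10)]
```

Results over the 10 splits would then be aggregated as mean and standard deviation, matching how the paper reports Arxiv and Products runs.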
Hardware Specification | Yes | We ran all tests on a machine equipped with an Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz and an NVIDIA Tesla V100 GPU.
Software Dependencies | No | The paper mentions "We adopted the implementation from the scikit-learn library" but does not specify the library version or any other software dependencies with version numbers.
Experiment Setup | Yes | Table 5: Summary of the hyper-parameters for base GNN models. Table 6: Summary of the hyper-parameters in the student MLP models for GLNN. Table 7: Summary of the hyper-parameters in the student MLP models for NOSMOG. Table 8: Summary of the hyper-parameters for CoHOp. Table 9: Summary of the hyper-parameters for training SDGNN. Table 12: Sets of hyper-parameters for GRU-GCN in the spatio-temporal setting experiment. Table 13: Sets of hyper-parameters for SDGNN in the spatio-temporal setting experiment.