Enhancing Spectral GNNs: From Topology and Perturbation Perspectives

Authors: Taoyang Qin, Ke-Jia Chen, Zheng Liu

ICML 2025

Reproducibility checklist (variable: result, followed by the LLM response):
Research Type: Experimental. "Extensive experiments on benchmark datasets for node classification demonstrate that incorporating the perturbed sheaf Laplacian enhances the performance of spectral GNNs." Section 6 (Experiment) presents the performance comparison between PSL-GNN and GNN baselines on the node classification task, and validates the effectiveness of the proposed perturbed sheaf Laplacian matrix through ablation studies.
Researcher Affiliation: Academia. "1 School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing, China; 2 Jiangsu Key Laboratory of Big Data Security & Intelligent Processing, Nanjing, China. Correspondence to: Ke-Jia Chen <EMAIL>, Zheng Liu <EMAIL>."
Pseudocode: No. No explicit pseudocode or algorithm blocks appear in the paper; the methodology is described through text, equations, and diagrams such as Figure 3 (model workflow).
Open Source Code: No. The paper provides no statement or link open-sourcing the described methodology. It mentions third-party tools such as torch-householder and PyTorch Geometric, but not the authors' own implementation.
Open Datasets: Yes. "We use seven benchmark datasets, categorized as follows: (1) Citation Networks: Cora (McCallum et al., 2000), Citeseer (Giles et al., 1998), and Pubmed (Sen et al., 2008); (2) Co-purchase Networks: Photo (Shchur et al., 2018); (3) Webpage Networks: Texas and Cornell (Shchur et al., 2018); (4) Actor Co-occurrence Network: Actor (Shchur et al., 2018)."
Dataset Splits: Yes. "We train all models in a fully-supervised split (60% / 20% / 20%)."
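The 60/20/20 fully-supervised split can be sketched as random boolean node masks. This is a minimal illustration in numpy; the helper name, seeding scheme, and mask representation are assumptions, not the authors' code.

```python
import numpy as np

def split_masks(num_nodes, train_ratio=0.6, val_ratio=0.2, seed=0):
    """Randomly partition node indices into train/val/test boolean masks.

    Illustrates a 60/20/20 fully-supervised split as stated in the
    paper; this exact helper is hypothetical.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_ratio * num_nodes)
    n_val = int(val_ratio * num_nodes)

    train_mask = np.zeros(num_nodes, dtype=bool)
    val_mask = np.zeros(num_nodes, dtype=bool)
    test_mask = np.zeros(num_nodes, dtype=bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask
```

Boolean masks of this shape plug directly into mask-based loss computation in PyTorch Geometric-style training loops.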
Hardware Specification: Yes. "For model training, we use the Adam optimizer to optimize all models on an NVIDIA GeForce RTX 4090 GPU."
Software Dependencies: Yes. "We construct every reflection matrix F(i,j) based on the method in (Obukhov, 2021)." The corresponding reference: Obukhov, A. Efficient Householder transformation in PyTorch, 2021. URL: github.com/toshas/torch-householder. Version 1.0.1, DOI: 10.5281/zenodo.5068733.
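The torch-householder package builds orthogonal matrices from products of Householder reflections on the GPU. As a minimal sketch of the underlying construction (standard linear algebra, not the authors' or the package's exact code), a single reflection is H = I - 2vvᵀ/(vᵀv), which is orthogonal:

```python
import numpy as np

def householder_reflection(v):
    """Householder reflection H = I - 2 v v^T / (v^T v).

    H is orthogonal (H @ H.T = I) and reflects vectors across the
    hyperplane orthogonal to v. This numpy version only illustrates
    the construction that torch-householder composes efficiently.
    """
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    d = v.shape[0]
    return np.eye(d) - 2.0 * (v @ v.T) / float(v.T @ v)
```

Products of d such reflections parameterize arbitrary d-dimensional orthogonal maps, which is what makes the construction suitable for learnable restriction maps.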
Experiment Setup: Yes. "We set the learning rate to 0.05, weight decay to 5e-4, and the number of hidden units to 64. We set the order of the polynomial filters (BernNet, APPNP, Graph-Heat, and Jacobi) to 10. For all PSL-GNN models, we search η within {1e-1, 1e-2, 1e-3, 1e-4} and d within {2, 3, 4} to achieve the best model performance. Additionally, we apply an early stopping mechanism with a maximum of 1500 epochs and a patience threshold of 100."
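The stated early-stopping schedule (at most 1500 epochs, patience 100 on the validation metric) can be sketched as a generic training loop. The function names are illustrative; `step_fn` stands in for one epoch of Adam updates plus validation, which the paper does not spell out in code.

```python
def train_with_early_stopping(step_fn, max_epochs=1500, patience=100):
    """Run step_fn(epoch) -> validation loss until the loss has not
    improved for `patience` consecutive epochs, or max_epochs is hit.

    Mirrors the early-stopping schedule stated in the paper; the
    loop itself is a hypothetical sketch.
    """
    best_loss = float("inf")
    best_epoch = 0
    for epoch in range(max_epochs):
        val_loss = step_fn(epoch)
        if val_loss < best_loss:
            best_loss, best_epoch = val_loss, epoch
        elif epoch - best_epoch >= patience:
            break
    return best_epoch, best_loss
```

The grid search over η ∈ {1e-1, ..., 1e-4} and d ∈ {2, 3, 4} would simply wrap this loop, keeping the configuration with the best validation result.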