AutoSGNN: Automatic Propagation Mechanism Discovery for Spectral Graph Neural Networks
Authors: Shibing Mo, Kai Wu, Qixuan Gao, Xiangyi Teng, Jing Liu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on nine widely-used datasets, encompassing both homophilic and heterophilic graphs, demonstrate that AutoSGNN outperforms state-of-the-art spectral GNNs and graph neural architecture search methods in both performance and efficiency. [...] We conducted a comprehensive evaluation of AutoSGNN across nine widely studied graph datasets. Extensive experimental results demonstrate that AutoSGNN exhibits exceptional competitiveness in both algorithmic performance and time complexity, outperforming or matching state-of-the-art spectral GNNs and GNN-NAS methods. |
| Researcher Affiliation | Academia | 1) School of Artificial Intelligence, Xidian University; 2) Guangzhou Institute of Technology, Xidian University |
| Pseudocode | No | The paper includes an 'Individual Content Schematic' showing a 'Design Ideas Description' and a 'Code' block with comments like 'Algorithm......' for the *generated SGNNs*, but no clearly labeled algorithm block or pseudocode for the AutoSGNN framework itself. The framework's process is described textually and visually in Figure 2 without algorithmic steps. |
| Open Source Code | Yes | Code: https://github.com/Explorermomo/AAAI2025AutoSGNN |
| Open Datasets | Yes | Experiment Setting Datasets We utilize five widely-used homophilic graphs, including the citation graphs Cora, CiteSeer, and PubMed, as well as the Amazon co-purchase graphs Computers and Photo. In addition, we employed four heterophilic benchmark datasets, including the Wikipedia graphs Chameleon and Squirrel, and the webpage graphs Texas and Cornell from WebKB. |
| Dataset Splits | Yes | Following (Zheng et al. 2024; Chien et al. 2021), for the node classification task under the transductive setting, we randomly sparse split the node set into train/val/test datasets with ratio 2.5%:2.5%:95%. We have also included the experimental results for the randomly dense split (60%:20%:20%) in Appendix F. |
| Hardware Specification | Yes | All search experiments are conducted on a machine equipped with four NVIDIA GeForce RTX 3090 GPUs, two Intel(R) Xeon(R) Silver 4210 CPUs (2.20 GHz), and 252GB of RAM. |
| Software Dependencies | No | The paper mentions using the 'GPT-3.5-turbo pre-trained LLM' and 'ChatGPT-4o' for generating content, and implies the use of PyTorch and PyTorch Geometric in code snippets (Figure 3). However, it does not provide specific version numbers for any software dependencies required to replicate the experiments. |
| Experiment Setup | Yes | For AutoSGNN, the number of iterative search generations is set to 30, and the evolutionary strategy prompts for E1 and E2 both contain P1 = P2 = 4 elite individuals. The GPT-3.5-turbo pre-trained LLM is used, with the parallel response number set to 4. Through parallel experiments with 3 prompts, the number of candidate individuals generated per cycle is 12. We use node classification accuracy on the validation dataset as the fitness metric, saving the top 30 individuals per cycle as elite individuals. We employ 2-layer generated spectral GNN layers and use a 2-layer MLP for feature transformation. Similar to other spectral GNN baselines, we follow the experimental setup in (Zheng et al. 2024) and use the best hyperparameter combinations provided in the original papers for each dataset. For GNN-NAS baselines, we follow the experimental setup in (Liu and Liu 2023), and like AutoSGNN, we set the population size and number of search generations to 30. To prevent overfitting during the evaluator's training for each SGNN, we set an early stopping criterion of 200 epochs. To remove the effects of randomness, all experiments are run independently for 10 trials and the resulting means and standard deviations are reported. [...] we set a timeout duration (Timeout Duration = 600s). |
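The random sparse split quoted above (train/val/test = 2.5%:2.5%:95% under the transductive setting) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the use of NumPy boolean masks are our own assumptions.

```python
import numpy as np

def random_sparse_split(num_nodes, train_ratio=0.025, val_ratio=0.025, seed=0):
    """Randomly split nodes into train/val/test masks (default 2.5%:2.5%:95%).

    Hypothetical helper mirroring the sparse-split protocol described in the
    paper; the dense split would use train_ratio=0.6, val_ratio=0.2.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)          # random node ordering
    n_train = int(num_nodes * train_ratio)
    n_val = int(num_nodes * val_ratio)
    train_mask = np.zeros(num_nodes, dtype=bool)
    val_mask = np.zeros(num_nodes, dtype=bool)
    test_mask = np.zeros(num_nodes, dtype=bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True   # remaining ~95% of nodes
    return train_mask, val_mask, test_mask
```

In a PyTorch Geometric pipeline, these boolean masks would typically be converted to tensors and attached to the `Data` object as `train_mask`/`val_mask`/`test_mask`.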
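The search configuration quoted in the Experiment Setup row (30 generations, 4 elite individuals per prompt, 12 candidates per cycle, validation accuracy as fitness, a top-30 elite pool, and a 600 s evaluation timeout) implies an evolutionary loop of roughly the following shape. This is a skeleton under our own assumptions: `generate_candidates` (the LLM-prompting step) and `evaluate` (training the generated SGNN and returning validation accuracy, or `None` on failure/timeout) are hypothetical callables, not functions from the released code.

```python
import random

def evolutionary_search(generate_candidates, evaluate, generations=30,
                        elite_pool_size=30, n_prompt_elites=4):
    """Sketch of an LLM-driven evolutionary search loop.

    generate_candidates(prompt_elites) -> list of candidate designs
        (per the paper, 3 prompts x 4 parallel responses = 12 per cycle)
    evaluate(candidate) -> fitness (validation accuracy),
        or None if the candidate fails to run or hits the 600 s timeout
    """
    elites = []  # (fitness, candidate) pairs, kept sorted by fitness, best first
    for _ in range(generations):
        # Feed the current best individuals back into the prompts.
        prompt_elites = [cand for _, cand in elites[:n_prompt_elites]]
        for cand in generate_candidates(prompt_elites):
            fitness = evaluate(cand)
            if fitness is None:
                continue  # discard failed or timed-out candidates
            elites.append((fitness, cand))
        elites.sort(key=lambda fc: fc[0], reverse=True)
        elites = elites[:elite_pool_size]  # keep top-30 elite pool
    return elites
```

The top-30 elite pool acts as both the selection mechanism and the memory carried between generations; only the best 4 individuals are surfaced in the prompts.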