SpectralGap: Graph-Level Out-of-Distribution Detection via Laplacian Eigenvalue Gaps
Authors: Jiawei Gu, Ziyue Qiao, Zechao Li
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To comprehensively evaluate the effectiveness of the SpecGap method, we design a series of experiments. This section details the experimental setup, main results, and in-depth ablation studies. Our extensive experiments demonstrate that SpecGap significantly outperforms existing methods, reducing the average false positive rate (FPR95) by 15.40% compared to the previous best approach. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Nanjing University of Science and Technology; (2) School of Computing and Information Technology, Great Bay University; (3) Dongguan Key Laboratory for Intelligence and Information Technology. EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | The algorithm proceeds by iteratively constructing a sequence of orthonormal vectors $\{q_j\}_{j=1}^{k}$ and scalars $\{\alpha_j\}_{j=1}^{k}$ and $\{\beta_j\}_{j=1}^{k-1}$. At each iteration, the algorithm computes: $w = Lq_j - \beta_{j-1} q_{j-1}$ (9), $\alpha_j = w^{T} q_j$ (10), $w = w - \alpha_j q_j$ (11), $\beta_j = \lVert w \rVert_2$ (12), $q_{j+1} = w / \beta_j$ (13). |
| Open Source Code | No | The implementation code will be released upon acceptance. |
| Open Datasets | Yes | Our experiments utilize five pairs of datasets, representing in-distribution (ID) and out-of-distribution (OOD) data respectively. These pairs are selected from the TU datasets [Morris et al., 2020] and Open Graph Benchmark (OGB) [Hu et al., 2020], covering molecular, social network, and bioinformatics domains. |
| Dataset Splits | Yes | We follow the dataset split strategy from previous works [Liu et al., 2023]: 80% of ID graphs for training, and the remaining 20% split equally between validation and test sets. These latter sets are augmented with an equal number of OOD graphs, creating a realistic and challenging evaluation scenario. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. It only mentions 'The computational resources are supported by Song Shan Lake HPC Center (SSL-HPC) in Great Bay University'. |
| Software Dependencies | No | The paper does not provide specific software dependencies (library or solver names with version numbers) used to replicate the experiment. |
| Experiment Setup | No | The paper discusses how SpecGap is applied within GNN models and where it sits in the pipeline, but does not provide specific hyperparameters such as learning rates, batch sizes, number of epochs, or optimizer settings for the GNN models themselves (GCL, JOAO, GIN, PPGN), which are described as 'well-trained'. |
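The FPR95 figure quoted in the Research Type row (false positive rate on OOD graphs at 95% true positive rate on ID graphs) is a standard OOD-detection metric. A minimal sketch of how it is typically computed, assuming higher scores mean more in-distribution (the function name `fpr_at_95_tpr` is ours, not the paper's):

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of OOD samples scoring above the threshold that
    retains 95% of ID samples (higher score = more in-distribution)."""
    id_scores = np.asarray(id_scores, dtype=float)
    ood_scores = np.asarray(ood_scores, dtype=float)
    # Thresholding at the 5th percentile of ID scores keeps 95% ID recall.
    thresh = np.quantile(id_scores, 0.05)
    # False positives: OOD samples accepted as in-distribution.
    return float(np.mean(ood_scores >= thresh))
```

A lower value is better; the paper's claim is that SpecGap reduces this quantity by 15.40% on average relative to the prior best method.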
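The recurrence in equations (9)-(13) of the Pseudocode row is the standard Lanczos iteration for a symmetric matrix such as a graph Laplacian. A minimal NumPy sketch under that reading (function name, seeding, and breakdown tolerance are our choices; a production version would add reorthogonalization):

```python
import numpy as np

def lanczos(L, k, rng=None):
    """Run k Lanczos steps on symmetric L, returning the tridiagonal
    coefficients (alphas, betas) whose eigenvalues approximate those of L."""
    rng = np.random.default_rng(rng)
    n = L.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)                 # random unit start vector q_1
    q_prev, beta_prev = np.zeros(n), 0.0
    alphas, betas = [], []
    for _ in range(k):
        w = L @ q - beta_prev * q_prev     # (9)
        alpha = w @ q                      # (10)
        w = w - alpha * q                  # (11)
        beta = np.linalg.norm(w)           # (12)
        alphas.append(alpha)
        if beta < 1e-12:                   # breakdown: invariant subspace found
            break
        betas.append(beta)
        q_prev, beta_prev = q, beta
        q = w / beta                       # (13)
    return np.array(alphas), np.array(betas)
```

With k equal to the matrix dimension, the tridiagonal matrix built from (alphas, betas) shares the Laplacian's spectrum, which is what makes the iteration useful for estimating eigenvalue gaps cheaply.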