Dynamic Spectral Graph Anomaly Detection

Authors: Jianbo Zheng, Chao Yang, Tairui Zhang, Longbing Cao, Bin Jiang, Xuhui Fan, Xiao-ming Wu, Xianxun Zhu

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results on four datasets substantiate the efficacy of our DSGAD method, surpassing state-of-the-art methods on both homogeneous and heterogeneous graphs.
Researcher Affiliation Academia 1 College of Computer Science and Electronic Engineering, Hunan University, China; 2 School of Computing, Macquarie University, Australia; 3 School of Computer Science and Engineering, Sun Yat-sen University, China; 4 School of Communication and Information Engineering, Shanghai University
Pseudocode No The paper describes the methodology using textual explanations and figures (e.g., Figure 1, Figure 2), but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Code https://github.com/IWantBe/Dynamic-Spectral-Graph-Anomaly-Detection
Open Datasets Yes Our experiments use four datasets: T-finance (Tang et al. 2022), Tolokers (Likhobaba, Pavlichenko, and Ustalov 2023), YelpChi (Rayana and Akoglu 2015), and Amazon (McAuley and Leskovec 2013), as detailed in Table 1.
Dataset Splits Yes In this paper, to ensure fairness, the ratio of training set/validation set/test set for all methods is fixed at 0.4/0.3/0.3.
Hardware Specification Yes All methods are executed on a cloud server virtual machine equipped with 8 vCPUs (32 GB RAM) and one NVIDIA T4 Tensor Core GPU.
Software Dependencies Yes Our method leverages the Deep Graph Library (DGL 2.0.0) within PyTorch 2.2.1 with CUDA 11.8.
Experiment Setup Yes All methods are trained using the Adam optimizer with a learning rate of 0.01 for 100 epochs. Each method is executed 10 times, with the model's performance evaluated based on the mean and standard deviation of the evaluation metrics at the 100-th epoch. The parameter C, crucial for determining the number of wavelets, is set to 2. Hidden layers in all methods are set to 64 dimensions. The Conv1D layer has a convolutional kernel size of 3 and a stride of 1.
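The reported setup can be sketched in PyTorch. This is a hedged illustration, not the authors' DSGAD code: the data and the small stand-in model are placeholders, and only the stated settings (Adam with lr=0.01 for 100 epochs, 64-dimensional hidden layers, a Conv1D with kernel size 3 and stride 1, and the fixed 0.4/0.3/0.3 split) come from the report; the DGL graph components are omitted.

```python
# Sketch of the reported experiment configuration. The features, labels,
# and model architecture below are placeholders; only the hyperparameters
# are taken from the reproducibility report.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder node features and binary anomaly labels.
n_nodes, n_feats = 200, 16
x = torch.randn(n_nodes, n_feats)
y = torch.randint(0, 2, (n_nodes,))

# Fixed 0.4/0.3/0.3 train/validation/test split, as stated in the report.
perm = torch.randperm(n_nodes)
n_train, n_val = int(0.4 * n_nodes), int(0.3 * n_nodes)
train_idx = perm[:n_train]
val_idx = perm[n_train:n_train + n_val]
test_idx = perm[n_train + n_val:]

# Stand-in model: 64-dimensional hidden layer and a Conv1d with
# kernel size 3 and stride 1 (padding added to keep the length at 64).
model = nn.Sequential(
    nn.Linear(n_feats, 64),
    nn.ReLU(),
    nn.Unflatten(1, (1, 64)),  # (N, 64) -> (N, 1, 64) for Conv1d
    nn.Conv1d(1, 1, kernel_size=3, stride=1, padding=1),
    nn.Flatten(),              # (N, 1, 64) -> (N, 64)
    nn.Linear(64, 2),
)

# Adam optimizer with learning rate 0.01 for 100 epochs, as reported.
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x[train_idx]), y[train_idx])
    loss.backward()
    opt.step()

# Evaluate at the 100-th epoch (the report averages this over 10 runs).
with torch.no_grad():
    test_acc = (model(x[test_idx]).argmax(1) == y[test_idx]).float().mean()
```

In the paper this loop is repeated 10 times and the evaluation metrics at the final epoch are summarized by their mean and standard deviation.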