Self-Discriminative Modeling for Anomalous Graph Detection
Authors: Jinyu Cai, Yunhe Zhang, Jicong Fan
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 12 different graph benchmarks demonstrated that the three variants of SDM consistently outperform the state-of-the-art GLAD baselines. |
| Researcher Affiliation | Academia | 1Institute of Data Science, National University of Singapore, Singapore 2Department of Computer and Information Science, SKL-IOTSC, University of Macau, Macau, China 3School of Data Science, The Chinese University of Hong Kong, Shenzhen, China. |
| Pseudocode | Yes | To facilitate the understanding of the training procedure of the proposed SDM methods, we provide the detailed algorithms of three SDM variants in Algorithms 1, 2, and 3. |
| Open Source Code | No | The information is insufficient. The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | All the datasets used in our experiment are sourced from TUDataset (Morris et al., 2020), a publicly available graph database2. For more details for each dataset, please refer to Appendix B. 2https://chrsmrrs.github.io/datasets/docs/datasets/ |
| Dataset Splits | Yes | Data Split: For small and moderate-scale datasets, we allocate 80% of the data from the normal class for training, and subsequently construct the testing data by combining the remaining normal data with an equal or smaller number of anomalous data samples. For large-scale imbalanced datasets, we allocate 80% of the data in the normal class as the training set, and form the test set with the rest of the normal data and all the abnormal data. |
| Hardware Specification | Yes | Implementation: We leverage PyTorch Geometric (Fey & Lenssen, 2019) for implementation, and all experiments are executed on an NVIDIA Tesla A100 GPU with an AMD EPYC 7532 CPU. |
| Software Dependencies | No | The paper mentions 'PyTorch Geometric (Fey & Lenssen, 2019)' for implementation and refers to 'RMSprop (Tieleman et al., 2012)' and 'Adam (Kingma & Ba, 2014)' as optimizers, but does not provide specific version numbers for these software components or the programming language/environment used (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | Training Details: For small-scale graph datasets, the grid search strategy is utilized to find the optimal performance, where the coefficients (λ and γ) vary in {0.1, 1, 10}, and the batch size varies in {4, 8, 16}, while we increase the batch size to 256 to accommodate the requirement of experiment on larger-scale datasets. We utilize RMSprop (Tieleman et al., 2012) as the optimizer for SDM-ATI and SDM-ATII, and Adam (Kingma & Ba, 2014) for SDM-NAT during training. Besides, we set the learning rate ρ to 0.001 with the total training epochs to 300. |
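The data-split protocol quoted above (80% of the normal class for training; the test set combining held-out normal graphs with anomalies, capped at an equal count for small/moderate datasets but uncapped for large imbalanced ones) can be sketched as follows. This is a minimal illustration of the described protocol, not code from the paper; the function name, signature, and use of integer graph IDs are all assumptions.

```python
import random

def split_glad_dataset(normal, anomalous, train_ratio=0.8,
                       large_scale=False, seed=0):
    """Illustrative split for one-class graph-level anomaly detection.

    - Training set: `train_ratio` of the normal class (paper uses 80%).
    - Test set: remaining normal graphs plus anomalous graphs.
      For small/moderate datasets the anomaly count is capped at the
      number of held-out normals; for large imbalanced datasets all
      anomalies are used. (Hypothetical helper, not from the paper.)
    """
    rng = random.Random(seed)
    normal = normal[:]          # copy so the caller's list is untouched
    rng.shuffle(normal)
    n_train = int(len(normal) * train_ratio)
    train = normal[:n_train]
    test_normal = normal[n_train:]
    if large_scale:
        test_anomalous = anomalous[:]                      # keep all anomalies
    else:
        test_anomalous = anomalous[:len(test_normal)]      # equal or smaller
    test = test_normal + test_anomalous
    labels = [0] * len(test_normal) + [1] * len(test_anomalous)
    return train, test, labels

# Toy usage: 100 normal and 30 anomalous graph IDs.
train, test, labels = split_glad_dataset(list(range(100)),
                                         list(range(100, 130)))
```

With 100 normal graphs, 80 go to training and 20 remain for testing, so the small-scale branch caps the anomalies at 20, yielding a balanced 40-graph test set.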