Self-supervised Adversarial Purification for Graph Neural Networks
Authors: Woohyun Lee, Hogun Park
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments across diverse datasets and attack scenarios demonstrate the state-of-the-art robustness of GPR-GAE, showcasing it as an independent plug-and-play purifier for GNN classifiers. |
| Researcher Affiliation | Academia | 1Department of Computer Science and Engineering, Sungkyunkwan University, Suwon, South Korea. |
| Pseudocode | Yes | E. Algorithms Algorithm 1 Training of GPR-GAE Algorithm 2 Multi-Step Purification with GPR-GAE |
| Open Source Code | Yes | Our code can be found in https://github.com/woodavid31/GPR-GAE. |
| Open Datasets | Yes | We conducted experiments on various datasets including Cora, Cora ML, Citeseer (Bojchevski & Günnemann, 2018), Pubmed (Sen et al., 2008), OGB-arXiv (Hu et al., 2020), and Chameleon with removed duplicates (Platonov et al., 2023). |
| Dataset Splits | Yes | We use an inductive split with 20 labeled nodes per class for train and validation, a stratified test set of 10% of nodes, and the remaining nodes as unlabeled training data. For Chameleon and OGB-arXiv, we use their provided splits with fully labeled training sets. |
| Hardware Specification | Yes | All experiments are conducted on an NVIDIA A100 (80GB) GPU. However, it is worth noting that GPR-GAE can be trained and applied to datasets, including OGB-arXiv, using an NVIDIA RTX A5000 (24GB). |
| Software Dependencies | No | The paper describes the models used and their configurations (e.g., Two-layer GCN, MLP with 64 hidden units), but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | GPR-GAE is trained using the Adam optimizer with a learning rate of 0.01 and a weight decay of 0.0001. Training is conducted for 2000 epochs. ... When training the classifiers, a maximum of 3000 epochs is used for training, using the Adam optimizer with a learning rate of 0.01, weight decay of 0.001, and tanhMargin loss. An early stop method is used with a patience of 200 epochs. |
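The classifier training quoted above stops early with a patience of 200 epochs. A minimal, framework-agnostic sketch of that patience logic (the `val_loss_at` callable is a hypothetical stand-in for one epoch of training plus validation; it is not part of the authors' code):

```python
def train_with_early_stopping(val_loss_at, patience=200, max_epochs=3000):
    """Return the epoch at which training stops.

    val_loss_at: callable mapping an epoch index to a validation loss
    (a hypothetical stand-in for a real train/validate step).
    Stops once the validation loss has not improved for `patience`
    consecutive epochs, or after `max_epochs` epochs.
    """
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        loss = val_loss_at(epoch)
        if loss < best_loss:
            best_loss = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch  # no improvement for `patience` epochs
    return max_epochs - 1  # ran to the epoch budget
```

For example, a loss that improves for 100 epochs and then plateaus triggers the stop exactly `patience` epochs after the last improvement.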