Provably Robust Explainable Graph Neural Networks against Graph Perturbation Attacks
Authors: Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, Binghui Wang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluation results on multiple graph datasets and GNN explainers show the effectiveness of XGNNCert. |
| Researcher Affiliation | Academia | Jiate Li1, Meng Pang2, Yun Dong3, Jinyuan Jia4, Binghui Wang1 — 1Illinois Institute of Technology, USA; 2Nanchang University, China; 3Milwaukee School of Engineering, USA; 4The Pennsylvania State University, USA |
| Pseudocode | Yes | B PSEUDO CODE ON XGNNCERT Here we provide the pseudo code of our XGNNCert, shown in Algorithm 1. |
| Open Source Code | Yes | Source code is available at https://github.com/JetRichardLee/XGNNCert. |
| Open Datasets | Yes | As suggested by (Agarwal et al., 2023), we choose datasets with groundtruth explanations for evaluation. We adopt the synthetic dataset "SG-Motif", where each graph has a label and the "Motif" is the groundtruth explanation, which can be "House", "Diamond", or "Wheel". We also adopt two real-world graph datasets (i.e., Benzene and FC) with groundtruth explanations from Agarwal et al. (2023). |
| Dataset Splits | Yes | For each dataset, we randomly sample 70% graphs for training, 10% for validation, and use the remaining 20% graphs for testing. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper mentions implementing explainers and classifiers using publicly available source code and specific hyperparameters for GNN explainers (PGExplainer, GSAT, Refine) and GNN classifiers (GCN, GSAGE, GIN), but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Our base GNN classifiers are all 3-layer architectures with 128 hidden neurons, a learning rate of 0.001, and 1000 epochs. For base GNN explainers, we use the hyperparameters configured in their source code: hidden size 64, size coefficient 0.0001, entropy coefficient 0.001, learning rate 0.01, and 20 epochs. We set λ = 3, p = 0.3, T = 70, γ = 0.3, and k as listed in Table 5. |
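The split protocol and hyperparameters quoted above can be sketched as follows. This is an illustrative reconstruction, not the authors' code (their actual implementation is in the linked repository); the `split_graphs` helper and the `CONFIG` dictionary names are assumptions introduced here for clarity.

```python
import random

def split_graphs(graphs, seed=0):
    """Randomly split graphs 70% train / 10% val / 20% test,
    matching the protocol quoted in the Dataset Splits row.
    Hypothetical helper; the paper's repo has the real split code."""
    rng = random.Random(seed)
    idx = list(range(len(graphs)))
    rng.shuffle(idx)
    n_train = int(0.7 * len(graphs))
    n_val = int(0.1 * len(graphs))
    train = [graphs[i] for i in idx[:n_train]]
    val = [graphs[i] for i in idx[n_train:n_train + n_val]]
    test = [graphs[i] for i in idx[n_train + n_val:]]
    return train, val, test

# Hyperparameters as quoted in the Experiment Setup row; key names are
# illustrative, chosen here to group the reported values.
CONFIG = {
    "classifier": {"layers": 3, "hidden": 128, "lr": 0.001, "epochs": 1000},
    "explainer": {"hidden": 64, "size_coef": 0.0001,
                  "entropy_coef": 0.001, "lr": 0.01, "epochs": 20},
    "xgnncert": {"lambda": 3, "p": 0.3, "T": 70, "gamma": 0.3},  # k per Table 5
}
```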