Robustness Inspired Graph Backdoor Defense

Authors: Zhiwei Zhang, Minhua Lin, Junjie Xu, Zongyu Wu, Enyan Dai, Suhang Wang

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on real-world datasets show that our framework can effectively identify poisoned nodes, significantly degrade the attack success rate, and maintain clean accuracy when defending against various types of graph backdoor attacks with different properties."
Researcher Affiliation | Academia | "Zhiwei Zhang, Minhua Lin, Junjie Xu, Zongyu Wu, Enyan Dai, Suhang Wang — The Pennsylvania State University"
Pseudocode | Yes | "Algorithm 1: Algorithm of RIGBD"
Open Source Code | Yes | "Our code is available at: github.com/zzwjames/RIGBD."
Open Datasets | Yes | "We conduct experiments on six benchmark datasets widely used for node classification, i.e., Cora, Citeseer, Pubmed (Sen et al., 2008), Physics (Sinha et al., 2015), Flickr (Zeng et al., 2019) and OGB-arxiv (Hu et al., 2020)."
Dataset Splits | Yes | "Following existing representative graph backdoor attacks (Dai et al., 2023; Zhang et al., 2024), we split the graph into two disjoint subgraphs, GT and GU, with an 80:20 ratio."
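The 80:20 node split into two disjoint subgraphs GT and GU can be sketched as below. This is a minimal illustration only; the function and variable names are hypothetical and not taken from the authors' released code.

```python
import random

def split_nodes(num_nodes, ratio=0.8, seed=0):
    """Partition node indices into two disjoint sets (GT, GU) at the given ratio.

    Hypothetical helper mirroring the 80:20 split described in the paper;
    the actual implementation in the RIGBD repository may differ.
    """
    rng = random.Random(seed)
    idx = list(range(num_nodes))
    rng.shuffle(idx)                    # random permutation of node indices
    cut = int(ratio * num_nodes)        # 80% boundary
    return idx[:cut], idx[cut:]         # GT nodes, GU nodes
```

Each node lands in exactly one of the two subgraphs, so GT and GU are disjoint by construction.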
Hardware Specification | Yes | "All models are trained on an A6000 GPU with 48G memory."
Software Dependencies | No | The paper mentions PyTorch's scatter function but does not specify its version or any other software dependencies with version numbers required for reproduction.
Experiment Setup | Yes | "The model architecture is a 2-layer GCN (Kipf & Welling, 2016). The number of iterations for random edge dropping is set to K = 20, with a drop ratio of β = 0.5. All hyperparameters of all methods are tuned based on the validation set for fair comparison."
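The random edge dropping in the setup above (K = 20 iterations, drop ratio β = 0.5) can be sketched as follows. This is a minimal illustration under the assumption that each edge is dropped independently with probability β in each iteration; all names here are hypothetical, not from the paper's code.

```python
import random

def drop_edges(edges, beta=0.5, rng=None):
    """Keep each edge independently with probability 1 - beta (drop ratio beta)."""
    rng = rng or random.Random()
    return [e for e in edges if rng.random() >= beta]

def random_drop_views(edges, K=20, beta=0.5, seed=0):
    """Generate K independently edge-dropped views of the graph,
    mirroring the K = 20 iterations with beta = 0.5 in the setup."""
    rng = random.Random(seed)
    return [drop_edges(edges, beta, rng) for _ in range(K)]
```

In expectation each view retains about half of the edges when β = 0.5, and the K views can then be fed to the model to measure prediction variance under perturbation.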