Backdoor Attack on Vertical Federated Graph Neural Network Learning

Authors: Jirui Yang, Peng Chen, Zhihui Lu, Jianping Zeng, Qiang Duan, Xin Du, Ruijun Deng

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that BVG achieves nearly 100% attack success rates across three commonly used datasets and three GNN models, with minimal impact on main-task accuracy. We also evaluated various defense methods, and BVG maintained high attack effectiveness even under existing defenses.
Researcher Affiliation | Academia | Fudan University, China; Nanjing University of Information Science and Technology, China; Pennsylvania State University, USA; Zhejiang University, China
Pseudocode | Yes | Algorithm 1: Multi-hop Trigger Generation
Open Source Code | Yes | Our implementation is publicly available at https://github.com/yangjr01/VFGNN_backdoor.
Open Datasets | Yes | This paper uses three widely adopted public datasets to evaluate the performance of BVG: Cora [McCallum et al., 2000], Cora-ML [McCallum et al., 2000], and PubMed [Sen et al., 2008]. The basic dataset statistics are summarized in Table 1.
Dataset Splits | Yes | In inductive node classification tasks, only a subset VL of the nodes in the training graph have labels YL = {y1, ..., yNL}. The test nodes VT are disjoint from the training nodes. [...] We randomly split the edges of the graphs into equal parts, one for each participant, with no edge shared between any two participants. [...] We assume the adversary knows only four target-class nodes (i.e., |Vp| = 4), which are randomly selected from the training set.
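The edge partitioning described in the split above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' code: the function name `split_edges`, the toy edge list, and the fixed seed are all hypothetical.

```python
import random

def split_edges(edges, num_participants, seed=0):
    """Randomly partition an edge list into disjoint, (nearly) equal
    parts, one per participant, so no edge is shared between any two
    participants. Hypothetical helper, not from the paper's repo."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = list(edges)
    rng.shuffle(shuffled)
    # Deal shuffled edges round-robin: participant i gets every
    # num_participants-th edge starting at offset i.
    return [shuffled[i::num_participants] for i in range(num_participants)]

# Toy graph: a 6-cycle split between 2 participants.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
parts = split_edges(edges, num_participants=2)
```

Round-robin dealing after a shuffle keeps the parts within one edge of equal size while guaranteeing disjointness, matching the "equal parts, no overlap" condition in the split description.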
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, memory amounts, or other machine specifications) used for running its experiments.
Software Dependencies | No | The paper mentions that 'VFGNN is trained using Adam' but does not list ancillary software with version numbers (e.g., Python 3.8, PyTorch 1.9, or CUDA 11.1) needed to replicate the experiment.
Experiment Setup | Yes | To ensure a fair comparison with previous studies [Chen et al., 2022], each participant's local GNN is a two-layer model that extracts local node embeddings of dimension 16. The number of hidden units is fixed at 32. For GCN and GAT, the activation function is ReLU. VFGNN is trained using Adam with a learning rate of 0.01.
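The reported hyperparameters can be collected into a single configuration mapping. This is an illustrative sketch only: the dict name and key names are hypothetical, while the values mirror the figures quoted in the setup row above.

```python
# Hypothetical configuration mirroring the reported experiment setup;
# key names are illustrative, values are as quoted in the paper's setup.
VFGNN_CONFIG = {
    "local_gnn_layers": 2,    # two-layer local GNN per participant
    "embedding_dim": 16,      # local node-embedding dimension
    "hidden_units": 32,       # hidden-layer width
    "activation": "relu",     # activation for GCN and GAT
    "optimizer": "adam",      # optimizer used to train VFGNN
    "learning_rate": 0.01,    # Adam learning rate
}
```

Centralizing these values in one mapping makes it easy to log the exact setup alongside results, which is precisely the kind of detail a replication attempt needs.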