Adversarial Robust Generalization of Graph Neural Networks

Authors: Chang Cao, Han Li, Yulong Wang, Rui Wu, Hong Chen

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results on benchmark datasets provide evidence that supports the established theoretical findings. 5. Experiments: In this section, we propose an adversarial training algorithm to learn robust GNNs based on our theoretical findings, and validate our theoretical results by evaluating the effect of several factors. Experimental Setup: We adopt six benchmark datasets provided by PyTorch Geometric, including Citeseer, Cora, Pubmed, DBLP, CS, and CoraFull (see Table 2 for more details).
Researcher Affiliation | Collaboration | 1 College of Informatics, Huazhong Agricultural University, Wuhan 430070, China; 2 Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, China; 3 Horizon Robotics, Haidian District, Beijing 100190, China. Correspondence to: Han Li <EMAIL>.
Pseudocode | Yes | Algorithm 1: Train a robust graph model
1: Input: graph G, dataset S, perturbation budget θ, regularization parameter λ, initialization W_0, learning rate η, number of iterations T.
2: while t < T do
3:   Initialize S̃_t ← ∅.
4:   for i = 1, 2, ..., n do
5:     For the input matrix X_t = [x_{1,t}, ..., x_{n,t}], perturb X_t ← X_t + A(X_t, A, θ).
6:     For each node in X̃_t = [x̃_{1,t}, ..., x̃_{n,t}], append {(x̃_{i,t}, y_{i,t})}_{i=1}^{n} to S̃_t and randomly choose m samples for the training set S̃_{m,t}.
7:   end for
8:   Define a new objective L(W_{i,t}) = (1/m) Σ_{X_{i,t} ∈ S̃_{m,t}} ℓ(f_{i,t}(A, X̃, W), y_{i,t}) + λ‖W_{i,t}‖.
9:   For all i ∈ [m], update W_t using SGD: W_{i,t+1} ← W_{i,t} − η∇L(W_{i,t}).
10: end while
11: Output: W_T
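The perturb-then-train loop of Algorithm 1 can be sketched in plain Python. The sketch below is illustrative only: it substitutes a toy linear model for the paper's GNN f(A, X, W), a random bounded perturbation for the attack A(X, A, θ), and squared loss for cross-entropy; the function and parameter names are assumptions, not the authors' code.

```python
import random

def perturb(x, theta):
    """Illustrative stand-in for the attack A(X, A, theta):
    add a random perturbation with each entry bounded by theta."""
    return [xi + random.uniform(-theta, theta) for xi in x]

def train_robust(samples, theta=0.1, lam=0.1, eta=0.05, T=200, m=4):
    """Adversarial training of a toy linear model f(x) = w . x.

    samples: list of (feature_list, label) pairs.
    Each iteration perturbs all inputs, subsamples m of them, and takes
    one SGD step on squared loss plus an L2 penalty lam * ||w||^2.
    """
    d = len(samples[0][0])
    w = [0.0] * d
    for _ in range(T):
        # Lines 4-7 of Algorithm 1: perturb every input, then pick m samples.
        perturbed = [(perturb(x, theta), y) for x, y in samples]
        batch = random.sample(perturbed, min(m, len(perturbed)))
        # Lines 8-9: gradient of the regularized objective, then an SGD step.
        grad = [2.0 * lam * wi for wi in w]
        for x, y in batch:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            for i in range(d):
                grad[i] += 2.0 * err * x[i] / len(batch)
        w = [wi - eta * gi for wi, gi in zip(w, grad)]
    return w
```

On a toy two-sample problem, e.g. `train_robust([([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)])`, the learned weights settle near the regularized least-squares solution despite the injected perturbations.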
Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. It mentions using the PyTorch Geometric library, but provides no code for its specific method.
Open Datasets | Yes | We adopt six benchmark datasets provided by PyTorch Geometric, including Citeseer, Cora, Pubmed, DBLP, CS, and CoraFull (see Table 2 for more details).
Dataset Splits | Yes | Table 2. Details of the adopted datasets.
Dataset  | Nodes | Edges  | Features | Classes | Training     | Validation   | Test
Citeseer | 3327  | 9104   | 3703     | 6       | 20 per class | 500          | 1000
Cora     | 2708  | 10556  | 1433     | 7       | 20 per class | 500          | 1000
Pubmed   | 19717 | 88648  | 500      | 3       | 20 per class | 500          | 1000
DBLP     | 17716 | 105734 | 1639     | 4       | 20 per class | 30 per class | Rest
CS       | 18333 | 163788 | 6805     | 15      | 20 per class | 30 per class | Rest
CoraFull | 19793 | 126842 | 8710     | 70      | 20 per class | 30 per class | Rest
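As a quick sanity check on these splits, the number of labeled training nodes implied by the "20 per class" rule can be computed directly from the class counts. Class counts below follow the standard PyTorch Geometric benchmarks (Cora has 7 classes); the dictionary and function names are illustrative.

```python
# Classes per dataset (standard PyTorch Geometric benchmarks).
classes = {
    "Citeseer": 6,
    "Cora": 7,       # 7 classes in the standard Cora benchmark
    "Pubmed": 3,
    "DBLP": 4,
    "CS": 15,
    "CoraFull": 70,
}

def training_nodes(num_classes, per_class=20):
    """Labeled training-set size under the '20 per class' split."""
    return num_classes * per_class

sizes = {name: training_nodes(c) for name, c in classes.items()}
# e.g. Citeseer trains on 6 * 20 = 120 labeled nodes,
# CoraFull on 70 * 20 = 1400.
```

This makes explicit how small the labeled training sets are relative to the graphs themselves (120 of Citeseer's 3327 nodes, 1400 of CoraFull's 19793).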
Hardware Specification | Yes | Experiments are implemented on a GeForce RTX 3080 GPU.
Software Dependencies | No | The paper mentions PyTorch Geometric (and, implicitly, PyTorch via its datasets), but does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | We set the number of training iterations T to 200 and use cross-entropy loss for training. SGD is adopted for optimization with its learning rate η set to 0.05 and a weight decay of 1e-3. The regularization parameter λ is fixed to 0.1. Unless otherwise indicated, we adopt a two-layer GNN with the ELU activation in each layer and a log-softmax activation for the output, where the number of hidden units is fixed to 64. γ and K in APPNP are set to 0.5 and 10, respectively. α in GCNII is set to 0.1 and β = log(ξ/L + 1), where L is the number of layers and ξ is fixed to 1. Notably, the learning rate for CoraFull is set to 0.2 with a weight decay of 1e-4.
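For reproduction, the reported setup can be collected into a single configuration. Only the values come from the quoted text; the key names and the `gcnii_beta` helper are illustrative conventions, not the authors' code.

```python
import math

# Hyperparameters as quoted above; key names are illustrative.
config = {
    "iterations": 200,
    "loss": "cross_entropy",
    "optimizer": "SGD",
    "learning_rate": 0.05,            # 0.2 for CoraFull
    "weight_decay": 1e-3,             # 1e-4 for CoraFull
    "lambda_reg": 0.1,
    "layers": 2,
    "hidden_units": 64,
    "activation": "ELU",
    "output_activation": "log_softmax",
    "appnp": {"gamma": 0.5, "K": 10},
    "gcnii_alpha": 0.1,
}

def gcnii_beta(num_layers, xi=1.0):
    """GCNII's beta = log(xi / L + 1), with L layers and xi fixed to 1."""
    return math.log(xi / num_layers + 1.0)
```

For the two-layer network used here, β = log(1/2 + 1) = log(1.5) ≈ 0.405.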