Adversarial Training for Graph Convolutional Networks: Stability and Generalization Analysis
Authors: Chang Cao, Han Li, Yulong Wang, Rui Wu, Hong Chen
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on benchmark datasets confirm the validity of our theoretical findings, highlighting their practical significance. In this section, we perform experiments on the node classification task to evaluate the effect of these factors on adversarial generalization. |
| Researcher Affiliation | Collaboration | (1) College of Informatics, Huazhong Agricultural University, Wuhan 430070, China; (2) Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, Wuhan 430070, China; (3) Horizon Robotics, Haidian District, Beijing 100190, China |
| Pseudocode | Yes | Algorithm 1 Train a robust graph model under node attacks |
| Open Source Code | No | The paper does not explicitly state that the authors are releasing their code or provide a link to a code repository. |
| Open Datasets | Yes | We adopt several widely-used benchmark datasets, including Cora, Citeseer, Pubmed, DBLP, CS, and Cora Full [Yang et al., 2016; Bojchevski and Günnemann, 2017; Xue et al., 2021b]. An overview is given in Table 2. |
| Dataset Splits | Yes | Table 2: Details of datasets (Nodes / Edges / Features / Classes / Training / Validation / Test). Citeseer: 3327 / 9104 / 3703 / 6 / 20 per class / 500 / 1000. Cora: 2708 / 10556 / 1433 / 7 / 20 per class / 500 / 1000. Pubmed: 19717 / 88648 / 500 / 3 / 20 per class / 500 / 1000. DBLP: 17716 / 105734 / 1639 / 4 / 20 per class / 30 per class / Rest. CS: 18333 / 163788 / 6805 / 15 / 20 per class / 30 per class / Rest. Cora Full: 19793 / 126842 / 8710 / 70 / 20 per class / 30 per class / Rest. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using "the SGD algorithm" and "cross-entropy loss" but does not specify any software libraries or their version numbers. |
| Experiment Setup | Yes | Adversarial training is conducted with the ℓ-PGD algorithm under perturbation budget ϵx. We choose the cross-entropy loss and the SGD algorithm for training, with learning rate η = 0.1 and momentum 0.9. The regularization coefficient λ is fixed at 0.01. |
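The reported setup (PGD-generated feature perturbations in the inner loop, SGD with lr 0.1, momentum 0.9, and regularization λ = 0.01 in the outer loop) can be sketched on a toy one-layer graph convolution. This is a minimal NumPy illustration, not the authors' implementation: the graph, features, labels, PGD step size, and step counts are all made up, and the ℓ∞ ball is an assumption since the paper's norm subscript is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-in for a GCN layer: logits = A_hat @ X @ W
# on a small synthetic graph with random features and labels.
n, d, c = 8, 4, 2
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T) + np.eye(n)              # symmetrize, add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt             # normalized adjacency
X = rng.standard_normal((n, d))
y = rng.integers(0, c, size=n)
W = 0.01 * rng.standard_normal((d, c))

def loss_and_grads(X_in, W):
    """Cross-entropy of A_hat @ X_in @ W; gradients w.r.t. W and X_in."""
    logits = A_hat @ X_in @ W
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(n), y] + 1e-12).mean()
    g = p.copy()
    g[np.arange(n), y] -= 1.0
    g /= n
    gW = (A_hat @ X_in).T @ g                   # gradient w.r.t. weights
    gX = A_hat.T @ g @ W.T                      # gradient w.r.t. node features
    return loss, gW, gX

eps_x, alpha = 0.1, 0.05                        # budget and PGD step (assumed values)
eta, mom, lam = 0.1, 0.9, 0.01                  # lr, momentum, lambda from the paper
v = np.zeros_like(W)

for _ in range(50):
    # Inner maximization: PGD on node features, projected onto an
    # l_inf ball of radius eps_x (the norm choice is an assumption).
    delta = np.zeros_like(X)
    for _ in range(5):
        _, _, gX = loss_and_grads(X + delta, W)
        delta = np.clip(delta + alpha * np.sign(gX), -eps_x, eps_x)
    # Outer minimization: SGD with momentum and L2 regularization.
    _, gW, _ = loss_and_grads(X + delta, W)
    v = mom * v - eta * (gW + lam * W)
    W += v

clean_loss, _, _ = loss_and_grads(X, W)
```

The inner `np.clip` is the projection step of PGD, so the learned perturbation always respects the budget ϵx; the outer update mirrors the paper's reported optimizer settings.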