Towards Generalization Bounds of GCNs for Adversarially Robust Node Classification
Authors: Wen Wen, Han Li, Tieliang Gong, Hong Chen
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on benchmark datasets validate our theoretical findings. In this section, we evaluate the impact of some key quantities on the generalization performance of GCN in adversarial settings, such as feature dimension, regularizer, graph filters, the number of layers, etc. Extensive experimental results validate our theoretical findings in Sections 4 and 5. |
| Researcher Affiliation | Academia | 1College of Informatics, Huazhong Agricultural University 2Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education 3Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University 4School of Computer Science and Technology, Xi'an Jiaotong University EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods and proofs mathematically and textually, but does not include any explicitly labeled pseudocode or algorithm blocks with structured formatting. |
| Open Source Code | No | The paper does not provide any statement regarding the release of source code for the methodology described, nor does it include links to any code repositories. |
| Open Datasets | Yes | We adopt several widely-used benchmark datasets, including Citeseer, Cora, Pubmed, CS, Physics, and ogbn-arxiv (Sen et al., 2008; Yang et al., 2016; Hu et al., 2020). Statistics of the datasets are summarized in Table 1. |
| Dataset Splits | Yes | Statistics of the datasets are summarized in Table 1 (Classes / Nodes / Edges / Features / Training / Validation / Test): Citeseer 6 / 3,327 / 4,732 / 3,703 / 20 per class / 500 / 1000; Cora 7 / 2,708 / 5,429 / 1,433 / 20 per class / 500 / 1000; Pubmed 3 / 19,717 / 44,338 / 500 / 20 per class / 500 / 1000; CS 15 / 18,333 / 81,894 / 6,805 / 20 per class / 30 per class / Rest; Physics 5 / 34,493 / 247,962 / 8,415 / 20 per class / 30 per class / Rest; ogbn-arxiv 40 / 169,343 / 1,166,243 / 128 / 20 per class / 30 per class / Rest |
| Hardware Specification | Yes | The implementation runs on a GeForce RTX 3080 GPU. |
| Software Dependencies | No | The Adam optimizer (Kingma & Ba, 2015) with the learning rate 0.01 is used in the training process. This mentions an optimizer, but no specific software library or framework with version numbers (e.g., PyTorch, TensorFlow, or Python versions) is provided. |
| Experiment Setup | Yes | Unless otherwise specified, we apply a two-layer network architecture for GCN, SGC, GCNII, and Residual GCN (Kipf & Welling, 2017; Wu et al., 2019; Chen et al., 2020b), where the number of hidden units for each layer is fixed to 16 or 64. For GCNII, the parameter α is set by default to 0.5, and β is set to log(θ/l + 1), where θ = 0.1 and l is the number of layers. We use the ReLU function as the activation function. The Adam optimizer (Kingma & Ba, 2015) with learning rate 0.01 is used in the training process. The number of training iterations is fixed to 600. During training and testing, the adversarial nodes are generated by the ℓ-PGD algorithm (Madry et al., 2018) with step size ε/128, where adversarial perturbations are added to test nodes after training to avoid a biased evaluation through memorization in the transductive learning setting (Gosch et al., 2024). |
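The experiment-setup row describes generating adversarial node features with a projected gradient descent (PGD) attack using step size ε/128. The projection step of an ℓ∞-ball PGD attack can be sketched as below; this is a minimal numpy illustration under the standard Madry-style formulation, not the authors' code (the function name `pgd_linf` and the toy gradient are assumptions for illustration):

```python
import numpy as np

def pgd_linf(x0, grad_fn, eps, step, iters):
    """Sketch of an l-inf PGD attack: repeatedly ascend the loss via
    the sign of the gradient, then project back into the eps-ball
    centered at the clean features x0."""
    x = x0.copy()
    for _ in range(iters):
        x = x + step * np.sign(grad_fn(x))      # signed gradient ascent step
        x = np.clip(x, x0 - eps, x0 + eps)      # projection onto the l-inf ball
    return x

# Toy usage: with a loss whose gradient is all-ones, enough iterations
# of step size eps/128 (as in the paper) saturate the perturbation at +eps.
x0 = np.zeros(3)
eps = 0.1
x_adv = pgd_linf(x0, lambda x: np.ones_like(x), eps, eps / 128, 200)
```

In the paper's transductive setting, the same attack is applied to test-node features only after training, so the model cannot memorize the perturbed inputs.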