Adversarial Contrastive Graph Masked AutoEncoder Against Graph Structure and Feature Dual Attacks
Authors: Weixuan Shen, Xiaobo Shen, Shirui Pan
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on node classification and clustering tasks demonstrate the effectiveness of the proposed ACGMAE, especially under graph structure and feature dual attacks. |
| Researcher Affiliation | Academia | 1Nanjing University of Science and Technology, Nanjing, China 2Griffith University, Gold Coast, Australia |
| Pseudocode | Yes | Algorithm 1: Algorithm of ACGMAE |
| Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We evaluate ACGMAE and baselines on three benchmark datasets, i.e., Cora, Citeseer, Pubmed (Jin et al. 2020) |
| Dataset Splits | Yes | For node classification, we randomly select 10% nodes for training, 10% nodes for validation, and the remaining for testing. |
| Hardware Specification | Yes | The experiments are performed on an Ubuntu Enterprise 64-bit Linux workstation with 128 GB memory and an NVIDIA A6000 GPU server. |
| Software Dependencies | No | The paper mentions that a two-layer GCN is employed as the encoder, but it does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow, or specific libraries). |
| Experiment Setup | Yes | In the proposed ACGMAE, the learning rate and weight decay are searched from {0.01, 0.001, 0.0001} and {0.0001, 0.0005, 0.0001, 0.00005} respectively. The perturbation ratio X is searched from {0.1, 0.3, 0.5, 0.7, 0.9}, and the number of nearest neighbors and the number of clusters are searched from {10, 15, 20, 25, 30}. The coefficients α, β, and γ are searched from {0.01, 0.1, 0.5, 1, 3, 5}. |
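For a sense of scale, the search space quoted above can be enumerated with a minimal sketch (variable names are illustrative, not from the paper; the weight-decay grid is deduplicated, since 0.0001 appears twice in the quoted list, and the same coefficient grid is reused for α, β, and γ as described):

```python
from itertools import product

# Hyperparameter grids as reported for ACGMAE (names hypothetical).
learning_rates = [0.01, 0.001, 0.0001]
weight_decays = [0.0001, 0.0005, 0.00005]   # deduplicated from the quoted list
perturb_ratios = [0.1, 0.3, 0.5, 0.7, 0.9]
num_neighbors = [10, 15, 20, 25, 30]
num_clusters = [10, 15, 20, 25, 30]
coeffs = [0.01, 0.1, 0.5, 1, 3, 5]          # shared grid for alpha, beta, gamma

# Full Cartesian product of the grids: every configuration a grid search
# over these ranges would have to evaluate.
grid = list(product(learning_rates, weight_decays, perturb_ratios,
                    num_neighbors, num_clusters, coeffs, coeffs, coeffs))
print(len(grid))  # → 243000
```

In practice papers rarely evaluate the full product; each hyperparameter is typically tuned on the validation split while the others are held fixed, which keeps the number of runs linear rather than multiplicative in the grid sizes.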