Adversarial Contrastive Graph Augmentation with Counterfactual Regularization
Authors: Tao Long, Lei Zhang, Liang Zhang, Laizhong Cui
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our ACGA method through extensive experiments on representative benchmark datasets, and the results demonstrate that ACGA outperforms state-of-the-art baselines. |
| Researcher Affiliation | Academia | Tao Long1, Lei Zhang1, Liang Zhang2*, Laizhong Cui1. 1College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, China; 2Shenzhen Research Institute of Big Data, Shenzhen, Guangdong, China |
| Pseudocode | No | The paper describes methods in text, such as the instantiation of ACGA, but does not present them in structured pseudocode or an algorithm block. |
| Open Source Code | Yes | Code https://github.com/longtao-09/ACGA |
| Open Datasets | Yes | Datasets: We evaluate our method on a total of seven benchmark datasets across domains: citation networks (CORA, CITESEER, PUBMED (Sen et al. 2008)), social networks (BLOGCATALOG, FLICKR (Huang, Li, and Hu 2017)), air traffic (AIR USA (Wu et al. 2020b)), and co-authorship networks (Coauthor-CS (Shchur et al. 2018)). |
| Dataset Splits | Yes | Large datasets like PUBMED and Coauthor-CS lead to OOM (out-of-memory) errors for most baselines. Thus, for graphs with more than 10k nodes, we sampled subgraphs containing 60% of the nodes. |
| Hardware Specification | Yes | Our experimental device is an NVIDIA Tesla T4 16GB graphics card. |
| Software Dependencies | No | The paper mentions using Adam optimizer and GCN networks but does not provide specific version numbers for software libraries or frameworks like Python, PyTorch, or TensorFlow. |
| Experiment Setup | Yes | For all baselines and our models, the hidden layer dimension is set to 128 for node classification tasks and 32 for link prediction tasks, and the remaining parameters are kept at the original values from the authors' GitHub repositories. We use the Adam optimizer with the learning rate set to 0.01. |
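The experiment-setup row reports GCN encoders with a hidden dimension of 128 for node classification and the Adam optimizer at learning rate 0.01. As a rough illustration only (this is a hypothetical NumPy sketch, not the authors' released ACGA code; the toy graph, weights, and function name are invented for the example), one GCN propagation step with the stated hidden dimension looks like:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    a_hat = adj + np.eye(adj.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)  # symmetric normalization
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ feats @ weight, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
n_nodes, in_dim, hidden_dim = 5, 16, 128             # hidden_dim = 128 per the paper
adj = (rng.random((n_nodes, n_nodes)) > 0.5).astype(float)
adj = np.maximum(adj, adj.T)                         # make the toy graph undirected
x = rng.standard_normal((n_nodes, in_dim))
w = rng.standard_normal((in_dim, hidden_dim)) * 0.1  # in practice trained with Adam, lr=0.01

h = gcn_layer(adj, x, w)
print(h.shape)  # (5, 128)
```

In the actual experiments the weights would be optimized with Adam (lr 0.01) against the paper's contrastive objective; this sketch only shows the encoder's forward shape.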