Rationalizing and Augmenting Dynamic Graph Neural Networks
Authors: Guibin Zhang, Yiyan Qi, Ziyang Cheng, Yanwei Yue, Dawei Cheng, Jian Guo
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on six benchmarks and three GNN backbones demonstrate that DyAug can (I) improve the performance of dynamic GNNs by 0.89%-3.13%; (II) effectively counter targeted and non-targeted adversarial attacks with a 6.2%-12.2% performance boost; (III) make stable predictions under temporal distribution shifts. The source code is available at https://github.com/bingreeky/DyAug. |
| Researcher Affiliation | Collaboration | Guibin Zhang (1,2), Yiyan Qi (2), Ziyang Cheng (1), Yanwei Yue (1), Dawei Cheng (1), Jian Guo (2). (1) Tongji University, (2) International Digital Economy Academy (IDEA). Corresponding: EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods using mathematical equations and structured text, but it does not include a distinct section labeled 'Pseudocode' or 'Algorithm', nor does it present the steps in a formal, code-like algorithm block. |
| Open Source Code | Yes | The source code is available at https://github.com/bingreeky/DyAug. |
| Open Datasets | Yes | To thoroughly evaluate our proposed method, we select five real-world datasets. COLLAB (Tang et al., 2012) is an academic collaboration network spanning 16 years. Yelp (Sankar et al., 2020) is a business review dataset containing customer feedback on various businesses. Bitcoin (Kumar et al., 2018) is a trust network dataset representing users who engage in trading on the Bitcoin OTC platform. UCI (Panzarasa et al., 2009) is an online communication network from the University of California, Irvine, capturing student interactions. Lastly, ACT (Kumar et al., 2019) describes the actions taken by users on a popular MOOC website within 30 days. |
| Dataset Splits | Yes | To evaluate whether DyAug can defend against distribution shifts in dynamic graphs, we use COLLAB and Yelp, and explicitly construct distribution shifts. Specifically, for COLLAB, we transfer all edges belonging to the data mining category to the test set, ensuring that the DyGNN has never been exposed to this category during training. For Yelp, we select Pizza edges as the out-of-distribution data. |
| Hardware Specification | Yes | The results are reported on a single NVIDIA Tesla A100 40G GPU. |
| Software Dependencies | No | The paper mentions using GNN backbones like GCRN, DySAT, and SEIGN, and references an implementation of DySAT from 'Amazon-TGL (https://github.com/amazon-science/tgl)'. However, it does not specify version numbers for any programming languages, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | Hyperparameter Configurations: We set the number of layers to two for all baselines, with a hidden dimension of 128. Specifically, for DyAug, we fix τ = 1e-2 and ϖ = 2 across all datasets. For each node, we assign an equal probability of selecting one of the following strategies: (1) no replacement, using only the rationale embedding; (2) spatial replacement; (3) temporal replacement; or (4) spatial-temporal replacement. We provide an ablation study on the effectiveness of these replacement strategies in Appendix B.2. For the parameters α1 and α2 in Equation (13), we vary α1 ∈ {1e-2, 5e-2, 1e-1} and α2 ∈ {1e-4, 5e-4, 1e-3, 5e-3}. |
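The Experiment Setup row above describes two concrete mechanics: a per-node augmentation strategy drawn uniformly from four options, and a 3×4 grid search over α1 and α2 from Equation (13). A minimal sketch of both follows; the function name `sample_strategy` and the strategy identifiers are our own labels for the paper's four options, not names from the DyAug codebase.

```python
import random

# The four replacement strategies listed in the Experiment Setup row.
# Identifiers are illustrative labels, not names from the DyAug repo.
STRATEGIES = [
    "no_replacement",               # use only the rationale embedding
    "spatial_replacement",
    "temporal_replacement",
    "spatial_temporal_replacement",
]

def sample_strategy(rng: random.Random) -> str:
    """Pick one strategy per node, each with equal probability 1/4."""
    return rng.choice(STRATEGIES)

# Hyperparameter grid for alpha1 and alpha2 (Equation 13 in the paper):
# alpha1 in {1e-2, 5e-2, 1e-1}, alpha2 in {1e-4, 5e-4, 1e-3, 5e-3}.
ALPHA1_GRID = [1e-2, 5e-2, 1e-1]
ALPHA2_GRID = [1e-4, 5e-4, 1e-3, 5e-3]
GRID = [(a1, a2) for a1 in ALPHA1_GRID for a2 in ALPHA2_GRID]

if __name__ == "__main__":
    rng = random.Random(0)
    print(sample_strategy(rng))
    print(len(GRID))  # 3 * 4 = 12 configurations searched
```

This only reproduces the sampling and search-space description quoted from the paper; the actual embedding-replacement logic is in the released code at https://github.com/bingreeky/DyAug.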