DGExplainer: Explaining Dynamic Graph Neural Networks via Relevance Back-propagation

Authors: Yezi Liu, Jiaxuan Xie, Yanning Shen

IJCAI 2025

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | Quantitative and qualitative experiments on six real-world datasets demonstrate that DGExplainer effectively identifies critical nodes for link prediction and node regression tasks in dynamic GNNs.

Researcher Affiliation | Academia | University of California, Irvine (EMAIL, EMAIL)

Pseudocode | No | No explicit pseudocode or algorithm block (e.g., "Algorithm 1") is present in the provided text, although one is referenced.

Open Source Code | Yes | Appendix available at https://github.com/yezil3/DGExplainer_IJCAI/blob/main/IJCAI_appendix.pdf

Open Datasets | Yes | "Datasets. We evaluate the proposed framework on six real-world datasets. For the link prediction tasks, we use four datasets: Reddit Hyperlink (Reddit) [Kumar et al., 2018], Enron [Klimt and Yang, 2004], Facebook (FB) [Trivedi et al., 2019], and COLAB [Rahman and Al Hasan, 2016]. For the node regression tasks, we use two datasets: PeMS04 and PeMS08 [Guo et al., 2019] (pems.dot.ca.gov). The statistics of these datasets and the initial performance of GCN-GRU on them are presented in Appendix A.2."

Dataset Splits | No | The paper mentions using the datasets and evaluating performance, but does not explicitly provide training/validation/test splits (e.g., percentages or counts) in the provided text. It refers to the experimental setup of a previous work [Pareja et al., 2020] for evaluation, but not for data partitioning.

Hardware Specification | No | The paper does not describe the specific hardware (e.g., GPU models, CPU types, memory) used to run its experiments.

Software Dependencies | No | The paper does not list specific software dependencies, such as library names with version numbers, needed to replicate the experiments.

Experiment Setup | No | The paper states that "Implementation details are provided in Appendix A.4" but does not include specific hyperparameter values or detailed training configurations in the main text.