Explanations of GNN on Evolving Graphs via Axiomatic Layer Edges

Authors: Yazheng Liu, Sihong Xie

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on eight datasets for node classification, link prediction, and graph classification tasks with evolving graphs demonstrate the better fidelity and interpretability of the proposed method over the baseline methods. The code is available at https://github.com/yazhengliu/Axiomatic-Layer-Edges/tree/main.
Researcher Affiliation | Academia | Yazheng Liu, Sihong Xie; The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Selecting important layer edges to explain the evolution of Pr(Y|G0) to Pr(Y|G1) on the node classification task. Algorithm 2: Selecting important layer edges to explain the evolution of Pr(Y|G0) to Pr(Y|G1) on the link prediction task. Algorithm 3: Selecting important layer edges to explain the evolution of Pr(Y|G0) to Pr(Y|G1) on the graph classification task. Algorithm 4: Selecting important input edges to explain the evolution of Pr(Y|G0) to Pr(Y|G1) on the node classification and link prediction tasks. Algorithm 5: Selecting important input edges to explain the evolution of Pr(Y|G0) to Pr(Y|G1) on the graph classification task.
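The five algorithms above share a common pattern: score candidate (layer) edges by their contribution to the change from Pr(Y|G0) to Pr(Y|G1), then keep a small subset. A minimal, hypothetical sketch of that selection step follows; the function name, the edge-key format, and the greedy ranking are assumptions for illustration, not the paper's actual method (which solves convex programs via cvxpy):

```python
def select_important_edges(edge_scores, budget):
    """Keep the `budget` edges with the largest absolute contribution
    to the prediction change Pr(Y|G0) -> Pr(Y|G1).

    edge_scores: dict mapping an edge key, e.g. (layer, u, v), to a
    float score. Hypothetical interface: the paper formulates edge
    selection as an optimization problem rather than a greedy ranking.
    """
    ranked = sorted(edge_scores, key=lambda e: abs(edge_scores[e]), reverse=True)
    return ranked[:budget]


# Toy usage: three scored layer edges, budget of 2.
scores = {(1, "u", "v"): 0.5, (2, "v", "w"): -0.9, (1, "w", "x"): 0.1}
print(select_important_edges(scores, 2))  # -> [(2, 'v', 'w'), (1, 'u', 'v')]
```

Negative scores are kept by magnitude, since an edge can be important by pushing the prediction in either direction.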
Open Source Code | Yes | The code is available at https://github.com/yazhengliu/Axiomatic-Layer-Edges/tree/main.
Open Datasets | Yes | Node classification: Yelp Chi, Yelp NYC (Rayana & Akoglu, 2015), Pheme (Zubiaga et al., 2017), and Weibo (Ma et al., 2018). Link prediction: BC-OTC (http://snap.stanford.edu/data/soc-sign-bitcoin-otc.html), BC-Alpha (http://snap.stanford.edu/data/soc-sign-bitcoin-alpha.html), and UCI (http://konect.cc/networks/opsahl-ucsocial). Graph classification: MUTAG (Debnath et al., 1991), ClinTox (Gayvert et al., 2016), IMDB-BINARY, and REDDIT-BINARY. Dataset details are in Table 2.
Dataset Splits | No | For each dataset, the GNN parameter θ is optimized on the training set of static graphs, using labeled nodes, edges, or graphs depending on the task. The paper does not specify how the training, validation, and test sets were split (e.g., percentages, exact counts, or predefined splits with citations).
Hardware Specification | No | The paper reports running times on Pubmed, Coauthor-Computer, and Coauthor-Physics (Appendix A.7.7) but gives no details about the hardware used for those experiments or for the main results.
Software Dependencies | No | The optimization problems in Eq. (9), Eq. (15), and Eq. (16) are solved with the cvxpy library (Diamond & Boyd), but no version is given for cvxpy or for any other dependency, such as the GNN framework used.
Experiment Setup | Yes | During training, the learning rate is set to 0.01, the dropout rate to 0.2, and the hidden size to 16. The model is trained and then fixed during the prediction and explanation stages.
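The reported hyperparameters can be collected into a small configuration object. A sketch: the key names below are illustrative assumptions, while the values are exactly those stated in the paper's setup:

```python
# Hyperparameters quoted from the paper's experiment setup.
# Key names are illustrative; values come from the reported text.
TRAIN_CONFIG = {
    "learning_rate": 0.01,
    "dropout_rate": 0.2,
    "hidden_size": 16,
}

print(TRAIN_CONFIG["hidden_size"])  # -> 16
```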