Rule-Guided Graph Neural Networks for Explainable Knowledge Graph Reasoning

Authors: Zhe Wang, Suxue Ma, Kewen Wang, Zhiqiang Zhuang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental evaluation of knowledge graph reasoning tasks further demonstrates the effectiveness of our model. We have evaluated our approach on inductive link prediction tasks. We have also analysed the number and the quality of extracted rules. The experiments are designed to validate the following statements: (1) our GNN exhibits better performance compared to explainable methods in inductive link prediction over common benchmarks (ref. Table 3); (2) our rule-guidance approach is effective, evidenced by the quality (measured by standard confidence) of the extracted rules from our GNN (ref. Figure 4); (3) our rule encoding method is effective, shown by the high correlation between the encoded and decoded rules (ref. Figure 5).
Researcher Affiliation | Academia | ¹Griffith University, ²Tianjin University. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes methods and formulas but does not include a clearly labeled 'Pseudocode' or 'Algorithm' block or figure.
Open Source Code | Yes | All experiments were performed on an Intel(R) Xeon(R) machine with 2 NVIDIA GeForce RTX 4090 GPUs. The implementation of our approach can be found at https://github.com/bohemianc/rule-guided-gnns.
Open Datasets | Yes | We adopt commonly used benchmarks for inductive link prediction for our evaluation (Teru, Denis, and Hamilton 2020; Liu et al. 2021), including 8 datasets constructed from FB15K-237 (Bordes et al. 2013) and NELL-995 (Xiong, Hoang, and Wang 2017). There are 4 datasets (called versions) for each of these benchmarks, constructed by completely separating the entities in the test set from those in the training set. As our model can encode and process type information, we used the datasets with triples about entity types, which are proposed in (Ma et al. 2023). We also include the benchmark INDIGO-BM (Liu et al. 2021), which contains triples about types.
Dataset Splits | Yes | There are 4 datasets (called versions) for each of these benchmarks, constructed by completely separating the entities in the test set from those in the training set.
Hardware Specification | Yes | All experiments were performed on an Intel(R) Xeon(R) machine with 2 NVIDIA GeForce RTX 4090 GPUs.
Software Dependencies | No | The paper mentions hardware and code availability, but does not specify software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | No | The paper describes the overall methodology and evaluation metrics but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations within the main text.
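The "standard confidence" metric mentioned for the extracted rules (ref. Figure 4) is conventionally the fraction of a rule's body groundings whose head triple also holds in the graph. A minimal sketch, assuming a toy in-memory triple list and an illustrative 2-hop chain rule — the relation names, data, and rule shape are hypothetical, not taken from the paper:

```python
# Standard confidence of a chain rule head(x, y) <- r1(x, z), r2(z, y):
#   conf = |body groundings that also satisfy the head| / |body groundings|
# Toy data and relation names are illustrative only.
from collections import defaultdict

def standard_confidence(triples, r1, r2, head):
    by_rel = defaultdict(list)
    for h, r, t in triples:
        by_rel[r].append((h, t))
    facts = set(triples)
    body, support = 0, 0
    # Enumerate groundings of the rule body r1(x, z), r2(z, y).
    for x, z in by_rel[r1]:
        for z2, y in by_rel[r2]:
            if z == z2:
                body += 1
                if (x, head, y) in facts:  # does the head hold too?
                    support += 1
    return support / body if body else 0.0

kg = [
    ("a", "born_in", "b"), ("b", "city_of", "c"),
    ("a", "nationality", "c"),            # grounding supported by the head
    ("d", "born_in", "e"), ("e", "city_of", "f"),
    # ("d", "nationality", "f") is absent, lowering the confidence
]
print(standard_confidence(kg, "born_in", "city_of", "nationality"))  # 0.5
```

Here one of the two body groundings has its head triple present, so the rule's standard confidence is 0.5.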
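The inductive benchmarks above are built by completely separating the entities of the test set from those of the training set. A quick sanity check for such an entity-disjoint split can be sketched as follows (the triples and variable names are illustrative, not the benchmark files themselves):

```python
# Sanity check that a train/test split is inductive, i.e. no entity
# appearing in a test triple also appears in any training triple.
def entities(triples):
    """Collect all head and tail entities of a list of (h, r, t) triples."""
    return {e for h, _, t in triples for e in (h, t)}

train = [("a", "r", "b"), ("b", "s", "c")]
test  = [("x", "r", "y"), ("y", "s", "z")]

overlap = entities(train) & entities(test)
assert not overlap, f"split is not inductive: shared entities {overlap}"
print("entity-disjoint:", not overlap)  # entity-disjoint: True
```

Relations, by contrast, are typically shared between the two splits in these benchmarks; only the entity sets must be disjoint.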