Soft Reasoning Paths for Knowledge Graph Completion

Authors: Yanning Hou, Sihang Zhou, Ke Liang, Lingyuan Meng, Xiaoshu Chen, Ke Xu, Siwei Wang, Xinwang Liu, Jian Huang

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental In this section, we evaluate the overall performance of SRP-KGC and the effectiveness of its individual modules. The experiments aim to answer the following four research questions: RQ1. How does the proposed SRP-KGC perform compared to state-of-the-art methods under both transductive and inductive settings? (see Section 4.2) RQ2. Will the introduction of soft paths improve the discriminability of the reasoning path embedding? (see Section 4.3) RQ3. How does the soft reasoning path perform when reasoning paths are missing or present? (see Section 4.4) RQ4. How does hierarchical ranking work, and is it effective? (see Section 4.5) Table 1: Main results on the WN18RR, FB15k-237, and Wikidata5M-Trans datasets.
Researcher Affiliation Academia 1 College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China; 2 College of Computer Science and Technology, National University of Defense Technology, Changsha, China; 3 School of Artificial Intelligence, Anhui University, Hefei, China
Pseudocode No The paper describes its methodology in Section 3, titled "Network Framework Based on Contrastive Learning" and its subsections. This includes descriptive text, mathematical formulas, and diagrams (Figure 1), but no explicitly labeled pseudocode or algorithm blocks are provided.
Open Source Code No Our code will be released at https://github.com/7HHHHH/SRP-KGC.
Open Datasets Yes We evaluated our method on three commonly used datasets: WN18RR, FB15k-237, and Wikidata5M-Trans. Detailed information about these datasets is shown in Table 2.
Dataset           # Ent      # Rel  # train     # valid  # test
WN18RR            40,943     11     86,835      3,034    3,134
FB15k-237         14,541     237    272,115     17,535   20,466
Wikidata5M-Trans  4,594,485  822    20,614,279  5,163    5,163
Table 2: Statistics of the datasets.
Dataset Splits Yes
Dataset           # Ent      # Rel  # train     # valid  # test
WN18RR            40,943     11     86,835      3,034    3,134
FB15k-237         14,541     237    272,115     17,535   20,466
Wikidata5M-Trans  4,594,485  822    20,614,279  5,163    5,163
Table 2: Statistics of the datasets.
Hardware Specification Yes All experiments ran on 4 NVIDIA RTX 4090 24GB GPUs.
Software Dependencies No Our implementation was built using PyTorch. Hyperparameters w_i were optimized via grid search over the set {0.2, 0.4, 0.6, 0.8, 1}. All experiments ran on 4 NVIDIA RTX 4090 24GB GPUs. We adopted the text-based model SimKGC [Wang et al., 2022a] as our baseline, retaining the BERT parameter settings from the original paper. The paper mentions PyTorch and BERT but does not provide specific version numbers for either.
Experiment Setup Yes Our implementation was built using PyTorch. Hyperparameters w_i were optimized via grid search over the set {0.2, 0.4, 0.6, 0.8, 1}. All experiments ran on 4 NVIDIA RTX 4090 24GB GPUs. ...At the same time, we retain the temperature parameter τ to balance the importance between the samples. ...L_all = w1·L^t_hr + w2·L^t_hp + w3·L^t_hrs + w4·L^p_hrs (12), where the w_i are tunable hyperparameters for adapting to specific knowledge graph characteristics. The detailed hyperparameters can be found in the appendix.
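The quoted setup (a weighted sum of four contrastive losses, Eq. (12), with a temperature τ and each weight w_i grid-searched over {0.2, 0.4, 0.6, 0.8, 1}) can be sketched as follows. This is a minimal illustration only: the `info_nce` loss, the random embeddings, and the helper names are assumptions for demonstration, not the authors' released implementation, and the real search would score each weight combination on the validation split rather than merely enumerate them.

```python
import itertools
import torch
import torch.nn.functional as F

def info_nce(query, keys, temperature=0.05):
    """Generic InfoNCE-style contrastive loss with in-batch negatives:
    the i-th key is the positive for the i-th query; the temperature tau
    scales the logits to balance the importance between samples.
    (Illustrative stand-in for the paper's individual loss terms.)"""
    logits = query @ keys.t() / temperature
    targets = torch.arange(query.size(0))
    return F.cross_entropy(logits, targets)

def total_loss(l_hr_t, l_hp_t, l_hrs_t, l_hrs_p, w):
    """Eq. (12): L_all = w1*L_hr^t + w2*L_hp^t + w3*L_hrs^t + w4*L_hrs^p."""
    return w[0] * l_hr_t + w[1] * l_hp_t + w[2] * l_hrs_t + w[3] * l_hrs_p

torch.manual_seed(0)
# Dummy normalized embeddings standing in for the four loss terms' inputs.
q = F.normalize(torch.randn(8, 32), dim=-1)
k = F.normalize(torch.randn(8, 32), dim=-1)
losses = [info_nce(q, k) for _ in range(4)]

# Grid-search candidates for (w1, w2, w3, w4), as reported: 5^4 = 625 combos.
grid = [0.2, 0.4, 0.6, 0.8, 1.0]
candidates = list(itertools.product(grid, repeat=4))
print(len(candidates))
print(total_loss(*losses, candidates[-1]).item())
```

In practice each candidate weight vector would be used for a full training run (or a cheaper proxy) and the combination with the best validation MRR kept, which is why the search space is deliberately small.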