Towards Global-Topology Relation Graph for Inductive Knowledge Graph Completion

Authors: Ling Ding, Lei Huang, Zhizhi Yu, Di Jin, Dongxiao He

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct KGC experiments on six inductive datasets using inference data where entities are entirely new and new relations appear at 100 percent, 50 percent, and 0 percent ratios. Extensive results demonstrate that our model accurately learns the topological structures and embeddings of new relations, and guides the embedding learning of new entities. Notably, our model outperforms 15 SOTA methods, especially in two fully inductive datasets.
Researcher Affiliation | Academia | Ling Ding, Lei Huang, Zhizhi Yu*, Di Jin, Dongxiao He. College of Intelligence and Computing, Tianjin University, Tianjin, China. EMAIL
Pseudocode | No | The paper describes the methodology in the "Methodology" section, using detailed textual descriptions and mathematical equations (e.g., Equations 1-10) to explain the processes of global-topological relation graph construction, relation embedding learning, and entity embedding learning. However, it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about making the source code available, nor does it provide a link to a code repository.
Open Datasets | Yes | We conduct KGC link prediction experiments on two commonly used public datasets: NELL995 (Xiong, Hoang, and Wang 2017) and FB15K-237 (Toutanova and Chen 2015).
Dataset Splits | Yes | For the edges in the inference graph E_inf, we divide E_inf into three disjoint sets, E_inf := F_inf ∪ T_val ∪ T_test, in a ratio of 3:1:1.
Hardware Specification | No | The paper does not provide specific hardware details (such as GPU or CPU models, memory, or specific computer specifications) used for running the experiments.
Software Dependencies | No | We choose the Adam optimizer (Kingma and Ba 2015) and the default initialization in PyTorch.
Experiment Setup | Yes | To ensure a fair comparison, we set the dimensions of our method and all baseline methods to d = 32 and d̂ = 32. Each experiment is run five times, and the average results are reported for robust comparison. We use the common Glorot initialization (Glorot and Bengio 2010) to initialize the initial features of relations and entities, and we choose the Adam optimizer (Kingma and Ba 2015) and the default initialization in PyTorch.
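The 3:1:1 split of the inference-graph edges described in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the random seed, and the use of a shuffle before slicing are all assumptions.

```python
import random

def split_inference_edges(edges, ratios=(3, 1, 1), seed=0):
    """Split inference-graph edges into three disjoint sets
    (fact / validation / test) in the paper's 3:1:1 ratio.
    Illustrative sketch; not the authors' implementation."""
    edges = list(edges)
    random.Random(seed).shuffle(edges)  # assumed: random edge order
    total = sum(ratios)
    n_fact = len(edges) * ratios[0] // total
    n_val = len(edges) * ratios[1] // total
    fact = edges[:n_fact]
    valid = edges[n_fact:n_fact + n_val]
    test = edges[n_fact + n_val:]
    return fact, valid, test
```

Because the three slices partition one shuffled list, the sets are disjoint by construction and their union covers every inference edge.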
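The Experiment Setup row names Glorot initialization (Glorot and Bengio 2010) for the d = 32 relation and entity features. A dependency-free sketch of that initializer is below; the relation and entity counts (100 and 1000) are placeholders chosen here for illustration, not values from the paper.

```python
import math
import random

def glorot_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier uniform initialization: sample each weight
    from U(-a, a) with a = sqrt(6 / (fan_in + fan_out))."""
    a = math.sqrt(6.0 / (fan_in + fan_out))
    rng = random.Random(seed)
    return [[rng.uniform(-a, a) for _ in range(fan_out)]
            for _ in range(fan_in)]

d = d_hat = 32  # dimensions used for all methods in the paper

# Placeholder counts; the paper does not specify these here.
relation_features = glorot_uniform(100, d)
entity_features = glorot_uniform(1000, d_hat)
```

In a PyTorch setting this corresponds to `torch.nn.init.xavier_uniform_`, with the resulting tensors optimized by Adam as the setup states.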