Knowledge Graph Completion with Relation-Aware Anchor Enhancement

Authors: Duanyang Yuan, Sihang Zhou, Xiaoshu Chen, Dong Wang, Ke Liang, Xinwang Liu, Jian Huang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental The results of our extensive experiments not only validate the efficacy of RAA-KGC but also reveal that by integrating our relation-aware anchor enhancement strategy, the performance of current leading methods can be notably enhanced without substantial modifications.
Researcher Affiliation Academia 1 College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China; 2 College of Computer Science and Technology, National University of Defense Technology, Changsha, China
Pseudocode No The paper describes the method using definitions, equations, and textual explanations, but does not include any explicit pseudocode blocks or algorithms formatted as such.
Open Source Code Yes Code: https://github.com/DayanaYuan/RAA-KGC
Open Datasets Yes We evaluate RAA-KGC on three commonly used datasets: WN18RR (Dettmers et al. 2018), FB15k-237 (Toutanova and Chen 2015), Wikidata5M-Trans (Wang et al. 2021b).
Dataset Splits Yes
Dataset            train        valid    test
WN18RR             86,835       3,034    3,134
FB15k-237          272,115      17,535   20,466
Wikidata5M-Trans   20,614,279   5,163    5,163
Hardware Specification No The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies Yes Following the current state-of-the-art KGC methods (Wang et al. 2022a), we use two encoders, g1(·) and g2(·), both initialized with the bert-base-uncased model but do not share parameters. ... We implement RAA-KGC based on the PyTorch library (Paszke et al. 2019).
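The "same checkpoint, no parameter sharing" setup quoted above can be illustrated with a minimal sketch. This is not the authors' code: `Encoder` is a toy stand-in for the two bert-base-uncased encoders, and seeding both instances identically mimics initializing them from the same pretrained checkpoint while keeping their parameters independent.

```python
import random

# Toy stand-in (NOT RAA-KGC's actual encoder) for two encoders that start
# from the same initialization but do not share parameters, mirroring how
# g1 and g2 are both initialized from bert-base-uncased independently.
class Encoder:
    def __init__(self, dim=4, seed=None):
        rng = random.Random(seed)
        # "parameters" of this toy encoder
        self.weights = [rng.random() for _ in range(dim)]

    def encode(self, features):
        # toy dot-product "encoding" of a feature vector
        return sum(w * f for w, f in zip(self.weights, features))

# Same initialization recipe, two separate parameter sets:
g1 = Encoder(dim=4, seed=0)
g2 = Encoder(dim=4, seed=0)
assert g1.weights == g2.weights        # identical starting point
assert g1.weights is not g2.weights    # but no parameter sharing
```

In the real implementation the analogous effect comes from calling the pretrained-model loader twice, so updating one encoder during training leaves the other untouched.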
Experiment Setup Yes The batch size is 32. As for the specific hyperparameters used in our work, we search the upper bound of the trade-off weight α for contrastive loss within the range {0.1, 0.2, 0.3, 0.4, 0.5}.
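The quoted setup (batch size 32, trade-off weight α searched over {0.1, 0.2, 0.3, 0.4, 0.5}) amounts to a small grid search. A hedged sketch, assuming a hypothetical `train_and_eval` callback that stands in for one full training-plus-validation run and returns a validation score:

```python
# Sketch of the grid search over the contrastive-loss weight alpha
# described in the paper; `train_and_eval` is a hypothetical stand-in,
# not part of the RAA-KGC codebase.
def search_alpha(train_and_eval, candidates=(0.1, 0.2, 0.3, 0.4, 0.5)):
    best_alpha, best_score = None, float("-inf")
    for alpha in candidates:
        # batch size is fixed at 32 per the reported setup
        score = train_and_eval(alpha=alpha, batch_size=32)
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha
```

For example, `search_alpha(lambda alpha, batch_size: -abs(alpha - 0.3))` would select 0.3, the candidate with the highest score under that toy objective.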