Hierarchy Knowledge Graph for Parameter-Efficient Entity Embedding

Authors: Hepeng Gao, Funing Yang, Yongjian Yang, Ying Wang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate HRL on the knowledge graph completion task using three real-world datasets. The results demonstrate that HRL significantly outperforms existing parameter-efficient baselines, as well as traditional state-of-the-art baselines of similar scale. (...) 5 Experiments In this section, we perform KG completion tasks to train and evaluate our model. (...) 5.4 Performance Results (RQ1) (...) 5.5 Ablation Study (RQ2) (...) 5.6 Case Study (RQ3) (...) 5.7 Parameter Study (RQ4)
Researcher Affiliation | Academia | Hepeng Gao, Funing Yang, Yongjian Yang and Ying Wang, Jilin University; EMAIL, EMAIL
Pseudocode | No | The paper describes its methodology using mathematical formulations and textual descriptions but does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide a link to a code repository.
Open Datasets | Yes | We evaluate our model on three real-world knowledge graph datasets: FB15k-237 [Toutanova et al., 2015], WN18RR [Dettmers et al., 2018], and CoDEx-L [Safavi and Koutra, 2020]. The datasets are well-established KGs commonly used in the field. Detailed statistics and descriptions of these datasets are provided in Table 2.
Dataset Splits | Yes | Table 2: The statistics of datasets.
Dataset | #Ent | #Rel | #Train | #Valid | #Test
FB15k-237 | 14,505 | 237 | 272,115 | 17,526 | 20,438
WN18RR | 40,559 | 11 | 86,835 | 2,824 | 2,924
CoDEx-L | 77,951 | 69 | 551,193 | 30,622 | 30,622
Hardware Specification | No | The paper mentions general concepts like "significant hardware resource consumption" in the context of traditional KGs but does not provide specific details about the hardware used for their experiments.
Software Dependencies | No | We employ the Adam [Diederik, 2014] optimizer with a learning rate of 1e-3 and a weight decay of 5e-5. (...) The paper mentions the Adam optimizer but does not specify any software libraries or frameworks (e.g., PyTorch, TensorFlow) with version numbers used for implementation.
Experiment Setup | Yes | For the Meta Encoder, the memory bank dimension is searched within the range {128, 256, 512}. For the Context Encoder, the number of layers is set to 2 for FB15k-237 and CoDEx-L, and 3 for WN18RR. We employ the Adam [Diederik, 2014] optimizer with a learning rate of 1e-3 and a weight decay of 5e-5. The batch size and the number of negative samples are set to 1024 and 256, respectively. Due to RotatE's training strategy, the margin γ is chosen from {8, 9, 10, 11, 12}. The temperature factor α is set to 1.
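The setup above adopts RotatE's margin-based training strategy, with the margin γ searched in {8, 9, 10, 11, 12}. Since the paper releases no code, the following is only a minimal sketch of the standard RotatE scoring function (not the HRL model itself): entities are complex-valued vectors, relations are element-wise unit rotations, and the score is γ minus the distance between the rotated head and the tail.

```python
import math

GAMMA = 9.0  # margin gamma; the paper searches {8, 9, 10, 11, 12}

def rotate_score(head, rel_phases, tail, gamma=GAMMA):
    """RotatE-style score: gamma minus the L1 distance between the
    head rotated by the relation and the tail in the complex plane.
    `head`/`tail` are lists of complex numbers; `rel_phases` are the
    relation's rotation angles (unit-modulus rotations)."""
    dist = sum(abs(h * complex(math.cos(p), math.sin(p)) - t)
               for h, p, t in zip(head, rel_phases, tail))
    return gamma - dist

# Toy 2-dimensional check: if the tail is exactly the rotated head,
# the distance is zero and the score attains its maximum, gamma.
head = [complex(1, 0), complex(0, 1)]
phases = [math.pi / 2, math.pi]
tail = [h * complex(math.cos(p), math.sin(p)) for h, p in zip(head, phases)]
print(rotate_score(head, phases, tail))  # -> 9.0
```

In the paper's quoted setup, each positive triple would be scored against 256 such negative samples per batch of 1024, with the self-adversarial temperature α set to 1.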