Replacing Paths with Connection-Biased Attention for Knowledge Graph Completion

Authors: Sharmishtha Dutta, Alex Gittens, Mohammed J. Zaki, Charu C. Aggarwal

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Evaluations on standard inductive KG completion benchmark datasets demonstrate that our Connection-Biased Link Prediction (CBLiP) model has superior performance to models that do not use path information. Compared to models that utilize path information, CBLiP shows competitive or superior performance while being faster. Additionally, to show that the effectiveness of connection-biased attention and entity role embeddings also holds in the transductive setting, we compare CBLiP's performance on the relation prediction task in the transductive setting." From the section "Experimental Evaluation": "In this section, we evaluate our model on the entity prediction task in the inductive setting and present performance on three KG datasets (12 versions). Additionally, we present relation prediction results in the transductive setting."
Researcher Affiliation | Collaboration | Sharmishtha Dutta (1), Alex Gittens (1), Mohammed J. Zaki (1), Charu C. Aggarwal (2); (1) Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY, USA; (2) IBM T. J. Watson Research Center, Yorktown Heights, NY, USA; EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the model architecture, components, and mathematical formulations, but it does not contain any clearly labeled pseudocode or algorithm blocks. For example, Figure 3 illustrates the "Connection-biased attention computation" but does not present it in pseudocode format.
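Since the paper itself gives no pseudocode, the following is an illustrative sketch only of what an attention computation with an additive connection bias generically looks like: standard scaled dot-product attention plus a per-pair bias term. The function name, shapes, and the additive-bias formulation are assumptions for illustration, not the paper's actual CBLiP computation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def connection_biased_attention(Q, K, V, bias):
    """Scaled dot-product attention with an additive bias.

    bias[i, j] is a scalar derived from the connection type between
    elements i and j (a hypothetical simplification; the paper's
    actual formulation may differ).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + bias  # bias shifts attention logits
    return softmax(scores, axis=-1) @ V

# Toy example: 4 elements, hidden dimension 8, random connection biases
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
bias = rng.standard_normal((4, 4))
out = connection_biased_attention(Q, K, V, bias)
print(out.shape)  # (4, 8)
```

Setting `bias` to zeros recovers plain scaled dot-product attention, which makes the role of the bias term easy to isolate in experiments.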
Open Source Code | No | The paper does not provide an explicit statement about the release of open-source code for the methodology described, nor does it include a direct link to a code repository. While it mentions "Our supplementary material (Dutta et al. 2024)", this citation refers to the arXiv preprint of the paper itself, not to code.
Open Datasets | Yes | "(Teru, Denis, and Hamilton 2020) extracted 12 inductive datasets from three popular KG benchmarks: WordNet, Freebase, and NELL. We present the dataset statistics in the supplementary material (Dutta et al. 2024)."
Dataset Splits | No | The paper states that "(Teru, Denis, and Hamilton 2020) extracted 12 inductive datasets from three popular KG benchmark datasets Wordnet, Freebase, and Nell" and mentions "We present the dataset statistics in supplementary material (Dutta et al. 2024)". However, the main text does not explicitly provide training/validation/test split percentages or absolute sample counts for these datasets.
Hardware Specification | Yes | "All experiments were conducted on a Quadro RTX 6000 (with NVLink) GPU with 32 GB memory."
Software Dependencies | No | The paper does not provide specific software dependency details with version numbers (e.g., programming language version, library versions) that would be needed to replicate the experiments.
Experiment Setup | No | The paper states that "The hyperparameter selection is described in the supplementary material (Dutta et al. 2024)", indicating that specific experimental setup details, such as hyperparameter values, are not present in the main text.