Higher-order Logical Knowledge Representation Learning

Authors: Suixue Wang, Weiliang Huo, Shilin Zhang, Qingchen Zhang

IJCAI 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments have been conducted on seven real-world datasets for two downstream tasks (i.e., entity classification and link prediction). The results show that LORE outperforms baselines significantly and consistently.
Researcher Affiliation | Academia | 1 School of Information and Communication Engineering, Hainan University; 2 College of Intelligence and Computing, Tianjin University; 3 School of Computer Science and Technology, Hainan University. (Author email addresses redacted.)
Pseudocode | No | No explicit pseudocode or algorithm blocks are provided. The paper describes the methodology using mathematical formulas such as $H_d^{(l+1)} = \hat{A} H_d^{(l)} W_d$ (Eq. 1) and $h_i^{(l+1)} = \sigma\big(\sum_{j \in N_i^r} \frac{1}{|N_i^r|} W_r^{(l)} h_j^{(l)}\big)$ (Eq. 3).
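Since the paper gives only formulas and no pseudocode, the relational propagation rule of Eq. (3) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the ReLU choice for $\sigma$, and the per-relation adjacency representation are assumptions for illustration only.

```python
import numpy as np

def relational_gcn_layer(H, adj_per_relation, W_per_relation):
    """One relational propagation step as in Eq. (3):
    h_i' = sigma( sum_r sum_{j in N_i^r} (1/|N_i^r|) W_r h_j ).
    sigma is assumed to be ReLU here."""
    n = H.shape[0]
    d_out = W_per_relation[0].shape[1]
    out = np.zeros((n, d_out))
    for A_r, W_r in zip(adj_per_relation, W_per_relation):
        deg = A_r.sum(axis=1, keepdims=True)  # |N_i^r| for each node i
        # 1/|N_i^r|, with isolated nodes mapped to 0 to avoid division by zero
        norm = np.divide(1.0, deg, out=np.zeros_like(deg), where=deg > 0)
        out += (norm * (A_r @ H)) @ W_r       # normalized message passing
    return np.maximum(out, 0.0)               # sigma = ReLU (assumed)

# Toy example: 3 nodes, 2 relations (the second is the inverse of the first)
H = np.eye(3)
A1 = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
A2 = A1.T
W1 = np.ones((3, 2))
W2 = np.ones((3, 2))
out = relational_gcn_layer(H, [A1, A2], [W1, W2])
```

Each relation contributes its own weight matrix, and messages are averaged over each node's per-relation neighborhood before the nonlinearity, matching the structure of Eq. (3).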
Open Source Code | No | The paper does not state whether the source code is available and includes no link to a code repository.
Open Datasets | Yes | We evaluate our model on four commonly used datasets for entity classification: AIFB, MUTAG, BGS, and PPI. Meanwhile, for link prediction, we evaluate our model on three datasets commonly used: FB15k-237, WN18, and WN18RR.
Dataset Splits | No | The paper evaluates on AIFB, MUTAG, BGS, and PPI (entity classification) and FB15k-237, WN18, and WN18RR (link prediction), but it does not specify training/validation/test splits, such as percentages, sample counts, or the splitting methodology.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed machine specifications) used for running its experiments.
Software Dependencies | No | The paper does not list the supporting software needed to replicate the experiments, such as library names with version numbers.
Experiment Setup | No | The paper describes the overall framework of LORE, including the formulation of higher-order logical relational features, feature representation using GCNs, and feature aggregation. It mentions a threshold for enhancing entity motif degree matrices and a hyperparameter B for basis decomposition, but it provides no numerical values for these, nor common training hyperparameters such as learning rate, batch size, number of epochs, or optimizer details.
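The basis decomposition controlled by the hyperparameter B mentioned above is, in the standard formulation, $W_r = \sum_{b=1}^{B} a_{rb} V_b$: each relation's weight matrix is a learned combination of B shared basis matrices. A minimal NumPy sketch of that construction follows; the variable names and dimensions are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
num_relations, B, d_in, d_out = 5, 2, 4, 3   # B = number of basis matrices (hyperparameter)

V = rng.standard_normal((B, d_in, d_out))    # shared basis matrices V_b
a = rng.standard_normal((num_relations, B))  # per-relation coefficients a_rb

# W_r = sum_b a_rb * V_b : one weight matrix per relation built from B bases
W = np.einsum('rb,bio->rio', a, V)
```

The point of the decomposition is parameter sharing: instead of R independent d_in x d_out matrices, the model learns B such matrices plus R*B scalar coefficients, which regularizes rare relations.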