Grounding Methods for Neural-Symbolic AI

Authors: Rodrigo Castellano Ontiveros, Francesco Giannini, Marco Gori, Giuseppe Marra, Michelangelo Diligenti

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Several experiments have been performed on KG link prediction to compare the effectiveness of the grounders. Datasets: The Countries [Bouchard et al., 2015], Kinship [Kok and Domingos, 2007], WN18RR [Dettmers et al., 2018] and FB15k-237 [Dou et al., 2021] datasets are used to capture diverse sizes and complexities. ... Metrics: For the evaluation, we use Mean Reciprocal Rank (MRR) and Hits@N metrics. All metric evaluations have been averaged over five runs for the Countries dataset. (Section 5.1, Experimental Results)
Researcher Affiliation | Academia | 1University of Siena, Italy; 2Scuola Normale Superiore, Italy; 3KU Leuven, Belgium
Pseudocode | No | The paper describes methods and equations for message passing and grounding but does not provide structured pseudocode or algorithm blocks. For example, it describes the Parameterized Backward Chaining Grounder in text without a formal algorithm block.
Open Source Code | Yes | https://github.com/rodrigo-castellano/Grounding-Methods
Open Datasets | Yes | The Countries [Bouchard et al., 2015], Kinship [Kok and Domingos, 2007], WN18RR [Dettmers et al., 2018] and FB15k-237 [Dou et al., 2021] datasets are used to capture diverse sizes and complexities.
Dataset Splits | Yes | Countries [Bouchard et al., 2015] is split into three tasks S{1,2,3} of increasing complexity, predicting country location based on regions and neighborhoods. The first experiment (Section 5.1, Experimental Results) serves as a proof of concept using the Countries_abl dataset: three splits (AS1, AS2, AS3) are generated by ablating facts from the original dataset. The splits are designed with queries requiring one, two, and three reasoning steps, respectively, based on consecutive applications of the rule LocIn(x, w) ∧ LocIn(w, z) ⇒ LocIn(x, z).
Hardware Specification | Yes | Results for the WN18RR and Kinship datasets are presented in Table 2, where we report only the grounders that could be applied in less than 10h of computation on our workstation (12-core i7 CPU, 64GB RAM, Linux OS).
Software Dependencies | No | The paper mentions the Adam optimizer but does not specify software versions for the libraries, frameworks, or programming languages used.
Experiment Setup | Yes | Hyperparameters: The KGE uses a fixed embedding size of 100, with overall 1000 head and tail corruptions for WN18RR. The Adam optimizer (10^-2 learning rate) and binary cross-entropy loss were used over 100 epochs.
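The MRR and Hits@N metrics cited in the report have standard definitions in KG link prediction: the rank of the true entity among its corruptions is recorded per query, then averaged. A minimal sketch (the function name and example ranks are illustrative, not from the paper):

```python
def mrr_and_hits(ranks, ns=(1, 3, 10)):
    """Compute Mean Reciprocal Rank and Hits@N from 1-based ranks
    of the true entity among corrupted candidates."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = {n: sum(r <= n for r in ranks) / len(ranks) for n in ns}
    return mrr, hits

# Example: ranks of the correct entity for four queries
mrr, hits = mrr_and_hits([1, 2, 10, 4])
# mrr = (1 + 1/2 + 1/10 + 1/4) / 4 = 0.4625
```

In the paper's setup these ranks would come from scoring each query triple against its head and tail corruptions.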
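The one-, two-, and three-step splits of Countries_abl rest on repeated application of the transitivity rule LocIn(x, w) ∧ LocIn(w, z) ⇒ LocIn(x, z). A toy forward-chaining sketch of how multi-hop facts arise (entity names and the helper function are hypothetical; the paper's grounders chain backward from queries rather than forward like this):

```python
def close_loc_in(facts, steps):
    """Apply LocIn(x, w) & LocIn(w, z) => LocIn(x, z) a fixed
    number of times, returning all derivable LocIn pairs."""
    derived = set(facts)
    for _ in range(steps):
        # Join every pair of facts sharing the middle entity w
        new = {(x, z) for (x, w1) in derived for (w2, z) in derived if w1 == w2}
        derived |= new
    return derived

# Hypothetical chain: city -> country -> region -> continent
facts = {("siena", "italy"),
         ("italy", "southern_europe"),
         ("southern_europe", "europe")}
```

With these facts, ("siena", "southern_europe") needs one rule application, while ("siena", "europe") needs two, mirroring how the AS1-AS3 splits scale the required reasoning depth.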
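The reported training setup (embedding size 100, binary cross-entropy over scored triples) can be illustrated with a minimal scoring-and-loss sketch. The paper's excerpt does not name the KGE scoring function, so the DistMult-style product below is an assumption, as are the entity/relation counts; only the embedding dimension comes from the report:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ent, n_rel = 100, 50, 4               # dim=100 as reported; counts are toy values
E = rng.normal(scale=0.1, size=(n_ent, dim))  # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))  # relation embeddings

def score(h, r, t):
    """DistMult-style score for triple (h, r, t) -- model choice is an assumption."""
    return float(np.sum(E[h] * R[r] * E[t]))

def bce(logits, labels):
    """Binary cross-entropy over sigmoid-squashed triple scores,
    with positives labeled 1 and corruptions labeled 0."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    y = np.asarray(labels, dtype=float)
    eps = 1e-12
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))
```

In the reported setup this loss would be minimized with Adam at a 10^-2 learning rate for 100 epochs, scoring each positive triple against its head and tail corruptions.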