Relational Neurosymbolic Markov Models
Authors: Lennert De Smet, Gabriele Venturato, Luc De Raedt, Giuseppe Marra
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present here our generative and discriminative benchmarks, and show that NeSy-MMs are capable of tackling both settings (D.IV). We also clearly show how the presence of relational logic in NeSy-MMs significantly and positively impacts both in- and out-of-distribution performance compared to state-of-the-art deep (probabilistic) models (D.I). In doing so, we show that NeSy-MMs scale to problem settings far beyond the horizon of existing NeSy methods (D.II). In total, NeSy-MMs are successful neurosymbolic models capable of optimising various neural components while adhering to logical constraints (D.III). |
| Researcher Affiliation | Academia | ¹KU Leuven, Belgium; ²Örebro University, Sweden |
| Pseudocode | Yes | Algorithm 1 Logic programming encoding of Example 2.1. Algorithm 2 Logic programming encoding of Example 3.1. |
| Open Source Code | Yes | Code https://github.com/ML-KULeuven/nesy-mm |
| Open Datasets | No | Our generative experiment is inspired by the Mario experiment of Misino, Marra, and Sansone (2022), extended using MiniHack (Samvelyan et al. 2021), a flexible framework to define environments of the open-ended game NetHack (Küttler et al. 2020). The dataset consists of trajectories of images of length T representing an agent moving T steps in a grid world of size N×N surrounded by walls. |
| Dataset Splits | Yes | To specifically gauge the out-of-distribution (OOD) generalisation capabilities of all methods, we train only using simple sequences of length 10 containing just one enemy moving on a 10×10 grid and we test on more complex sequences. The OOD cases consider different combinations of sequences on grids of size 10×10 or 15×15, length 10 or 20, and with 1 or 2 enemies. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | No | The paper describes the general setup of the generative and discriminative experiments, including the baselines used and metrics. However, it does not provide specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or detailed model architecture configurations. |
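The dataset-splits row above describes the evaluation grid concretely enough to enumerate it. Below is a minimal sketch (not from the paper's released code) that lists the combinations of grid size, sequence length, and enemy count, assuming the single training setting (10×10 grid, length 10, one enemy) and treating every other combination as an OOD case:

```python
from itertools import product

# Evaluation axes quoted in the Dataset Splits row:
# grids of size 10x10 or 15x15, sequence length 10 or 20, 1 or 2 enemies.
grid_sizes = [10, 15]
lengths = [10, 20]
enemies = [1, 2]

settings = [
    {"grid": g, "length": t, "enemies": e}
    for g, t, e in product(grid_sizes, lengths, enemies)
]

# Training uses only the simplest setting; the remaining
# combinations probe out-of-distribution generalisation.
in_distribution = {"grid": 10, "length": 10, "enemies": 1}
ood_settings = [s for s in settings if s != in_distribution]

print(len(settings), len(ood_settings))  # 8 settings, 7 of them OOD
```

Whether the paper evaluates all seven OOD combinations or a subset of them is not specified in the quoted text; the enumeration only illustrates the space of settings the row describes.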