Goal-Driven Reasoning in DatalogMTL with Magic Sets

Authors: Shaoyu Wang, Kaiyue Zhao, Dongliang Wei, Przemysław Andrzej Wałęga, Dingmin Wang, Hongming Cai, Pan Hu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have implemented this approach and evaluated it on publicly available benchmarks, showing that the proposed approach significantly and consistently outperformed state-of-the-art reasoning techniques. ... Empirical Evaluation ... Comparison With Baseline. We compared the performance of our approach to that of reasoning with the original programs on datasets from LUBMt with 10^6 facts, iTemporal with 10^5 facts, and ten years' worth of meteorological data. ... Scalability Experiments. We conducted scalability experiments on both the LUBMt and iTemporal benchmarks, as they are equipped with generators allowing for the construction of datasets of varying sizes.
Researcher Affiliation | Academia | (1) School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China; (2) Department of Computer Science, University of Oxford, UK; (3) School of Electronic Engineering and Computer Science, Queen Mary University of London, UK
Pseudocode | Yes | Algorithm 1: Magic Rewriting ... Algorithm 2: Magic Head Atoms
Open Source Code | Yes | The datasets we used, the source code of our implementation, as well as an extended technical report, are available online at https://github.com/RoyalRaisins/Magic-Sets-for-DatalogMTL
Open Datasets | Yes | We implemented our algorithm and evaluated it on LUBMt (Wang et al. 2022), a temporal version of LUBM (Guo, Pan, and Heflin 2005); on iTemporal (Bellomarini, Nissl, and Sallinger 2022); and on the meteorological benchmark (Maurer et al. 2002).
Dataset Splits | No | The paper does not provide specific training/test/validation dataset splits. It mentions using datasets of varying sizes for scalability experiments and selecting queries, but it does not describe how the data is partitioned in the traditional sense of data splits.
Hardware Specification | Yes | We ran the experiments on a server with 256 GB RAM, an Intel Xeon Silver 4210R CPU @ 2.40 GHz, and Fedora Linux 40 (kernel version 6.8.5-301.fc40.x86_64).
Software Dependencies | No | The paper mentions using the MeTeoR system (Wałęga et al. 2023b) as a DatalogMTL solver, and the operating system Fedora Linux 40 (kernel version 6.8.5-301.fc40.x86_64). However, it does not provide specific version numbers for MeTeoR or for any other key software libraries or dependencies that would be needed for replication.
Experiment Setup | No | The paper describes the experimental evaluation methodology (e.g., measuring wall-clock time and comparing against a baseline), but it does not provide specific setup details such as hyperparameters, optimizer settings, or other system-level configurations for the DatalogMTL reasoning process.
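The pseudocode the paper provides (Algorithm 1: Magic Rewriting) adapts the classic magic-sets transformation to DatalogMTL. As context for that technique only, here is a minimal, heavily simplified sketch for plain (non-temporal) Datalog: it is not the paper's algorithm, it omits adornments and sideways information passing, and the rule representation is an assumption made for illustration.

```python
# Simplified magic-sets sketch for plain Datalog (illustrative only; NOT the
# paper's Algorithm 1 for DatalogMTL). A rule is (head, body), where head is
# an atom (predicate, args) and body is a list of atoms.

def magic_rewrite(rules, query_pred, query_args):
    """Guard each rule with a 'magic' demand predicate seeded by the query.

    A full implementation would compute adornments from bound/free argument
    positions and restrict demand propagation to IDB predicates; this sketch
    propagates demand uniformly to keep the idea visible.
    """
    rewritten = []
    for head, body in rules:
        hpred, hargs = head
        magic_head = ("magic_" + hpred, hargs)
        # Guard the original rule so it only derives facts that are demanded.
        rewritten.append((head, [magic_head] + body))
        # Propagate demand from the head's magic atom to each body atom.
        for pred, args in body:
            rewritten.append((("magic_" + pred, args), [magic_head]))
    # Seed fact: the query itself creates the initial demand.
    seed = ("magic_" + query_pred, query_args)
    return rewritten, seed
```

Evaluating the rewritten program bottom-up then only derives facts relevant to the query, which is the source of the goal-driven speedups the paper measures.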