Temporal Conjunctive Query Answering via Rewriting

Authors: Lukas Westhofen, Jean Christoph Jung, Daniel Neider

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We implemented our rewriting approach and evaluated it on large-scale benchmarks for expressive DLs. A comparison to the tool of Westhofen et al. in Section 5 shows improvements on all benchmarks, in some cases by two orders of magnitude. Our evaluation supports the hypothesis that better run times are mainly explained by a more precise use of UCQs.
Researcher Affiliation | Collaboration | (1) German Aerospace Center (DLR e.V.), Institute of Systems Engineering for Future Mobility, Oldenburg, Germany; (2) TU Dortmund University, Dortmund, Germany; (3) Research Center Trustworthy Data Science and Security, University Alliance Ruhr, Dortmund, Germany
Pseudocode | Yes | Algorithm 1: Answer TCQ(K, Φ, i) for K of length n + 1
Open Source Code | Yes | Code, data, and reproducibility instructions are provided online (Westhofen 2024): https://doi.org/10.5281/zenodo.14412131
Open Datasets | Yes | For the TKBs, we rely on the only two existing benchmark sets for temporal querying over expressive DLs known to us, both publicly available and provided by Westhofen et al. (2024a, 2024b).
Dataset Splits | No | The paper evaluates a query answering system on existing benchmarks, which are temporal sequences of data. It describes scaling factors for the number of individuals and time points within these benchmarks, but it does not split them into training, validation, or test sets; query answering is evaluated on each full benchmark.
Hardware Specification | Yes | We ran each system once per benchmark... on a Windows 10 machine with an Intel Core i9-13900K, 64 GB RAM
Software Dependencies | No | We implemented Algorithm 1 into Openllet, the same reasoner that was used in Westhofen et al. (2024a). No specific version number for Openllet or other software is provided.
Experiment Setup | Yes | We ran each system once per benchmark (as determinism of both implementations makes deviations negligible) on a Windows 10 machine with an Intel Core i9-13900K, 64 GB RAM, and a ten-hour time limit per run.
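The paper's Algorithm 1 (Answer TCQ(K, Φ, i)) is not reproduced in this summary. As a rough, hedged illustration of the temporal layer such an algorithm must handle, the sketch below evaluates an LTL-style formula over a finite trace, assuming each embedded CQ has already been decided at every time point by some atemporal reasoner. The formula encoding, the `holds` function, and the trace are all hypothetical and not taken from the paper.

```python
# Hedged sketch, NOT the paper's Algorithm 1: evaluates a propositional
# LTL-style formula over a finite trace of time points 0..n, standing in
# for the temporal layer of TCQ answering once each embedded CQ has been
# certified true or false at every time point.

def holds(phi, trace, i):
    """Check whether formula `phi` holds at time point `i` of `trace`.

    `trace` is a list of sets; trace[i] contains the (hypothetical) CQ
    labels certified true at time point i. Formulas are nested tuples:
    ("cq", name), ("not", p), ("and", p, q), ("or", p, q),
    ("next", p), ("until", p, q).
    """
    op = phi[0]
    if op == "cq":
        return phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "or":
        return holds(phi[1], trace, i) or holds(phi[2], trace, i)
    if op == "next":
        # Finite-trace reading: "next" fails at the last time point.
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "until":
        # p U q: q holds at some j >= i, with p at every point in [i, j).
        for j in range(i, len(trace)):
            if holds(phi[2], trace, j):
                return True
            if not holds(phi[1], trace, j):
                return False
        return False
    raise ValueError(f"unknown operator: {op}")

# "eventually q", expressed as (true U q); trace of length 3 (points 0..2).
trace = [{"p"}, {"p"}, {"p", "q"}]
eventually_q = ("until", ("not", ("cq", "never_true")), ("cq", "q"))
print(holds(eventually_q, trace, 0))  # True: q holds at time point 2
```

In this reading, answering the TCQ at time point i over a TKB of length n + 1 reduces to one such temporal check per candidate answer, which is why the precision of the per-time-point UCQ rewritings dominates the overall run time.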