Computing Repairs of Inconsistent DL-Programs over EL Ontologies
Authors: Thomas Eiter, Michael Fink, Daria Stepanova
JAIR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present a declarative implementation of the repair approach and experimentally evaluate it on a set of benchmark problems; the promising results witness practical feasibility of our repair approach. To the best of our knowledge, no similar system for repairing inconsistent DL-programs exists. The list of systems for evaluating DL-programs includes the DReW system (DReW, 2012; Xiao, 2014) and the dlplugin of the DLVHEX system (dlplugin, 2007). The DReW system exploits datalog rewritings for evaluating DL-programs over EL ontologies; however, it cannot handle inconsistencies, which are the focus of our work. Thus DReW per se could not be used as a baseline for experiments. |
| Researcher Affiliation | Academia | Thomas Eiter EMAIL Michael Fink EMAIL Daria Stepanova EMAIL Institut für Informationssysteme, TU Wien, Favoritenstraße 9-11, 1040 Vienna, Austria |
| Pseudocode | Yes | Algorithm 1: PartSupFam: compute partial support family. Algorithm 2: SoundRAnsSet: compute deletion repair answer sets. |
| Open Source Code | Yes | We have implemented our repair approach in C++ in a system prototype (dlliteplugin of the DLVHEX system, 2015). The software is freely available online (dlliteplugin, 2015). https://github.com/hexhex/dlliteplugin. |
| Open Datasets | Yes | The Open Street Map benchmark contains a set of rules over the ontology for enhanced personalized route planning with semantic information (MyITS ontology, 2012) extended by an ABox containing data from the Open Street Map project (OSM, 2012). The LUBM benchmark comprises rules on top of the well-known LUBM ontology (LUBM, 2005) in EL. Experimental data with inconsistent DL-programs (2015). http://www.kr.tuwien.ac.at/staff/dasha/jair_el/benchmark_instances.zip. |
| Dataset Splits | No | The paper describes data-generation parameters and probabilities for constructing instances, such as '20% of the staff members are unauthorized and 40% are blacklisted' or 'randomly made 80% of the bus stops roofed and 60% of leisure areas private', but it does not specify explicit training, validation, or test splits for the experimental evaluation. |
| Hardware Specification | Yes | Our approach was evaluated on a multi-core Linux server running DLVHEX 2.4.0 under the HTCondor load distribution system (HTCondor, 2012), which is a specialized workload management system for compute-intensive tasks, using two cores (AMD 6176 SE CPUs) and 8GB RAM. |
| Software Dependencies | Yes | Our approach was evaluated on a multi-core Linux server running DLVHEX 2.4.0 under the HTCondor load distribution system (HTCondor, 2012)... HTCondor load distribution system, version 7.8.7 (2012). |
| Experiment Setup | Yes | In each run, we measured the time for computing the first repair answer set, including support set computation, with a timeout of 300 seconds. For generating instances, we used the probability p/100 (with p from column 1) that a fact hasowner(pi, si) is added to the rules part P for each si, pi such that Staff(si), Project(pi) ∈ A (i.e., instances vary only on facts hasowner(pi, si) in P) as a parameter. In the restricted configurations, the column size = n (resp. num = n) states that in the computed partial support families the size (resp. number) of support sets is at most n; if n = ∞, then in fact all support sets were computed. |