DL-Lite Contraction and Revision

Authors: Zhiqiang Zhuang, Zhe Wang, Kewen Wang, Guilin Qi

JAIR 2016

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we deal with contraction and revision for the DL-Lite family through a model-theoretic approach. Standard description logic semantics yields an infinite number of models for DL-Lite knowledge bases, so it is difficult to develop algorithms for contraction and revision that involve DL models. The key to our approach is the introduction of an alternative semantics, called type semantics, which can replace the standard semantics in characterising the standard inference tasks of DL-Lite. Type semantics has several advantages over the standard one: it is more succinct and, importantly, with a finite signature it always yields a finite number of models. We then define model-based contraction and revision functions for DL-Lite knowledge bases under type semantics and provide representation theorems for them. Finally, the finiteness and succinctness of type semantics allow us to develop tractable algorithms for instantiating the functions.
Researcher Affiliation | Academia | Zhiqiang Zhuang (EMAIL), Institute for Integrated and Intelligent Systems, Griffith University, Australia; Zhe Wang (EMAIL), School of Information and Communication Technology, Griffith University, Australia; Kewen Wang (EMAIL), School of Information and Communication Technology, Griffith University, Australia; Guilin Qi (EMAIL), School of Computer Science and Engineering, Southeast University, China, and State Key Lab for Novel Software Technology, Nanjing University, China
Pseudocode | Yes | Algorithm 1: TCont
  Input: TBox T and conjunction of TBox axioms φ
  Output: TBox T − φ
  1 if φ is a tautology or T ⊭ φ then
  2   return T − φ := T;
  3 τ := PickCounterModel(φ);
  4 foreach ψ ∈ T do
  5   if τ ⊭_t ψ then
  6     T := T \ {ψ};
  7 return T − φ := T;
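The contraction loop above can be sketched in Python. This is a minimal illustration only, not the authors' implementation: the helper names (`entails`, `is_tautology`, `pick_counter_model`, `type_satisfies`) are hypothetical placeholders for the paper's DL-Lite reasoning primitives, and a TBox is modelled as a plain set of axioms.

```python
def t_cont(tbox, phi, *, entails, is_tautology,
           pick_counter_model, type_satisfies):
    """Sketch of TCont: contract the axiom conjunction phi from tbox.

    Assumed (hypothetical) primitives:
      entails(T, phi)          -- T |= phi under DL-Lite semantics
      is_tautology(phi)        -- phi holds in every model
      pick_counter_model(phi)  -- a type model tau with tau not |=_t phi
      type_satisfies(tau, psi) -- tau |=_t psi under type semantics
    """
    # Lines 1-2: nothing to contract if phi is a tautology
    # or is not entailed by T in the first place.
    if is_tautology(phi) or not entails(tbox, phi):
        return set(tbox)
    # Line 3: pick one type model that falsifies phi ...
    tau = pick_counter_model(phi)
    # Lines 4-7: ... and keep only the axioms tau satisfies,
    # so tau becomes a model of the contracted TBox.
    return {psi for psi in tbox if type_satisfies(tau, psi)}
```

Because the loop inspects each axiom once against a single counter-model, the sketch mirrors the polynomial-time behaviour the paper claims for the algorithm.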
Open Source Code | No | The paper describes algorithms and their theoretical properties (e.g., polynomial time complexity), but it does not link to any code repository, make any statement about code release, or mention code in supplementary materials.
Open Datasets | No | The paper does not use any publicly available or open datasets for empirical evaluation. It mentions abstract knowledge bases and theoretical constructs (e.g., 'DL-Lite KBs'). While Example 1 refers to a 'fragment of (slightly modified) NCI KB', this is an illustrative example within a theoretical context, not an accessible dataset for experimental validation.
Dataset Splits | No | The paper does not conduct experiments on datasets; therefore, it provides no information about dataset splits for training, validation, or testing.
Hardware Specification | No | The paper discusses the theoretical aspects of DL-Lite contraction and revision, including algorithms and their polynomial time complexity. However, it does not mention any specific hardware (GPU models, CPU types, etc.) used for running or evaluating these algorithms or any experiments.
Software Dependencies | No | The paper discusses formalisms like DL-Lite and ALC, and mentions existing solvers or tools in related work (e.g., CPLEX, Gecode, Choco). However, it does not specify any particular software libraries, platforms, or tools with version numbers that the authors used for their own work or implementation.
Experiment Setup | No | The paper focuses on theoretical contributions, including the definition of type semantics, representation theorems, and algorithms. It does not describe any empirical experiments, and therefore no experimental setup details such as hyperparameters, model initialization, or training schedules are provided.