09:00 | Rectifying Classifiers ABSTRACT. Dealing with high-risk or safety-critical applications calls for the development of trustworthy AI systems. Beyond prediction, such systems must offer a number of additional facilities, including explanation and verification. The case when the prediction made is deemed wrong by an expert calls for still another operation, called rectification. Rectifying a classifier aims to guarantee that the predictions made by the classifier (once rectified) comply with the expert knowledge. Here, the expert is assumed to be more reliable than the predictor, but their knowledge is typically incomplete. Focusing on Boolean classifiers, I will present rectification as a change operation. Following an axiomatic approach, I will give some postulates that must be satisfied by rectification operators. I will show that the family of rectification operators is disjoint from the family of revision operators and from the family of update operators. I will also present a few results about the computation of a rectification operation. |
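As a schematic illustration of the compliance requirement (our toy example, not the paper's specific operators): take a Boolean classifier f over features x1, x2 with f(x1, x2) = x1, and expert knowledge T stating that every instance with x2 = 1 must be classified negatively. Then f violates T on the instance (1, 1), and a classifier that complies with T while changing f as little as possible is f'(x1, x2) = x1 ∧ ¬x2; the postulates make such repairs principled rather than ad hoc.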
10:00 | PRESENTER: Sophie Tourret ABSTRACT. Abduction is the task of finding possible extensions of a given knowledge base that would make a given sentence logically entailed. As such, it can be used to explain why the sentence does not follow, to repair incomplete knowledge bases, and to provide possible explanations for unexpected observations. We consider TBox abduction in the lightweight description logic EL, where the observation is a concept inclusion and the background knowledge is a description logic TBox. To avoid useless answers, such problems usually come with further restrictions on the solution space and/or minimality criteria that help separate the wheat from the chaff. We argue that existing minimality notions are insufficient, and introduce connection minimality. This criterion rejects hypotheses that use concept inclusions “disconnected” from the problem at hand. We show how to compute a special class of connection-minimal hypotheses in a sound and complete way. Our technique is based on a translation to first-order logic, and constructs hypotheses based on prime implicates. We evaluate a prototype implementation of our approach on ontologies from the medical domain. |
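A minimal illustration of the task itself (our toy example, not from the paper): given the background TBox T = {A ⊑ B} and the observation A ⊑ C, the observation is not entailed by T alone, but it becomes entailed after adding the hypothesis H = {B ⊑ C}. The trivial hypothesis {A ⊑ C} always works too, which is why such problems come paired with minimality criteria; connection minimality additionally demands that the inclusions used by a hypothesis be connected to the problem at hand.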
11:00 | PRESENTER: Faiq Miftakhul Falakh ABSTRACT. While semantic approaches for revising knowledge bases are fine-grained and independent of syntactic forms, they cannot be straightforwardly applied under the standard semantics of DLs. In this paper, we present a characterization of KB revision in DLs under the fixed-domain semantics, where the domain is fixed and finite to accommodate the use of knowledge bases adopting a closed-world assumption. We also introduce an instantiation of a model-based revision operator which satisfies all standard postulates, using a notion of distance between interpretations. The model set of the revision result is shown to be expressible as a KB in our setting. In addition, by weakening the KB based on certain domain elements, an individual-based revision operator is provided as an alternative approach. |
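As a propositional analogue of the distance idea (our illustration, not the paper's DL construction): over the atoms {p, q}, a KB equivalent to p ∧ q has the single model {p, q}; revising by ¬p selects, among the models {} and {q} of ¬p, the one closest to {p, q} under symmetric-difference distance, namely {q}, so the revised KB is expressible as ¬p ∧ q. The fixed, finite domain plays the role that the fixed set of atoms plays here: it makes the interpretations enumerable and their distances comparable.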
11:25 | Repairing Ontologies via Kernel Pseudo-Contraction PRESENTER: Vinícius Bitencourt Matos ABSTRACT. Rational agents must have some internal representation of their knowledge or belief system. Belief Revision is a research area that aims at understanding how they should change their representations when they are faced with new information. In a contraction operation, a sentence is removed from a knowledge base and must not be logically entailed by the resulting set. Pseudo-contraction was proposed by Hansson as an alternative to base contraction where some degree of syntax independence is allowed. In this work, we analyse kernel constructions for pseudo-contraction operations and their formal properties. Also, we show the close relationship between concepts and definitions of Belief Revision and Ontology Repair (such as pseudo-contractions and gentle repairs, respectively). |
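A standard illustration of the difference (not specific to this paper): when contracting the base {p ∧ q} by p, the only p-kernel is {p ∧ q}, so any incision function removes it and base contraction returns the empty set; a pseudo-contraction may instead retain the weaker consequence q, which does not re-entail p, thereby preserving more information thanks to its degree of syntax independence.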
11:50 | PRESENTER: Federica Di Stefano ABSTRACT. Circumscription is one of the major approaches to bringing non-monotonic (common-sense) reasoning features to first-order logic and related formalisms. In a nutshell, when computing logical entailments from circumscribed first-order theories, we focus on a restricted subset of the theory’s classical models, so that the less plausible models (from the common-sense perspective) are eliminated. Circumscription has been studied for knowledge bases expressed in various Description Logics (DLs), with a focus on understanding the computational complexity of reasoning. Those studies revealed that circumscription causes a dramatic increase in computational complexity in a broad range of DLs. Roughly speaking, this is due to the second-order quantification step that checks the non-existence of a “better” model, e.g., one where some specific predicate that we want to minimize has a smaller extension. In this paper, we consider a new notion of circumscription in DLs, aiming to preserve the key ideas and advantages of classical circumscription while mitigating its impact on the computational complexity of reasoning. Our main idea is to replace the second-order quantification step with a series of (pointwise) local checks on all domain elements and their immediate neighborhood. This approach provides a sound approximation of classical circumscription and is closely related to the notion of pointwise circumscription proposed by Lifschitz for first-order logic. We formalize several variants of pointwise circumscription in DLs and perform a preliminary study of computational complexity in this setting. Our main achievement is to show that, under certain syntactic restrictions, standard reasoning problems like subsumption testing or concept satisfiability for ALCIO knowledge bases with pointwise circumscription are (co)NEXPTIME-complete. |
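The classic example of the intended behaviour: from the KB {Bird(tweety), Bird ⊓ ¬Ab ⊑ Flies}, the assertion Flies(tweety) is not classically entailed, since models may place tweety in Ab; circumscribing (minimizing) Ab discards exactly those models, so Flies(tweety) follows. In the pointwise variant described above, instead of checking the non-existence of a globally better model, one checks for each domain element locally whether it could be removed from the minimized predicate.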
12:15 | PRESENTER: Martin Homola ABSTRACT. Minimal Hitting Set (MHS) is a well-known and complete method to compute all explanations of an ABox abduction problem. MHS is NP-complete and widely recognized as inefficient. MergeXplain (MXP), on the other hand, is fast, but does not guarantee to find all explanations. MHS-MXP is a hybrid algorithm which adopts the divide-and-conquer heuristic of MXP and combines it with MHS. MHS-MXP is complete and, at least on a part of the inputs, more efficient than MHS. We describe a favourable class of inputs for the hybrid algorithm, together with an experimental implementation which enables us to perform a first preliminary empirical evaluation on this class of inputs. |
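The combinatorial core that MHS contributes, enumerating all minimal hitting sets of a family of conflict sets, can be sketched in a few lines of Python (a brute-force illustration with names of our own choosing; the actual algorithms build hitting-set trees and, in MHS-MXP, interleave them with MXP's divide-and-conquer splitting):

    from itertools import combinations

    def minimal_hitting_sets(families):
        # Enumerate all minimal hitting sets of a family of sets, smallest first.
        # A hitting set is a set that intersects every member of the family.
        universe = sorted(set().union(*families))
        found = []
        for r in range(len(universe) + 1):
            for cand in combinations(universe, r):
                c = set(cand)
                # keep c only if it hits every set and contains no smaller hitting set
                if all(c & s for s in families) and not any(h <= c for h in found):
                    found.append(c)
        return found

    # e.g. minimal_hitting_sets([{1, 2}, {2, 3}]) yields [{2}, {1, 3}]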
Lunches will be held in Taub lobby (CAV, CSF) and in The Grand Water Research Institute (DL, NMR, IJCAR, ITP).
14:00 | Querying Inconsistent Prioritized Data with ORBITS: Algorithms, Implementation, and Experiments (Extended Abstract) PRESENTER: Meghyn Bienvenu ABSTRACT. We investigate practical algorithms for inconsistency-tolerant query answering over prioritized knowledge bases, which consist of a logical theory, a set of facts, and a priority relation between conflicting facts. We consider three well-known semantics (AR, IAR and brave) based upon two notions of optimal repairs (Pareto and completion). Deciding whether a query answer holds under these semantics is (co)NP-complete in data complexity for a large class of logical theories, and SAT-based procedures have been devised for repair-based semantics when there is no priority relation, or the relation has a special structure. The present paper introduces the first SAT encodings for Pareto- and completion-optimal repairs w.r.t. general priority relations and proposes several ways of employing existing and new encodings to compute answers under (optimal) repair-based semantics, by exploiting different reasoning modes of SAT solvers. The comprehensive experimental evaluation of our implementation compares both (i) the impact of adopting semantics based on different kinds of repairs, and (ii) the relative performances of alternative procedures for the same semantics. |
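For intuition, the three semantics can be prototyped by brute force. The sketch below is our own illustration (all names ours): it ignores the priority relation, so repairs are plain subset-maximal conflict-free subsets, and it enumerates them naively, which is exactly the cost the paper's SAT encodings are designed to avoid.

    from itertools import combinations

    def repairs(facts, conflicts):
        # Subset-maximal subsets of `facts` containing no conflict set.
        facts, maximal = list(facts), []
        for r in range(len(facts), -1, -1):  # largest candidates first
            for cand in combinations(facts, r):
                c = set(cand)
                if not any(k <= c for k in conflicts) and not any(c < m for m in maximal):
                    maximal.append(c)
        return maximal

    def entails(facts, rules, query):
        # Naive forward chaining; a rule is (frozenset_of_premises, conclusion).
        closure, changed = set(facts), True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= closure and conclusion not in closure:
                    closure.add(conclusion)
                    changed = True
        return query in closure

    def holds(semantics, facts, conflicts, rules, query):
        reps = repairs(facts, conflicts)
        if semantics == "brave":
            return any(entails(r, rules, query) for r in reps)
        if semantics == "AR":
            return all(entails(r, rules, query) for r in reps)
        if semantics == "IAR":
            return entails(set.intersection(*reps), rules, query)

    # With conflicting facts "a" and "b" and rules deriving "q" from either one,
    # "q" holds under AR and brave but not under IAR (the repairs' intersection is empty):
    # holds("AR", {"a", "b"}, [frozenset({"a", "b"})],
    #       [(frozenset({"a"}), "q"), (frozenset({"b"}), "q")], "q")  -> True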
14:25 | PRESENTER: Diego Calvanese ABSTRACT. We formally introduce ontology-based data federation (OBDF), to denote a framework combining ontology-based data access (OBDA) with a data federation layer, which virtually exposes multiple heterogeneous sources as a single relational database. In this setting, the SQL queries generated by the OBDA component by translating user SPARQL queries are further transformed by the data federation layer so as to be efficiently executed over the data sources. The structure of these SQL queries directly affects their execution time in the data federation layer, and their optimization is crucial for performance. We propose here novel optimizations specific to OBDF, which are based on "hints" about existing data redundancies in the sources, empty join operations, and the need for materialized views. Such hints can be systematically inferred by analyzing the OBDA mappings and ontology and exploited to simplify the query structure. We also carry out an experimental evaluation in which we show the effectiveness of our optimizations. |
14:50 | PRESENTER: Mateusz Urbańczyk ABSTRACT. A categorical approach to studying model comparison games in terms of comonads was recently initiated by Abramsky et al. In this work, we analyse games that appear naturally in the context of description logics and knowledge representation. We consider expressive sublogics of ALCIOQbSelf, namely, the logics that extend ALC with any combination of inverses, nominals, number restrictions, safe boolean role combinations and the Self operator. Our construction augments and modifies the so-called modal comonad by Abramsky and Shah. The approach that we took heavily relies on the use of relative monads, which we leverage to encapsulate additional capabilities within the bisimulation games in a compositional manner. |
15:05 | PRESENTER: Philippe Balbiani ABSTRACT. This paper is about integrating, in a single formalism, knowledge representation languages such as those provided by description logics and rule-based reasoning paradigms such as those provided by logic programming. We aim at creating a hybrid formalism where description logic constructs are used for defining concepts that are given as arguments to the predicates of the logic programs. |
16:00 | PRESENTER: Patrick Koopmann ABSTRACT. The inexpressive Description Logic (DL) FL0, which has conjunction and value restriction as its only concept constructors, had fallen into disrepute when it turned out that reasoning in FL0 w.r.t. general TBoxes is ExpTime-complete, that is, as hard as in the considerably more expressive logic ALC. In this paper, published in the journal Theory and Practice of Logic Programming, we rehabilitate FL0 by presenting a dedicated subsumption algorithm for FL0, which is much simpler than the tableau-based algorithms employed by highly optimized DL reasoners. Our experiments show that the performance of our novel algorithm, as prototypically implemented in our FL0wer reasoner, compares very well with that of the highly optimized reasoners. FL0wer can also deal with ontologies written in the extension FL⊥ of FL0 with the top and the bottom concept by employing a polynomial-time reduction, shown in this paper, which eliminates top and bottom. We also investigate the complexity of reasoning in DLs related to the Horn fragments of FL0 and FL⊥. |
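The classical intuition behind dedicated FL0 algorithms is the formal-language view of the logic (stated here for the empty TBox; with general TBoxes the languages become more involved): every FL0 concept can be normalized into a conjunction of restrictions ∀w.A, where w is a word over role names, and C ⊑ D holds iff, for each concept name A, the words that D associates with A form a subset of those that C associates with A. For example, C = ∀r.A ⊓ ∀r.∀s.A gives A the language {r, rs} and D = ∀r.A gives A the language {r}, so C ⊑ D since {r} ⊆ {r, rs}.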
16:25 | PRESENTER: Jean Christoph Jung ABSTRACT. In Angluin’s framework of exact learning, learning a query in the presence of a description logic ontology often involves, as a crucial (iterated) step, the generalization of a hypothesis query. This has been done, for example, by constructing a least general common generalization of the hypothesis and a counterexample provided by the oracle. In this research note, we observe that it is useful to instead use a more liberal construction that uses the counterexample as a guide to produce a generalization of the hypothesis, but does not necessarily generalize the counterexample. This approach allows us to learn ELI concept queries in polynomial time in the presence of DL-LiteF-Horn ontologies. |
16:50 | A new dimension to generalization: computing temporal EL concepts from positive examples (Extended Abstract) PRESENTER: Satyadharma Tirtarasa ABSTRACT. The authoring of complex concepts or (instance) queries written in a description logic (DL) can be a difficult task. An established approach to generating such concepts from positive examples is to employ the most specific concept (msc), which generalizes an ABox individual into a concept, and the least common subsumer (lcs), which generalizes a collection of concepts into a single concept. These inferences have been investigated for EL, but so far there are no methods for 2-dimensional cases such as temporalized DLs. We report in this abstract on our computation algorithms for the lcs and the msc in a temporalized DL: EL extended by the LTL operators next and global. We sketch the computation algorithms for both inferences, with and without the use of rigid symbols. |
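In plain EL, before the temporal extension, the lcs can be read off the product of the concepts' description trees; for example, the lcs of A ⊓ ∃r.(A ⊓ B) and B ⊓ ∃r.B is ∃r.B, since the root labels {A} and {B} share nothing while the r-successors share B. The msc goes the other way: an individual a with only the assertions r(a, b) and A(b) has msc ∃r.A (in general the role depth has to be bounded). The contribution of the paper is lifting such constructions to the temporalized setting.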
17:15 | PRESENTER: Laura Giordano ABSTRACT. In this extended abstract we report on an approach for reasoning about actions where domain descriptions include an EL⊥ ontology in a temporal action theory. The action theory is based on a Dynamic Linear Time Temporal Logic, and extensions are defined through temporal answer sets. The work provides conditions under which action consistency can be guaranteed with respect to an EL⊥ ontology, by polynomially encoding an EL⊥ knowledge base into the domain description of the temporal action theory. |
Pickup at 18:00 from the Technion (tour of Haifa; no food will be provided).