10:45 | Exact Learning of Lightweight Description Logic Ontologies SPEAKER: unknown ABSTRACT. We study polynomial time learning of description logic TBoxes in Angluin et al.'s framework of exact learning via queries. We admit entailment queries ("is a given subsumption entailed by the target TBox?") and equivalence queries ("is a given TBox equivalent to the target TBox?"), assuming that the signature and logic of the target TBox are known. We present four main results: (1) TBoxes formulated in DL-Lite with role inclusions and ELI concepts on the right-hand side of concept inclusions can be learned in polynomial time. (2) Neither general nor acyclic EL TBoxes can be learned in polynomial time. (3) EL TBoxes with only concept names on the right-hand side of concept inclusions can be learned in polynomial time. It follows, in particular, that non-polynomial time learnability of EL TBoxes is caused by the interaction between existential restrictions on the right- and left-hand sides of concept inclusions. (4) Finally, we show that neither entailment nor equivalence queries alone are sufficient for learning TBoxes in polynomial time for any of the description logics in (1)-(3). |
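[Editor's illustration] To make the query protocol concrete, the following is a sketch of exact learning over a finite pool of candidate concept inclusions; `oracle.entails` and `oracle.equivalent` are hypothetical stand-ins for entailment and equivalence queries, and this brute-force loop is not the authors' polynomial-time algorithm.

```python
# Editor's sketch, not the paper's algorithm. `oracle.entails(c)` models an
# entailment query ("is inclusion c entailed by the target TBox?");
# `oracle.equivalent(H)` models an equivalence query, returning (True, None)
# or (False, counterexample). The paper's results concern doing this in
# polynomial time for specific DLs; this only shows the shape of the protocol.

def exact_learn(oracle, candidates):
    hypothesis = {c for c in candidates if oracle.entails(c)}
    ok, counterexample = oracle.equivalent(hypothesis)
    if not ok:
        raise ValueError(f"candidate pool too coarse: {counterexample}")
    return hypothesis
```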
11:15 | Finite Model Reasoning in Horn Description Logics SPEAKER: Yazmin Ibanez-Garcia ABSTRACT. We study finite model reasoning in the popular expressive Horn description logics (DLs) Horn-SHIF and Horn-SHIQ, starting with a reduction of finite ABox consistency to unrestricted ABox consistency. The reduction relies on reversing certain cycles in the TBox, an approach that originated in database theory, was later adapted to the inexpressive DL DL-Lite_F, and is shown here to extend all the way to Horn-SHIQ. The model construction used to establish correctness sheds much more light on the structure of finite models than existing approaches to finite model reasoning in (non-Horn) DLs which rely on solving systems of inequations over the integers. Since the reduction incurs an exponential blow-up, we then devise a dedicated consequence-based algorithm for finite ABox consistency in Horn-SHIQ that implements the reduction on-the-fly rather than executing it up-front. The algorithm has optimal worst-case complexity and provides a promising foundation for implementations. Finally, we show that our approach can be adapted to finite (positive existential) query answering relative to Horn-SHIF TBoxes, which allows us to prove that this problem is ExpTime-complete in combined complexity and PTime-complete in data complexity. |
11:45 | Lightweight Description Logics and Branching Time: a Troublesome Marriage SPEAKER: Thomas Schneider ABSTRACT. We study branching-time temporal description logics (BTDLs) based on the temporal logic CTL in the presence of rigid (time-invariant) roles and general TBoxes. There is evidence that, if full CTL is combined with the classical ALC in this way, then reasoning becomes undecidable. In this paper, we begin by substantiating this claim, establishing undecidability for a fragment of this combination. In view of this negative result, we then investigate BTDLs that emerge from combining fragments of CTL with lightweight DLs from the EL and DL-Lite families. We show that even rather inexpressive BTDLs based on EL exhibit very high complexity. Most notably, we identify two convex fragments which are undecidable and hard for non-elementary time, respectively. For BTDLs based on DL-Lite, we obtain tight complexity bounds that range from PSPACE to EXPTIME. |
10:45 | Belief Change and Base Dependence SPEAKER: Mehrdad Oveisi ABSTRACT. A strong intuition for AGM belief change operations, Gärdenfors suggests, is that formulas that are independent of a change should remain intact. Based on this intuition, Fariñas and Herzig axiomatize a dependence relation w.r.t. a belief set, and formalize the connection between dependence and belief change. In this paper, we introduce base dependence as a relation between formulas w.r.t. a belief base. After an axiomatization of base dependence, we formalize the connection between base dependence and a particular belief base change operation, saturated kernel contraction. Moreover, we prove that base dependence is a reversible generalization of Fariñas and Herzig’s dependence. That is, in the special case when the underlying belief base is deductively closed (i.e., it is a belief set), base dependence reduces to dependence. Finally, an intriguing feature of Fariñas and Herzig’s formalism is that it meets other criteria for dependence, namely, Keynes’ conjunction criterion for dependence (CCD) and Gärdenfors’ conjunction criterion for independence (CCI). We show that our base dependence formalism also meets these criteria. More interestingly, we offer a more specific criterion that implies both CCD and CCI, and show our base dependence formalism also meets this new criterion. |
11:15 | Justified Beliefs by Justified Arguments SPEAKER: unknown ABSTRACT. The paper addresses how the information state of an agent relates to the arguments that the agent endorses. Information states are modeled in doxastic logic and arguments by recasting abstract argumentation theory in a modal logic format. The two perspectives are combined by an application of the theory of product logics, delivering sound and complete systems in which the interaction of arguments and beliefs is investigated. |
11:45 | Belief Change Operations: A Short History of Nearly Everything, Told in Dynamic Logic of Propositional Assignments SPEAKER: Andreas Herzig ABSTRACT. We examine several belief change operations in the light of Dynamic Logic of Propositional Assignments DL-PA. We show that we can encode in a systematic way update operations (such as Winslett's 'Possible Models Approach') and revision operations (such as Dalal's) as particular DL-PA programs. Since every DL-PA formula is equivalent to a boolean formula, one obtains syntactical counterparts for all these belief change operations. |
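[Editor's illustration] The following sketch computes Winslett's Possible Models Approach by plain enumeration over valuations, to make the update operation being encoded concrete; the paper's actual contribution, expressing it as a DL-PA program, is not reproduced here. Representing formulas as Python predicates over valuations is an illustration-only convention.

```python
from itertools import product

# Editor's sketch of Winslett's Possible Models Approach (PMA), computed by
# enumeration. Formulas are Python predicates over valuations (dicts from
# atom to bool) -- an illustration-only convention.

def models(formula, atoms):
    return [dict(zip(atoms, bits))
            for bits in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, bits)))]

def diff(v, w):
    return frozenset(a for a in v if v[a] != w[a])

def pma_update(old, new, atoms):
    """Models of the PMA update: for each model m of `old`, keep those
    models of `new` whose set of atoms changed w.r.t. m is subset-minimal."""
    new_models = models(new, atoms)
    result = []
    for m in models(old, atoms):
        for w in new_models:
            d = diff(m, w)
            if not any(diff(m, u) < d for u in new_models) and w not in result:
                result.append(w)
    return result

# Example: updating (p and q) by (not p or not q) yields exactly the two
# models that each flip a single atom, as the PMA prescribes.
print(pma_update(lambda v: v["p"] and v["q"],
                 lambda v: not v["p"] or not v["q"], ["p", "q"]))
```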
12:15 | On Redundant Topological Constraints (Extended Abstract) SPEAKER: unknown ABSTRACT. The Region Connection Calculus (RCC) is a well-known calculus for representing part-whole and topological relations. It plays an important role in qualitative spatial reasoning, geographical information science, and ontology. The computational complexity of reasoning with RCC has been investigated in depth in the literature. Most of these works focus on the consistency of RCC constraint networks. In this paper, we consider the important problem of redundant RCC constraints. For a set Γ of RCC constraints, we say a constraint (x R y) in Γ is redundant if it can be entailed by the rest of Γ. A prime network of Γ is a subset of Γ which contains no redundant constraints but has the same solution set as Γ. It is natural to ask how to compute a prime network, and when it is unique. In this paper, we show that this problem is in general coNP-hard, but becomes tractable if Γ is over a tractable subclass of RCC. If S is a tractable subclass in which weak composition distributes over non-empty intersections, then we can show that Γ has a unique prime network, which is obtained by removing all redundant constraints from Γ. As a byproduct, we identify a sufficient condition for a path-consistent network to be minimal. |
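[Editor's illustration] The redundancy-elimination loop behind prime networks has a simple generic shape, sketched below with a hypothetical oracle `entails(rest, c)` deciding whether the rest of the network entails constraint c; for RCC, deciding that entailment is where the paper's complexity results bite.

```python
# Editor's sketch. Removing redundant constraints one at a time yields an
# irredundant subnetwork with the same solution set; the paper shows this is
# the *unique* prime network when weak composition distributes over
# non-empty intersections. `entails` is an assumed oracle, not a real API.

def prime_network(constraints, entails):
    network = set(constraints)
    for c in sorted(network, key=repr):   # fixed order, for determinism
        rest = network - {c}
        if entails(rest, c):
            network = rest
    return network
```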
12:20 | Knowledge Maps of Web Graphs SPEAKER: unknown ABSTRACT. In this paper we investigate the problem of building knowledge maps of graph-like information, motivated by representing knowledge on the Web. We introduce a mathematical formalism that captures the notion of map of a graph and enables its development and manipulation in a semi-automated way. We present an implementation of our formalism over the Web of Linked Data graph and discuss algorithms to efficiently generate and combine (via an algebra) regions and maps. We present the MaGE tool and discuss some examples of knowledge maps. |
12:25 | On the Progression of Knowledge in Multiagent Systems SPEAKER: Vaishak Belle ABSTRACT. In a seminal paper, Lin and Reiter introduced the progression of basic action theories in the situation calculus. In this paper, we study the progression of knowledge in multiagent settings, where after actions, an agent updates his beliefs but also updates what he believes other agents know given what has occurred. By appealing to the notion of only knowing, we are able to avoid limitations of earlier work on multiagent progression, and obtain a new general account: we show that after an action, knowledge bases are updated in a Lin and Reiter fashion at every nesting of modalities. Consequently, recent results on the first-order definability of progression carry over to a multiagent setting without too much effort. |
12:30 | Heuristic Guided Optimization for Propositional Planning SPEAKER: unknown ABSTRACT. Planning as Satisfiability is an important approach to Propositional Planning. A serious drawback of the method is its limited scalability, as the instances that arise from large planning problems are often too hard for modern SAT solvers. This work tackles this problem by combining two powerful techniques that aim at decomposing a planning problem into smaller sub-problems, so that the satisfiability instances that need to be solved do not grow prohibitively large. The first technique, incremental goal achievement, turns planning into a series of boolean optimization problems, each seeking to maximize the number of goals that are achieved within a limited planning horizon. This is coupled with a second technique, called heuristic guidance, that directs search towards a state that satisfies all goals. Heuristic guidance is based on the combination of a number of constraint relaxation techniques of varying strength. Initial experiments with a system that implements these ideas demonstrate that enriching propositional satisfiability based planning with these methods delivers a competitive planning algorithm that is capable of generating plans of good quality for challenging problems in different domains. |
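[Editor's illustration] A sketch of the incremental-goal-achievement loop described in the abstract, against a hypothetical solver interface; `solve_bounded` and the tuple it returns are illustration-only assumptions, not the system's API.

```python
# Editor's sketch. `solve_bounded(state, goals)` is assumed to solve one
# bounded boolean optimization problem: from `state`, within the fixed
# horizon, maximize the number of goals achieved, returning
# (achieved_goals, actions, next_state) or None if unsatisfiable.

def incremental_goal_achievement(initial_state, goals, solve_bounded):
    state, plan, remaining = initial_state, [], set(goals)
    while remaining:
        result = solve_bounded(state, remaining)
        if result is None:
            return None                  # bounded subproblem unsolvable
        achieved, actions, state = result
        if not achieved:
            return None                  # no progress at this horizon
        plan.extend(actions)
        remaining -= set(achieved)
    return plan
```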
12:35 | Action Theories over Generalized Databases with Equality Constraints (Extended Abstract) SPEAKER: unknown ABSTRACT. Situation calculus action theories allow full first-order expressiveness but, typically, restricted cases are studied such that projection or progression become decidable or first-order, respectively, and computationally feasible. In this work we focus on KBs that are specified as generalized databases with equality constraints, and are thus able to finitely represent complete information over a possibly infinite number of objects. First, we show that this form characterizes the class of definitional KBs, and we provide a transformation for putting KBs in this form, which we call a generalized fluent DB. Then we show that for action theories over such KBs, the KBs are closed under progression, and we discuss how this view exposes some differences with existing progression methods compared to DB update. We also look into the temporal projection problem and show how queries over these theories can be decided based on an induced transition system and evaluation of local conditions over states. In particular, we look into a wide class of generalized projection queries that include quantification over situations and prove that answering them is decidable under a practical restriction. The proposed action theories are to date the most expressive ones for which there are known decidable methods for computing both progression and generalized projection. |
12:40 | Representing and Reasoning About Time Travel Narratives: Foundational Concepts SPEAKER: Leora Morgenstern ABSTRACT. The paper develops a branching-time ontology that maintains the classical restriction of forward movement through a temporal tree structure, but permits the representation of paths in which one can perform inferences about time-travel scenarios. Central to the ontology is the notion of an agent embodiment whose beliefs are equivalent to those of an agent who has time-traveled from the future. We discuss the formalization of several example scenarios, define what it means for such scenarios to be motivated with respect to an agent embodiment, and relate the underlying structure to a more general theory of (non-time-travel) narrative. |
12:45 | Canonical Logic Programs are Succinctly Incomparable with Propositional Formulas SPEAKER: unknown ABSTRACT. Canonical (logic) programs (CP) refer to normal programs (LP) augmented with the connective "not not". In this paper we address the question of whether CP are succinctly incomparable with propositional formulas (PF). Our main result shows that the PARITY problem, which can be polynomially represented in PF, only has exponential representations in CP. In other words, PARITY separates PF from CP. Simply speaking, this means that exponential size blowup is generally inevitable when translating a set of formulas in PF into an equivalent program in CP (without introducing new variables). Furthermore, since it has been shown by Lifschitz and Razborov that there is also a problem which separates CP from PF (assuming P ⊈ NC^1/poly), it follows that the two formalisms are indeed succinctly incomparable. In addition, we show that PARITY separates logic programs with cardinality constraints and choice rules (CC) from CP. Moreover, assuming P ⊈ NC^1/poly, CP and definite causal theories (DT) are succinctly incomparable, and two-valued programs (TV) are strictly more succinct than both CP and DT. |
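[Editor's note] The polynomial-size PF representation of PARITY that the separation relies on is the standard divide-and-conquer formula: expanding each XOR uses each half twice, so the size obeys S(n) = 4S(n/2) + O(1), i.e. S(n) = O(n^2).

```latex
x \oplus y \;\equiv\; (x \land \lnot y) \lor (\lnot x \land y), \qquad
\mathrm{PARITY}(x_1,\dots,x_n) \;=\;
\mathrm{PARITY}\bigl(x_1,\dots,x_{\lceil n/2\rceil}\bigr) \oplus
\mathrm{PARITY}\bigl(x_{\lceil n/2\rceil+1},\dots,x_n\bigr)
```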
12:50 | Using Answer Set Programming for Solving Boolean Games SPEAKER: Sofie De Clercq ABSTRACT. Boolean games are a framework for reasoning about the rational behaviour of agents whose goals are formalised using propositional formulas. They offer an attractive alternative to normal-form games, because they allow for a more intuitive and more compact encoding. Unfortunately, however, there is currently no general, tailor-made method available to compute the equilibria of Boolean games. In this paper, we introduce a method for finding the pure Nash equilibria based on disjunctive answer set programming. Our method is furthermore capable of finding the core elements and the Pareto optimal equilibria, and can easily be modified to support other forms of optimality, thanks to the declarative nature of disjunctive answer set programming. Experimental results clearly demonstrate the effectiveness of the proposed method. |
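[Editor's illustration] A brute-force sketch of the objects being computed (pure Nash equilibria of a Boolean game), not the paper's ASP encoding: each agent controls some variables and wants its goal formula true, and a profile is an equilibrium if no agent whose goal is unsatisfied can satisfy it by changing only its own variables.

```python
from itertools import product

# Editor's sketch, NOT the paper's ASP method. Goals are Python predicates
# over the joint assignment (an illustration-only convention); `controls`
# maps each agent to the variables it controls.

def pure_nash_equilibria(controls, goals, variables):
    def deviations(assign, agent):
        own = list(controls[agent])
        for bits in product([False, True], repeat=len(own)):
            yield {**assign, **dict(zip(own, bits))}

    equilibria = []
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        # An agent is content if its goal holds or no own-variable
        # deviation would make it hold (binary utilities).
        if all(goals[a](assign)
               or not any(goals[a](d) for d in deviations(assign, a))
               for a in goals):
            equilibria.append(assign)
    return equilibria
```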
12:55 | ASP Encodings of Acyclicity Properties SPEAKER: unknown ABSTRACT. Many knowledge representation tasks involve trees or similar structures as abstract datatypes. However, devising compact and efficient declarative representations of such properties is non-obvious and can be challenging indeed. In this paper, we take acyclicity properties into consideration and investigate logic-based approaches to encode them. We use answer set programming as the primary representation language but also consider mappings to related formalisms, such as propositional logic, difference logic, and linear programming. |
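[Editor's note] As one example of the mappings mentioned above, the standard difference-logic style encoding of acyclicity asserts an ordering witness per vertex, since a finite directed graph is acyclic exactly when its vertices can be numbered so that every edge increases the number:

```latex
(V,E)\ \text{is acyclic} \iff
\exists\, \ell : V \to \mathbb{Z}\ \text{with}\ \ell(u) < \ell(v)\ \text{for every}\ (u,v) \in E
```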
13:00 | Stable Models of Multi-Valued Formulas: Partial vs. Total Functions SPEAKER: Joohyung Lee ABSTRACT. Recent extensions of the stable model semantics that allow "intensional" functions--functions that can be specified by logic programs using other functions and predicates--can be divided into two groups. One group defines a stable model in terms of minimality on the values of partial functions, and the other defines it in terms of uniqueness on the values of total functions. We show that, in the context of multi-valued formulas, these two different approaches can be reduced to each other, and further, each of them can be viewed in terms of propositional formulas under the stable model semantics. Based on these results, we present a prototype implementation of different versions of functional stable model semantics by using existing answer set solvers. |
12:15 | First-Order Default Logic Revisited SPEAKER: Yi Zhou ABSTRACT. It is well known that Reiter's original proposal for default logic in the first-order case is problematic because of Skolemization. This paper reconsiders this long-standing open problem, and proposes a new world view semantics for first-order default logic. Roughly speaking, a world view of a first-order default theory is a maximal collection of structures satisfying the default theory, where the default part is fixed by the world view itself. We show how this semantics generalizes propositional/closed default logic, classical first-order logic and first-order answer set programming, and we argue that first-order default logic under the world view semantics provides a rich framework for integrating classical logic and rule-based formalisms in the first-order case. |
12:20 | Strong Equivalence of Non-monotonic Temporal Theories SPEAKER: Martín Diéguez ABSTRACT. Temporal Equilibrium Logic (TEL) is a non-monotonic temporal logic that extends Answer Set Programming (ASP) by introducing modal operators like those considered in Linear-time Temporal Logic (LTL). TEL allows proving temporal properties of ASP-like scenarios under the hypothesis of infinite time while keeping decidability. Formally, it extends Equilibrium Logic (the best-known logical formalisation of ASP) and, like the latter, it selects some models of a (temporal) monotonic basis: the logic of Temporal Here-and-There (THT). In this paper we solve a problem that remained unanswered for the last six years: we prove that equivalence in the logic of THT is not only a sufficient, but also a necessary condition for strong equivalence of two TEL theories. This result has both theoretical and practical consequences. First, it reinforces the need for THT as a suitable monotonic basis for TEL. Second, it has allowed the construction of a tool, ABSTEM, which can be used to check different types of equivalence between two arbitrary temporal theories. More importantly, when the theories are not THT-equivalent, the system provides a context theory that makes them behave differently, together with a Büchi automaton showing the temporal stable models that arise from that difference. |
12:25 | Belief Revision in the Propositional Closure of a Qualitative Algebra SPEAKER: unknown ABSTRACT. Belief revision is an operation that aims at modifying old beliefs so that they become consistent with new ones. The issue of belief revision has been studied in various formalisms, in particular, in qualitative algebras (QAs) in which the result is a disjunction of belief bases that is not necessarily representable in a QA. This motivates the study of belief revision in formalisms extending QAs, namely, their propositional closures: in such a closure, the result of belief revision belongs to the formalism. Moreover, this makes it possible to define a contraction operator thanks to the Harper identity. Belief revision in the propositional closure of QAs is studied, an algorithm for a family of revision operators is designed, and an open-source implementation is made freely available on the web. |
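[Editor's note] For reference, the Harper identity mentioned above is what makes a contraction operator available once revision is defined in the closure; it defines contraction from revision:

```latex
K \mathbin{\dot{-}} \varphi \;=\; K \cap (K \ast \lnot\varphi)
```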
12:30 | Minimal Change in AGM Revision for Non-classical Logics SPEAKER: unknown ABSTRACT. In this paper, we address the problem of applying AGM-style belief revision to non-classical logics. We discuss the idea of minimal change in revision and show that for non-classical logics, some sort of minimality postulate has to be explicitly introduced. We also present two constructions for revision which satisfy the AGM postulates and prove the representation theorems including minimality postulates. For each result, we point out the class of logics for which the theorem holds, giving some examples at the end. |
12:35 | Toward a Knowledge Level Analysis of Forgetting SPEAKER: James Delgrande ABSTRACT. Forgetting has been addressed in different areas of Knowledge Representation, including classical logic, logic programming, modal logic, and description logics. In this paper, we present an account of forgetting as an abstract belief change operator, independent of an underlying logic. We argue that forgetting amounts to a reduction in the language, specifically the signature, of a logic. The central definition is simple: the result of forgetting a signature σ in a theory is the set of logical consequences over the reduced language. This definition offers various advantages. It provides a uniform approach to forgetting, with a single definition applicable to any logic with a well-defined consequence relation. The results obtained are immediately applicable to all subsumed formal systems, and in fact many results are obtained much more straightforwardly. We argue that the perspective provided by the approach leads to simpler approaches to computing forgetting in specific logics. This view also leads to insights with respect to specific logics: forgetting in first-order logic is somewhat different from the accepted approach; the definition relativised to logic programs yields a new syntax-independent notion of forgetting; in modal logic the specification is simpler. Moreover, the obtained perspective clarifies the relation between forgetting and the belief change operation of contraction. |
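[Editor's note] Written out, with Cn the consequence operator and L_Σ the language over signature Σ (notation chosen here for illustration), the central definition reads:

```latex
\mathrm{forget}(T,\sigma) \;=\; \mathrm{Cn}(T) \cap \mathcal{L}_{\Sigma \setminus \sigma}
```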
12:40 | An Abductive Reasoning Approach to the Belief-Bias Effect SPEAKER: Emmanuelle-Anna Dietz ABSTRACT. The tendency to accept or reject arguments based on one's own beliefs or prior knowledge rather than on the reasoning process is called the belief bias. This phenomenon shows up in a psychological syllogistic reasoning task in which participants are asked whether they accept or reject a given syllogism. By introducing abnormalities, abduction and background knowledge, we model this task under the weak completion semantics. Our formalization reveals new questions about observations and their explanations, which might include some relevant prior abductive contextual information concerning some side-effect. Inspection points, introduced by Pereira and Pinto, allow us to express these definitions syntactically and intertwine them into an operational semantics. |
12:45 | Tracking Beliefs and Intentions in the Werewolf Game SPEAKER: Codruta Girlea ABSTRACT. We propose a formalization framework for modeling how beliefs and intentions change over the course of a dialogue, in the case where the decisions taken during the dialogue affect the possibly conflicting goals of the agents involved. We illustrate our formalization with the game of Werewolf, as an example of such a domain. We use Situation Calculus to model the evolution of the world and an observation model to analyze the evolution of intentions and beliefs. In our formalization, utterances, which change only beliefs and intentions, are observations. Beliefs and intentions are modeled with Kripke-style accessibility relation predicates. We illustrate our model on instances of the game. |
12:50 | Axioms .2 and .4 as Interaction Axioms SPEAKER: Guillaume Aucher ABSTRACT. In epistemic logic, some axioms dealing with the notion of knowledge are rather convoluted and it is difficult to give them an intuitive interpretation, even if some of them, like .2 and .4, are considered by some epistemic logicians to be key axioms. We show that they can be characterized in terms of understandable interaction axioms relating knowledge and belief. In order to show it, we first present a theory dealing with the characterization of axioms in terms of interaction axioms in modal logic. We then apply the main results and methods of this theory to obtain our results related to epistemic logic. |
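[Editor's note] In their usual modal formulation these axioms read as follows (the formulation of .4 varies slightly across presentations):

```latex
\text{(.2)}\quad \Diamond\Box\varphi \to \Box\Diamond\varphi
\qquad
\text{(.4)}\quad (\varphi \land \Diamond\Box\varphi) \to \Box\varphi
```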
12:55 | Aggregative Deontic Detachment for Normative Reasoning SPEAKER: unknown ABSTRACT. In this paper, we provide foundations for deontic logic by defining an undemanding semantics based on the sole notion of detachment. We introduce System O, which handles iteration of successive detachments in a more principled manner than the traditional systems do. This is achieved by injecting a new form of deontic detachment, called aggregative deontic detachment, which allows us to keep track of previously detached obligations. Soundness and completeness of the proof system are established, and properties of the logic are discussed. |
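[Editor's note] Roughly, with ○(b | a) read as "b is obligatory given a", aggregative deontic detachment strengthens the conclusion of plain deontic detachment by carrying the antecedent along, which is how previously detached obligations are kept track of:

```latex
\text{(DD)}\quad \frac{\bigcirc(b \mid a) \qquad \bigcirc a}{\bigcirc b}
\qquad\qquad
\text{(ADD)}\quad \frac{\bigcirc(b \mid a) \qquad \bigcirc a}{\bigcirc(a \land b)}
```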
14:30 | Skolemization for Weighted First-Order Model Counting SPEAKER: unknown ABSTRACT. First-order model counting recently emerged as a novel reasoning task, at the core of efficient algorithms for probabilistic logics. We present a Skolemization algorithm for model counting problems that eliminates existential quantifiers from a first-order logic theory without changing its weighted model count. For subsets of first-order logic, lifted model counters were shown to run in time polynomial in the number of objects in the domain of discourse, where propositional model counters require exponential time. However, these guarantees only apply to Skolem normal form theories, and the presence of existential quantifiers reduces lifted model counters to propositional ones. Since textbook Skolemization is not sound for model counting, these restrictions precluded efficient model counting for directed models such as probabilistic logic programs. Our Skolemization procedure extends the applicability of first-order model counters to these representations. Moreover, it simplifies the design of lifted model counting algorithms. |
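[Editor's note] The invariant the procedure preserves is the weighted first-order model count; in the standard notation (chosen here for illustration), w and w̄ assign weights to predicates, applied per true and false ground atom of each interpretation ω over the finite domain:

```latex
\mathrm{WFOMC}(\Delta, w, \bar{w}) \;=\;
\sum_{\omega \,\models\, \Delta}\;
\prod_{a:\ \omega \,\models\, a} w\bigl(\mathrm{pred}(a)\bigr)
\prod_{a:\ \omega \,\not\models\, a} \bar{w}\bigl(\mathrm{pred}(a)\bigr)
```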
15:00 | Analyzing the Computational Complexity of Abstract Dialectical Frameworks via Approximation Fixpoint Theory SPEAKER: Johannes Peter Wallner ABSTRACT. Abstract dialectical frameworks (ADFs) have recently been proposed as a versatile generalization of Dung's abstract argumentation frameworks (AFs). In this paper, we present a comprehensive analysis of the computational complexity of ADFs. Our results show that while ADFs are one level up in the polynomial hierarchy compared to AFs, there is a useful subclass of ADFs which is as complex as AFs while arguably offering more modeling capacities. As a technical vehicle, we employ the approximation fixpoint theory of Denecker, Marek and Truszczyński, thus showing that it is also a useful tool for complexity analysis of operator-based semantics. |
15:30 | The Parameterized Complexity of Reasoning Problems Beyond NP SPEAKER: unknown ABSTRACT. Today's propositional satisfiability (SAT) solvers are extremely powerful and can be used as an efficient back-end for NP-complete problems. However, many fundamental problems in knowledge representation and reasoning are located at the second level of the Polynomial Hierarchy or even higher, and hence polynomial-time transformations to SAT are not possible, unless the hierarchy collapses. Recent research shows that in certain cases one can break through these complexity barriers by fixed-parameter tractable (fpt) reductions which exploit structural aspects of problem instances in terms of problem parameters. In this paper we develop a general theoretical framework that supports the classification of parameterized problems on whether they admit such an fpt-reduction to SAT or not. We instantiate our theory by classifying the complexities of several case study problems, with respect to various natural parameters. These case studies include the consistency problem for disjunctive answer set programming and a robust version of constraint satisfaction. |
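[Editor's note] For reference, the standard notion used here: a (many-one) fpt-reduction from a parameterized problem (Q, κ) to (Q′, κ′) is a mapping x ↦ x′ that preserves membership, keeps the parameter bounded, and runs in fpt time, for some computable f and g:

```latex
x \in Q \iff x' \in Q', \qquad
\kappa'(x') \le g(\kappa(x)), \qquad
\text{computable in time } f(\kappa(x)) \cdot |x|^{O(1)}
```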
08:45 | VSL Keynote Talk: Verification of Computer Systems with Model Checking SPEAKER: Edmund Clarke ABSTRACT. Model Checking is an automatic verification technique for large state transition systems. The technique has been used successfully to debug complex computer hardware and communication protocols. Now, it is beginning to be used for complex hybrid (continuous/discrete) systems as well. The major disadvantage of the technique is a phenomenon called the State Explosion Problem. This problem is impossible to avoid in the worst case. However, by using sophisticated data structures and clever search algorithms, it is now possible to verify hybrid systems with astronomical numbers of states. |