
09:00-10:40 Session 10: Theory & Foundations I
Location: Grand Kentucky Salon B
Infinitary Equilibrium Logic and Strong Equivalence

ABSTRACT. Strong equivalence of logic programs is an important concept in the theory of answer set programming. Equilibrium logic was used to show that propositional formulas are strongly equivalent if and only if they are equivalent in the logic of here-and-there. We extend equilibrium logic to formulas with infinitely long conjunctions and disjunctions, define and axiomatize an infinitary counterpart to the logic of here-and-there, and show that the theorem on strong equivalence holds in the infinitary case as well.
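As background, the finite case the abstract builds on can be summarized as follows (standard definitions from the here-and-there literature, not quoted from this paper):

```latex
% An HT-interpretation is a pair <H,T> of sets of atoms with H \subseteq T.
\begin{align*}
\langle H,T\rangle &\models a && \text{iff } a \in H, \text{ for an atom } a\\
\langle H,T\rangle &\models \varphi \land \psi && \text{iff } \langle H,T\rangle \models \varphi \text{ and } \langle H,T\rangle \models \psi\\
\langle H,T\rangle &\models \varphi \lor \psi && \text{iff } \langle H,T\rangle \models \varphi \text{ or } \langle H,T\rangle \models \psi\\
\langle H,T\rangle &\models \varphi \to \psi && \text{iff } T \models \varphi \to \psi \text{ classically, and}\\
& && \quad\ \langle H,T\rangle \models \varphi \text{ implies } \langle H,T\rangle \models \psi
\end{align*}
```

An interpretation ⟨T,T⟩ is an equilibrium model of a theory Γ if it satisfies Γ and no proper subset H ⊂ T gives ⟨H,T⟩ ⊨ Γ; the classical result (Lifschitz, Pearce, and Valverde) is that two theories are strongly equivalent iff they have the same HT-models. The paper lifts these definitions to infinitely long conjunctions and disjunctions.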

Characterising and Explaining Inconsistency in Logic Programs

ABSTRACT. A logic program under the answer set semantics can be inconsistent because its only answer set is the set of all literals, or because it does not have any answer sets. In both cases, the reason for the inconsistency may be (1) only explicit negation, (2) only negation as failure, or (3) the interplay between these two kinds of negation. Overall, we identify four different inconsistency cases, and show how the respective reason can be further characterised by a set of “culprits” using semantics which are “weaker” than the answer set semantics. We also provide a technique for explaining the set of culprits in terms of trees whose nodes are derivations. This can be seen as an important first step towards debugging inconsistent logic programs.

Enablers and Inhibitors in Causal Justifications of Logic Programs
SPEAKER: Pedro Cabalar

ABSTRACT. In this paper we propose an extension of logic programming (LP) where each default literal derived from the well-founded model is associated with a justification in the form of an algebraic expression. This expression contains both causal explanations (in the form of proof graphs built with rule labels) and terms under the scope of negation that stand for conditions that enable or disable the application of causal rules. Using some examples, we discuss how these new conditions, which we respectively call enablers and inhibitors, are intimately related to default negation and have an essentially different nature from regular cause-effect relations. The most important result is a formal comparison to the recent algebraic approaches for justifications in LP: Why-not Provenance (WnP) and Causal Graphs (CG). We show that the current approach extends both WnP and CG justifications under the Well-Founded Semantics and, as a byproduct, we also establish a formal relation between these two approaches.

A Formal Theory of Justifications
SPEAKER: Marc Denecker

ABSTRACT. We develop an abstract theory of justifications suitable for describing the semantics of a range of logics in knowledge representation, computational and mathematical logic. A theory or program in one of these logics induces a semantical structure called a justification frame. Such a justification frame defines a class of justifications, each of which embodies a reason why its facts are true. By defining various evaluation functions for these justifications, a range of different semantics is obtained. The theory thus provides elegant and compact formalisations of existing and new semantics in logics of various areas, revealing unexpected commonalities and interrelations. Another use of the framework is to integrate various language constructs.

11:10-12:40 Session 11: Short Papers I
Location: Grand Kentucky Salon B
Digital Forensics Evidence Analysis: An Answer Set Programming Approach for Generating Investigation Hypotheses

ABSTRACT. The results of the Evidence Analysis phase in Digital Forensics (DF) provide objective data which, however, require further elaboration by the investigators, who have to contextualize the analysis results within an investigative environment so as to provide possible hypotheses that can be proposed as proofs in court, to be evaluated by lawyers and judges. The aim of our research has been to explore the applicability of Answer Set Programming (ASP) to the automation of Evidence Analysis. This offers many advantages, among them making different possible investigative hypotheses explicit, whereas different human experts often devise and select different solutions in an implicit way. Moreover, ASP provides a potential for verifiability which is crucial in such an application field. Very complex investigations for which human experts can hardly find solutions turn out in fact to be reducible to optimization problems in classes P or NP, or not far beyond, which can thus be expressed in ASP. As a proof of concept, in this paper we present the formulation of some real investigative cases via simple ASP programs, and discuss how this leads to the formulation of concrete investigative hypotheses.
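The guess-and-check style of encoding the abstract alludes to can be illustrated with a minimal plain-Python sketch (hypothetical event names and constraints; the paper encodes real cases directly in ASP):

```python
from itertools import chain, combinations

# Candidate events an investigator might posit (hypothetical names).
EVENTS = ["usb_inserted", "remote_login", "file_copied", "log_wiped"]

# Each predicate rejects hypotheses, as an ASP constraint ":- ..." would.
CONSTRAINTS = [
    lambda h: "file_copied" in h,                             # evidence: the file was exfiltrated
    lambda h: "file_copied" not in h
              or "usb_inserted" in h or "remote_login" in h,  # copying needs a channel
    lambda h: "log_wiped" not in h or "remote_login" in h,    # wiping presupposes access
]

def hypotheses(events, constraints):
    """Enumerate all event subsets satisfying every constraint -- the analogue
    of enumerating the answer sets of a guess-and-check ASP encoding."""
    subsets = chain.from_iterable(
        combinations(events, r) for r in range(len(events) + 1))
    return [set(s) for s in subsets if all(c(set(s)) for c in constraints)]

for h in sorted(hypotheses(EVENTS, CONSTRAINTS), key=len):
    print(sorted(h))
```

The point of the explicit enumeration is the one the abstract makes: every hypothesis consistent with the evidence is produced, rather than one implicitly chosen by an expert.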

"Add Another Blue Stack of the Same Height!": Plan Failure Analysis and Interactive Planning Through Natural Language Communication
SPEAKER: Tran Cao Son

ABSTRACT. We discuss a challenge in developing intelligent agents (robots) that can collaborate with humans in problem solving. Specifically, we consider situations in which a robot must use natural language in communicating with a human and responding to the human's communication appropriately. In the process, we identify three main tasks. The first task requires the development of planners capable of dealing with descriptive goals. The second task, called plan failure analysis, demands the ability to analyze and determine the reason(s) why the planning system does not succeed. The third task focuses on the ability to understand communications via natural language. We show how the first two tasks can be accomplished in answer set programming, discuss how the third task could be solved using available tools and answer set programming, and identify the challenges that need to be addressed.

ASP, Amalgamation, and the Conceptual Blending Workflow
SPEAKER: Manfred Eppe

ABSTRACT. We present an amalgamation technique used for conceptual blending -- an inference pattern for concept invention that is advocated in cognitive science as a fundamental and uniquely human engine for creative thinking. Herein, we employ the search capabilities of ASP to find commonalities among input concepts as part of the blending process, and we show how our approach fits within a generalized conceptual blending workflow. Specifically, we use the integration of imperative programming languages such as Python in modern ASP solvers such as clingo v.4 to connect to external tools for theorem proving and colimit computation. We evaluate our system with examples from various domains where creativity is important, in particular mathematics and music.

An Implementation of Consistency-Based Multi-Agent Belief Change using ASP

ABSTRACT. This paper presents an implementation of a general framework for consistency-based belief change using Answer Set Programming (ASP). Belief change problem instances are graphs, which can be interpreted as networks of communicating agents. Each agent maintains a set of beliefs, expressed as formulas of a finite propositional language. Agents share information through a maximization procedure, incorporating as much information as consistently possible from other agents connected to them. Belief change operations such as revision, contraction, and merging are special cases of this approach, and can be encoded using special types of graphs (such as chains, star graphs, and complete graphs).

In this paper, we describe Equibel, a software system for working with belief change operations on arbitrary graph topologies. The system has an ASP component that performs the core maximization procedure, and a Python component that performs additional processing on the output of the ASP solver to produce a final result. The Python component also provides an interactive CLI that allows users to create a graph, set formulas at nodes, perform belief change operations, and query the resulting graph.

A Theory of Intentions for Intelligent Agents

ABSTRACT. This paper describes the AIA architecture for intelligent agents whose behavior is driven by their intentions and who reason about, and act in, changing environments. The description of the domain, written in action language AL, includes both a description of the agent's environment and the Theory of Intentions, which describes properties of intentions. The architecture is designed to make agents capable of explaining unexpected observations (by means of diagnostic reasoning) and of determining which of their actions are intended at the present moment. Reasoning tasks are reduced to computing answer sets of CR-Prolog programs constructed automatically from the agent's knowledge.

A Framework for Goal-Directed Query Evaluation with Negation
SPEAKER: Stefan Brass

ABSTRACT. In deductive databases, the magic set method is the standard way to make bottom-up evaluation goal-directed. Our SLDMagic method, introduced earlier by the author, solves several problems of the magic set method by simulating SLD-resolution on a bottom-up machine. In particular, it contains a tail-recursion optimization. SLDMagic was based on partial evaluation from the beginning, but in further work, we discussed ways to significantly strengthen the pre-computation at "compile time". However, these methods did not handle negation. It is the purpose of this paper to remedy this problem. We use conditional facts introduced in the literature, and further developed by Juergen Dix and the author.

14:00-15:40 Session 12: Action Languages
Location: Grand Kentucky Salon B
Online Action Language oBC+
SPEAKER: Joohyung Lee

ABSTRACT. We present an online action language called oBC+, which extends action language BC+ to handle external events arriving online. This is done by first extending the concept of online answer set solving to arbitrary propositional formulas, and then defining the semantics of oBC+ based on this extension, similar to the way the offline BC+ is defined. The design of oBC+ ensures that any action description in oBC+ satisfies the syntactic conditions required for the correct computation of online answer set solving, thereby alleviating the user's burden of checking these sophisticated conditions.

On the Relationship between Two Modular Action Languages: A Translation from MAD into ALM

ABSTRACT. Modular action languages MAD and ALM share the goal of providing means for the reuse of knowledge in order to facilitate the creation of libraries of knowledge. They differ substantially in their underlying assumptions (Causality Principle for MAD, Inertia Axiom for ALM) and in the constructs that enable the reuse of knowledge, especially the mechanisms used to declare actions in terms of already described actions. In this paper, we investigate the relationship between the two action languages by providing a translation from MAD into ALM. We specify a condition that ensures that, for a specific class of MAD action descriptions, our translation produces a transition diagram isomorphic to the original one, modulo the common vocabulary.

Knowledge Acquisition via Non-Monotonic Reasoning in Distributed Heterogeneous Environments

ABSTRACT. Distributed autonomous evolving applications are the realm of intelligent software agents and Multi-Agent Systems (MAS). Many approaches to MAS are based upon computational logic, where logical agents are able to reason, and to perform non-monotonic reasoning when needed. The role of data, and more generally of knowledge, is becoming central in such systems. Logic-based data management is therefore an important issue in logical agents. The DACMAS approach proposes a quite general modeling of multi-agent systems, including data representation in a MAS via DRL-Lite ontologies and commitment-based communication. Yet, it lacks a way of making a MAS able to acquire knowledge from contexts (heterogeneous sources) which are not agents and which are external to the MAS. Moreover, in a realistic setting different contexts can be based upon different logics, and modalities for information exchange should be defined in a non-monotonic fashion. In the Knowledge Representation and Reasoning field, this topic is addressed by mMCSs (Managed Multi-Context Systems). In this paper, we propose an integration of the two approaches into DACMACSs (Data-Aware Commitment-based managed Multi-Context MAS), so as to obtain an enhanced integrated flexible framework retaining the nice formal properties of the original proposals. In DACMACSs, non-monotonicity is present (as in mMCSs) in the modalities for defining knowledge acquisition. However, in DACMACSs the conditions for triggering the acquisition are also defined non-monotonically, and depend upon an agent's internal evolution and interaction with the environment.

Diagnostic Reasoning for Robotics using Action Languages
SPEAKER: Esra Erdem

ABSTRACT. We introduce a novel diagnostic reasoning method for robotic systems with multiple robots, to find the causes of observed discrepancies relevant for plan execution. Our method proposes (i) a systematic modification of the robotic action domain description by utilizing defaults, and (ii) algorithms to compute a smallest set of diagnoses (e.g., broken robots) by means of hypothetical reasoning over the modified formalism. The proposed method is applied over various robotic scenarios in cognitive factories.
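Finding a smallest set of broken robots is, at its simplest, a minimum-hitting-set problem. The following brute-force Python sketch (with a hypothetical factory scenario; the paper instead uses hypothetical reasoning over a default-augmented action domain) shows the shape of the task:

```python
from itertools import combinations

def explains(broken, discrepancies, causes):
    """A diagnosis explains the observations if every discrepancy can be
    attributed to at least one robot assumed broken."""
    return all(causes[d] & broken for d in discrepancies)

def min_diagnoses(robots, discrepancies, causes):
    """All minimum-cardinality sets of broken robots that explain the
    observed discrepancies (brute force, by increasing size)."""
    for r in range(len(robots) + 1):
        found = [set(c) for c in combinations(sorted(robots), r)
                 if explains(set(c), discrepancies, causes)]
        if found:
            return found
    return []

# Hypothetical scenario: which robot(s) could cause each discrepancy.
CAUSES = {"box_not_moved": {"r1", "r2"}, "arm_idle": {"r2", "r3"}}
print(min_diagnoses({"r1", "r2", "r3"}, CAUSES.keys(), CAUSES))  # [{'r2'}]
```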

16:10-17:10 Session 13: Invited Talk: Pedro Cabalar
Location: Grand Kentucky Salon B
Invited Talk: Stable Models for Temporal Theories
SPEAKER: Pedro Cabalar

ABSTRACT. This work gives an overview of a hybrid formalism that combines the syntax of Linear-time Temporal Logic (LTL) with a nonmonotonic selection of models based on Equilibrium Logic. The resulting approach, called Temporal Equilibrium Logic, extends the concept of a stable model to arbitrary modal temporal theories, constituting a suitable formal framework for the specification and verification of dynamic scenarios in Answer Set Programming (ASP). We will recall the basic definitions of this logic and explain their effects on some simple examples. After that, we will summarize the advances made so far, both in the fundamental realm and in the construction of reasoning tools. Finally, we will explain some open topics, many of them currently under study, and foresee potential challenges for future research.

17:10-18:00 Session 14: Uncertainty
Location: Grand Kentucky Salon B
Solving disjunctive fuzzy answer set programs

ABSTRACT. Fuzzy Answer Set Programming (FASP) is an extension of the popular Answer Set Programming (ASP) paradigm that is tailored for continuous domains. Despite the existence of several prototype implementations, none of the existing solvers can handle disjunctive rules in a sound and efficient manner. We first show that a large class of disjunctive FASP programs, called self-reinforcing cycle-free (SRCF) programs, can be polynomially reduced to normal FASP programs. We then introduce a general method for solving disjunctive FASP programs, which combines the proposed reduction with the use of mixed integer programming for minimality checking. We also report the results of an experimental benchmark of this method.
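For the easy base case the reduction targets, a positive fuzzy program has a unique minimal model computable by fixpoint iteration; the hard part the paper addresses, checking minimality for disjunctive programs, is what calls for mixed integer programming. A toy sketch of the base case, with hypothetical rules under the Gödel t-norm (min):

```python
# A rule (head, body, w) asserts: degree(head) >= min(w, degrees of body atoms).
RULES = [
    ("a", [], 0.8),        # fact with degree 0.8
    ("b", ["a"], 1.0),
    ("c", ["a", "b"], 0.5),
]

def least_model(rules, atoms):
    """Fixpoint of the fuzzy immediate-consequence operator; for a positive
    program this yields its unique minimal fuzzy model."""
    val = {x: 0.0 for x in atoms}
    changed = True
    while changed:
        changed = False
        for head, body, w in rules:
            d = min([w] + [val[b] for b in body])
            if d > val[head]:
                val[head] = d
                changed = True
    return val

print(least_model(RULES, {"a", "b", "c"}))
```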

Compacting Boolean Formulae for Inference in Probabilistic Logic Programming

ABSTRACT. Knowledge compilation converts Boolean formulae for which some inference tasks are computationally expensive into a representation where the same tasks are tractable. ProbLog is a state-of-the-art Probabilistic Logic Programming system that uses knowledge compilation to reduce the expensive probabilistic inference to an efficient weighted model counting. Motivated to solve larger problems with ProbLog we present an approach that optimizes Boolean formulae in order to speed-up knowledge compilation. We identify 7 Boolean subformulae patterns that can be used to re-write Boolean formulae. We implemented an algorithm with polynomial complexity which detects and compacts 6 of these patterns. We employ our method in the inference pipeline of ProbLog and conduct extensive experiments. We show that our compaction method improves knowledge compilation and consecutively the overall inference performance. Furthermore, using compaction reduces the number of time outs, allowing us to solve previously unsolvable problems.