09:30 | Abstraction for Non-Ground Answer Set Programs PRESENTER: Peter Schüller ABSTRACT. We address the issue of abstraction, a widely used notion to simplify problems, in the context of Answer Set Programming (ASP), which is a highly expressive formalism and a convenient tool for declarative problem solving. We introduce a method to automatically abstract non-ground ASP programs given an abstraction over the domain, which ensures that each original answer set is mapped to some abstract answer set. We discuss abstraction possibilities on several examples and show the use of abstraction to gain insight into problem instances, e.g., domain details irrelevant for problem solving; this makes abstraction attractive for getting to the essence of the problem. We also provide a tool implementing automatic abstraction from an input program. |
09:50 | Characterising Relativised Strong Equivalence with Projection for Non-Ground Answer-Set Programs PRESENTER: Tobias Geibinger ABSTRACT. Starting with the seminal work on strong equivalence by Lifschitz, Pearce, and Valverde, many different advanced notions of program equivalence have been studied in the area of answer-set programming (ASP). In particular, relativised strong equivalence with projection has been introduced as a generalisation of strong equivalence by, on the one hand, parametrising the alphabet of the context programs used for checking program equivalence and, on the other hand, allowing the filtering of auxiliary atoms. Like many other advanced equivalence notions, it was introduced originally for propositional programs, along with model-theoretic concepts providing characterisations of when equivalence between two programs holds. In this paper, we extend these concepts and characterisations to the general case of non-ground programs. We carry out our investigations within a general framework for specifying advanced notions of program equivalence introduced in previous work. |
10:10 | Epistemic Answer Set Programming ABSTRACT. This paper introduces an epistemic extension of answer set programming (ASP) called epistemic ASP (E-ASP). It then compares E-ASP with existing approaches, showing the advantages of the new semantics: compared to Gelfond’s epistemic specifications (ES), E-ASP defines a simpler but sufficiently strong language. Its epistemic view semantics is a natural generalisation of ASP’s original answer set semantics, so it accommodates ASP’s previous language extensions. Moreover, compared to all semantics for ES, epistemic view semantics facilitates understanding of the intuitive meaning of epistemic logic programs, and resolves unintended results discussed in the literature, especially for programs including constraints. Finally, we show that our approach is compatible with the epistemic splitting property, thus providing formal support for it. |
10:30 | Algorithm Selection for Paracoherent Answer Set Computation ABSTRACT. Answer Set Programming (ASP) is a well-established AI formalism rooted in nonmonotonic reasoning. Paracoherent semantics for ASP have been proposed to derive useful conclusions even in the absence of answer sets caused by cyclic default negation. Recently, several different algorithms have been proposed to implement them, but no single algorithm is preferable to the others on all instances. In this paper, we apply algorithm selection techniques to devise a more efficient paracoherent answer set solver combining existing algorithms. The effectiveness of the approach is demonstrated empirically by running our system on existing benchmarks. |
10:40 | The Weak Completion Semantics can Model Inferences of Individual Human Reasoners PRESENTER: Marco Ragni ABSTRACT. The weak completion semantics (WCS), based on three-valued Łukasiewicz logic, has been successfully demonstrated to adequately model human reasoning in different domains. Among the many experimental paradigms in cognitive psychology, the Wason Selection Task is a core problem, with over 200 publications demonstrating key features of humans' systematic deviation from classical logical reasoning. Previous attempts were able to model general response patterns, but not individual responses of participants. This paper provides a novel generalization of the weak completion semantics by means of two additional principles, abduction and contraposition: this extension can model the four canonical cases of the Wason Selection Task for the abstract, everyday, and deontic problem domains. Finally, a quantitative comparison between the predictions of the extended WCS model and the individual participants' responses in the three problem domains is performed, demonstrating the power of the weak completion semantics to adequately model human reasoning at the individual level. Implications of using logics to describe human reasoning are discussed. |
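The logic underlying the WCS can be made concrete with a small illustration, not taken from the paper: the connectives of three-valued Łukasiewicz logic, with truth values 0.0 (false), 0.5 (unknown), and 1.0 (true). All function names here are our own.

```python
# Three-valued Lukasiewicz connectives (illustrative sketch, names are ours).
# Truth values: 0.0 = false, 0.5 = unknown, 1.0 = true.

def neg(a):
    return 1.0 - a

def conj(a, b):
    return min(a, b)

def disj(a, b):
    return max(a, b)

def implies(a, b):
    # Lukasiewicz implication: true whenever the consequent is at least
    # as true as the antecedent.
    return min(1.0, 1.0 - a + b)

# "unknown -> unknown" evaluates to true in Lukasiewicz logic, a key
# difference from Kleene's three-valued logic.
print(implies(0.5, 0.5))  # 1.0
print(conj(0.5, 1.0))     # 0.5
```

The behaviour of implication on unknowns is one reason this logic, rather than Kleene's, is used for weak completion.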
11:10 | A Logic-Based Question Answering System for Cultural Heritage PRESENTER: Bernardo Cuteri ABSTRACT. Question Answering (QA) systems attempt to find direct answers to user questions posed in natural language. This work presents a QA system for the closed domain of Cultural Heritage. Our solution gradually transforms input questions into queries that are executed on a CIDOC-compliant ontological knowledge base. Questions are processed by means of a rule-based syntactic classification module running an Answer Set Programming system. The proposed solution is being integrated into a fully-fledged commercial system developed within the PIUCULTURA project, funded by the Italian Ministry for Economic Development. |
11:30 | Uhura: An Authoring Tool for Specifying Answer-Set Programs using Controlled Natural Language PRESENTER: Tobias Kain ABSTRACT. In this paper, we introduce a tool, called Uhura, for developing answer-set programs by specifying problem descriptions in a controlled natural language, which are then translated into answer-set rules. The tool aims to support users not familiar with answer-set programming (ASP), or logic-based approaches in general, in developing programs. Uhura is based on a new controlled natural language called PENG^U, which is in turn an adaptation of PENG^ASP, a controlled natural language employed in the PENG ASP system, developed by Guy and Schwitter, for solving computational problems by translating PENG^ASP statements into answer-set programs. In contrast to PENG^ASP, PENG^U allows for a more natural translation into ASP rules and also provides a broader set of pre-defined sentence patterns. Uhura is implemented in Java and employs DLV as the underlying ASP solver. |
11:50 | Chain Answer Sets for Logic Programs with Generalized Atoms PRESENTER: Wolfgang Faber ABSTRACT. Answer Set Programming (ASP) has seen several extensions by generalizing the notion of atom used in these programs, for example dl-atoms, aggregate atoms, HEX atoms, generalized quantifiers, and abstract constraints, referred to collectively as generalized atoms in this paper. The idea common to all of these constructs is that their satisfaction depends on the truth values of a set of (non-generalized) atoms, rather than the truth value of a single (non-generalized) atom. In previous work, it was argued that for some of the more intricate generalized atoms, the previously suggested semantics provide unintuitive results, and an alternative semantics called supportedly stable was suggested. Unfortunately, this semantics had a few issues of its own and also did not have a particularly natural definition. In this paper, we present a family of semantics called Chain Answer Sets, which has a simple, but somewhat unusual definition. We show several properties of the new semantics, including the computational complexity of the associated reasoning tasks. |
12:10 | The Hexlite Solver ABSTRACT. Hexlite is a lightweight solver for the HEX formalism, which integrates Answer Set Programming (ASP) with external computations. The main goal of Hexlite is efficiency and simplicity, both in the implementation and in the installation of the system. We define the Pragmatic HEX Fragment, which permits partitioning external computations into two kinds: those that can be evaluated during the program instantiation phase, and those that need to be evaluated during the answer set search phase. Hexlite is written in Python and suitable for evaluating this fragment with external computations that are realized in Python. Most performance-critical tasks are delegated to the Python module of clingo. We demonstrate that the Pragmatic HEX Fragment is sufficient for many use cases and that it permits Hexlite to have superior performance compared to the dlvhex system in relevant application scenarios. |
12:30 | Extending Bell Numbers for Parsimonious Chase Estimation PRESENTER: Cinzia Marte ABSTRACT. Ontology-Based Query Answering (OBQA) consists in querying databases while taking ontological knowledge into account. We focus on a logical framework based on existential rules or tuple-generating dependencies (TGDs), also known as Datalog±, which collects the basic decidable classes of TGDs and generalizes several ontology specification languages, such as Description Logics. A fundamental notion for finding certain answers to a query is the chase. This tool has been widely used to deal with different problems in databases, as it has the fundamental property of constructing a universal model. Recently, the so-called ''parsimonious'' chase procedure has been introduced. For some classes it is sound and complete, and its termination is always guaranteed. However, no precise bound has been provided so far. To fill this gap, we exploit the definition of the Bell numbers to count the exact maximal number of atoms generated by the parsimonious chase procedure. |
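The Bell numbers the abstract refers to count the partitions of an n-element set. As a minimal illustration (not the paper's extension), they can be computed with the classic Bell-triangle recurrence:

```python
# Bell numbers via the Bell triangle (illustrative sketch, stdlib only).
# B_n counts the partitions of an n-element set: 1, 1, 2, 5, 15, 52, ...

def bell(n):
    """Return the list of Bell numbers B_0 .. B_n."""
    row = [1]       # current row of the Bell triangle
    bells = [1]     # B_0 = 1
    for _ in range(n):
        # Next row starts with the last entry of the previous row; each
        # subsequent entry adds the neighbour above to the entry on its left.
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
        bells.append(row[0])
    return bells

print(bell(5))  # [1, 1, 2, 5, 15, 52]
```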
12:40 | Memory saving evaluation plans for Datalog ABSTRACT. Ontology-based query answering (OBQA), without a doubt, represents one of the fundamental reasoning services in Semantic Web applications. Specifically, OBQA is the task of evaluating a (conjunctive) query over a knowledge base (KB) consisting of an extensional dataset paired with an ontology. A number of effective practical approaches proposed in the literature rewrite the query and the ontology into an equivalent Datalog program. In the case of very large datasets, however, classical approaches for evaluating such Datalog programs tend to be extremely memory consuming, and may even slow down the computation. In this paper, we explain how to compute a memory-saving evaluation plan consisting of an optimal indexing schema for the dataset together with a profitable body-ordering for each Datalog rule. To evaluate the quality of our approach, we compared our plans with the classical approach used by DLV over the LUBM benchmark. The results confirm that memory usage can be significantly reduced without sacrificing efficiency. |
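Why body-ordering matters can be seen in a toy example, not from the paper: evaluating the hypothetical rule answer(X, Y) :- big(X), selective(X), link(X, Y) with nested-loop joins produces vastly fewer intermediate bindings when the selective atom is joined first.

```python
# Toy nested-loop evaluation of a Datalog rule body (illustrative sketch;
# relation names and the rule are made up for this example).

big = {(i,) for i in range(1000)}   # 1000 facts big(0) .. big(999)
selective = {(0,), (1,)}            # only 2 facts selective(0), selective(1)
link = {(0, 10), (1, 11)}           # link(0,10), link(1,11)

def evaluate(order):
    """Join the body atoms in the given order, counting intermediate bindings."""
    bindings = [dict()]   # partial substitutions, starting from the empty one
    produced = 0
    for _name, rel, vars_ in order:
        new = []
        for b in bindings:
            for tup in rel:
                b2 = dict(b)
                ok = True
                for v, val in zip(vars_, tup):
                    if v in b2 and b2[v] != val:
                        ok = False
                        break
                    b2[v] = val
                if ok:
                    new.append(b2)
                    produced += 1
        bindings = new
    return bindings, produced

body = [("big", big, ("X",)), ("selective", selective, ("X",)), ("link", link, ("X", "Y"))]
_, slow = evaluate(body)                 # join big/1 first: ~1000 intermediates
_, fast = evaluate(body[1:] + body[:1])  # join selective/1 first: a handful
print(slow > fast)  # True -- same answers, far fewer intermediate bindings
```

Real systems combine such orderings with indexes on the join variables, which is exactly the pairing the abstract's evaluation plans optimize.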
14:30 | Typed meta-interpretive learning of logic programs PRESENTER: Rolf Morel ABSTRACT. Meta-interpretive learning (MIL) is a form of inductive logic programming that learns logic programs from background knowledge and examples. We claim that adding types to MIL can improve learning performance. We show that type checking can reduce the MIL hypothesis space by a cubic factor. We introduce two typed MIL systems: the Prolog-based Metagol_T and the ASP-based HEXMIL_T. Both systems support polymorphic types and can infer the types of invented predicates. Our experimental results show that types can substantially reduce learning times. |
14:50 | Taking Defeasible Entailment beyond Rational Closure PRESENTER: Giovanni Casini ABSTRACT. In this paper we present an approach for extending the framework for defeasible entailment first presented by Kraus, Lehmann, and Magidor: the so-called KLM approach. Drawing on the properties for KLM, we first propose a class of basic defeasible entailment relations. We characterise this basic framework in three ways: (i) semantically, (ii) in terms of a class of properties, and (iii) in terms of ranks on statements in a knowledge base. We also provide an algorithm for computing the basic framework. These results are proved through various representation results. We then refine this framework by defining the class of rational defeasible entailment relations. This refined framework is also characterised in three ways: semantically, in terms of a class of properties, and in terms of ranks on statements. We also provide an algorithm for computing the refined framework. Again, these results are proved through various representation results. We argue that the class of rational defeasible entailment relations - a strengthening of basic defeasible entailment which is itself a strengthening of the original KLM proposal - is worthy of the term rational in the sense that all of them can be viewed as appropriate forms of defeasible entailment. We show that the two well-known forms of defeasible entailment, rational closure and lexicographic closure, fall within our rational defeasible framework. We show that rational closure is the most conservative of the defeasible entailment relations within the framework (with respect to subset inclusion), but that there are forms of defeasible entailment within our framework that are more "adventurous" than lexicographic closure. |
15:10 | Explaining Actual Causation via Reasoning about Actions and Change PRESENTER: Marcello Balduccini ABSTRACT. An actual cause is an event responsible for bringing about a given outcome in a scenario. In practice, however, we often must reason back over the scenario's events to identify the most fitting cause. In this paper, we motivate this claim using well-known examples and present a novel framework for reasoning more deeply about actual causation. By leveraging techniques from Reasoning about Actions and Change, the approach supports reasoning over domains in which the evolution of the state of the world over time plays a critical role and enables one to identify and explain the circumstances that led to an outcome of interest. The framework leverages representations of scenarios and domain knowledge to identify detailed causal explanations that identify causing events and can further identify events that helped to ``set the stage'' for causes to occur. We utilize action language AL for defining the constructs of the framework. |
15:30 | Explaining actual causation in terms of possible causal processes PRESENTER: Bart Bogaerts ABSTRACT. Suppose we knew the causal process that created the actual world: would this help us determine the causes of some observed fact? Of course! Formal causal models are frequently ambiguous, even when their solutions correctly reflect the possible causal worlds of the causal domain. Hence, essential information for explaining actual causation must be missing. What information could that be? We argue that what is often missing is information about causal mechanisms, their effects, what triggers and preempts them, and how they combine into the causal processes that create the world. In line with the method of formal empirical science, we make these concepts explicit in a formal language and framework. This results in a rich framework in which informal notions of actual causation can be unraveled and formalized, and in which some difficult causal-reasoning problems have elegant solutions. |
15:50 | SLD-Resolution Reduction of Second-Order Horn Fragments PRESENTER: Sophie Tourret ABSTRACT. We present the derivation reduction problem for SLD-resolution, the undecidable problem of finding a finite subset of a set of clauses from which the whole set can be derived using SLD-resolution. We study the reducibility of various fragments of second-order Horn logic with particular applications in Inductive Logic Programming. We also discuss how these results extend to standard resolution. |
16:10 | ACUOS^2: A High-performance System for Modular ACU Generalization with Subtyping and Inheritance PRESENTER: Santiago Escobar ABSTRACT. Generalization in order-sorted theories with any combination of associativity (A), commutativity (C), and unity (U) algebraic axioms is finitary. However, existing tools for computing generalizers (also called “anti-unifiers”) of two typed structures in such theories do not currently scale to real-life size problems. This paper describes the ACUOS^2 system that achieves high performance when computing a complete and minimal set of least general generalizations in these theories. We discuss how it can be used to address artificial intelligence (AI) problems that are representable as order-sorted ACU generalization, e.g., generalization in lists, trees, (multi-)sets, and typical hierarchical/structural relations. Experimental results are also given to demonstrate that ACUOS^2 greatly outperforms the predecessor tool ACUOS. |
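For readers unfamiliar with generalization, here is a sketch of its much simpler syntactic core, Plotkin's least general generalization (lgg), without the order-sorted and ACU machinery that ACUOS^2 actually handles. Terms are encoded as nested tuples, an encoding of our own choosing.

```python
# Plotkin's syntactic least general generalization (illustrative sketch).
# A term f(a, b) is encoded as ("f", ("a",), ("b",)); variables as ("X0",).

def lgg(s, t, table=None):
    """Least general generalization of terms s and t."""
    if table is None:
        table = {}
    # Same function symbol and arity: keep the symbol, recurse on arguments.
    if s[0] == t[0] and len(s) == len(t):
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    # Disagreement: replace the pair by a variable, reusing the same variable
    # for repeated occurrences of the same disagreement pair.
    if (s, t) not in table:
        table[(s, t)] = ("X%d" % len(table),)
    return table[(s, t)]

print(lgg(("f", ("a",), ("b",)), ("f", ("c",), ("b",))))
# ('f', ('X0',), ('b',))  -- i.e., f(X0, b)
```

Generalizing modulo A, C, and U axioms, as in the paper, additionally requires matching argument lists up to reordering and unit elements, which is what makes the problem finitary but hard to scale.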
16:20 | Advancements in Resource-driven Substructural Defeasible Logic PRESENTER: Francesco Olivieri ABSTRACT. Linear Logic and Defeasible Logic have been adopted to formalise different features relevant to agents: consumption of resources, and reasoning with exceptions. We propose a framework to combine sub-structural features, corresponding to the consumption of resources, with defeasibility aspects, and we discuss the design choices for the framework. |
16:50 | Systematic Generation of Conditional Knowledge Bases up to Renaming and Equivalence PRESENTER: Steven Kutsch ABSTRACT. A conditional of the form "If A then usually B" establishes a plausible connection between A and B, while still allowing for exceptions. A conditional knowledge base consists of a finite set of conditionals, inducing various nonmonotonic inference relations. Sets of knowledge bases are of interest for, e.g., experimenting with systems implementing conditional reasoning and for empirically evaluating them. In this paper, we present an approach for systematically generating knowledge bases over a given signature. The approach is minimal in the sense that no two knowledge bases are generated that can be transformed into each other by a syntactic renaming or that are elementwise equivalent. Furthermore, the approach is complete in the sense that, taking renamings and equivalences into account, every consistent knowledge base is generated. |
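The "up to renaming" idea can be illustrated with a deliberately tiny sketch of our own (handling only syntactic renaming over a two-atom signature, not the paper's elementwise equivalence): enumerate all sets of conditionals and keep one canonical representative per renaming class.

```python
from itertools import chain, combinations, permutations

# Toy generation of conditional knowledge bases up to renaming (illustrative
# sketch; the signature, the conditional encoding (a, b) for "if a then
# usually b", and the canonicalization are our own simplifications).

atoms = ("p", "q")
conditionals = [(a, b) for a in atoms for b in atoms if a != b]

def canonical(kb):
    """Smallest renamed variant of kb over all permutations of the signature."""
    variants = []
    for perm in permutations(atoms):
        m = dict(zip(atoms, perm))
        variants.append(tuple(sorted((m[a], m[b]) for a, b in kb)))
    return min(variants)

all_kbs = chain.from_iterable(
    combinations(conditionals, k) for k in range(len(conditionals) + 1)
)
representatives = sorted({canonical(kb) for kb in all_kbs})
print(representatives)
# Four syntactic knowledge bases collapse into three renaming classes:
# [(), (('p', 'q'),), (('p', 'q'), ('q', 'p'))]
```

Canonicalizing via a minimum over all signature permutations is exponential in the signature size; the paper's contribution is generating only the representatives systematically, and additionally factoring out elementwise equivalence and inconsistency.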
17:00 | Unifying Reasoning and Core-Guided Search for Maximum Satisfiability PRESENTER: Jeremias Berg ABSTRACT. Generalizing propositional satisfiability (SAT) to optimization, maximum satisfiability (MaxSAT) is gaining ground as an effective approach to Boolean optimization. A central algorithmic approach to maximum satisfiability solving is the so-called core-guided approach, providing a competitive approach to various real-world optimization problems. Furthermore, recent progress on preprocessing techniques is bringing in additional reasoning to MaxSAT solving. Towards realizing their combined potential, understanding the formal underpinnings of interleavings of preprocessing-style reasoning and core-guided algorithms is important. It turns out that earlier proposed notions for establishing correctness of core-guided algorithms and preprocessing, respectively, are not enough for capturing correctness of interleavings of the techniques. We provide an in-depth analysis of these and related MaxSAT instance transformations, and propose correction set reducibility as a notion that captures inprocessing MaxSAT solving within a state-transition style abstract MaxSAT solving framework. Furthermore, by adapting the earlier proposed notion of RAT-based proofs capturing inprocessing SAT solving, we establish a general theorem of correctness for applications of SAT-based preprocessing techniques in MaxSAT. |
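For readers new to the problem itself, weighted MaxSAT can be stated in a few lines (a brute-force toy of our own, nothing like the core-guided algorithms the paper analyzes): satisfy all hard clauses while minimizing the total weight of falsified soft clauses.

```python
from itertools import product

# Brute-force weighted MaxSAT (illustrative sketch; the instance below is
# made up). Literals are signed integers as in DIMACS: 2 is x2, -2 is not x2.

def satisfied(clause, assignment):
    return any(assignment[abs(l)] == (l > 0) for l in clause)

def maxsat(n_vars, hard, soft):
    """Return (min cost, assignment) over all 2^n_vars assignments."""
    best = None
    for bits in product([False, True], repeat=n_vars):
        a = {i + 1: bits[i] for i in range(n_vars)}
        if not all(satisfied(c, a) for c in hard):
            continue  # hard clauses are mandatory
        cost = sum(w for w, c in soft if not satisfied(c, a))
        if best is None or cost < best[0]:
            best = (cost, a)
    return best

hard = [[1, 2]]                    # x1 or x2 must hold
soft = [(3, [-1]), (2, [-2])]      # prefer x1 false (weight 3) and x2 false (weight 2)
cost, model = maxsat(2, hard, soft)
print(cost)  # 2: the cheapest repair falsifies only the weight-2 soft clause
```

Core-guided solvers reach the same optimum without enumeration, by repeatedly extracting unsatisfiable cores with a SAT solver and relaxing them; the paper studies when such transformations may be safely interleaved with preprocessing.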
17:20 | Facets of Distribution Identities in Probabilistic Team Semantics ABSTRACT. We study probabilistic team semantics which is a semantical framework allowing the study of logical and probabilistic dependencies simultaneously. We examine and classify the expressive power of logical formalisms arising by different probabilistic atoms such as conditional independence and different variants of marginal distribution equivalences. We also relate the framework to the first-order theory of the reals and apply our methods to the open question on the complexity of the implication problem of conditional independence. |