
09:00 | On Syntactic Forgetting under Uniform Equivalence PRESENTER: Matthias Knorr ABSTRACT. Forgetting in Answer Set Programming (ASP) aims at reducing the language of a logic program without affecting the consequences over the remaining language. It has recently gained interest in the context of modular ASP where it allows simplifying a program of a module, making it more declarative, by omitting auxiliary atoms or hiding certain atoms/parts of the program not to be disclosed. Unlike for arbitrary programs, it has been shown that forgetting for modular ASP can always be applied, for input, output and hidden atoms, and preserve all dependencies over the remaining language (in line with uniform equivalence). However, the definition of the result is based solely on a semantic characterization in terms of HT-models. Thus, computing an actual result is a complicated process and the result commonly bears no resemblance to the original program, i.e., we are lacking a corresponding syntactic operator. In this paper, we show that there is no forgetting operator that preserves uniform equivalence (modulo the forgotten atoms) between the given program and its forgetting result by only manipulating the rules of the original program that contain the atoms to be forgotten. We then present a forgetting operator that preserves uniform equivalence and is syntactic whenever this is suitable. We also introduce a special class of programs, where syntactic forgetting is always possible, and as a complementary result, establish it as the largest known class where forgetting while preserving all dependencies is always possible. |

09:15 | PRESENTER: Fabrizio Riguzzi ABSTRACT. In Probabilistic Abductive Logic Programming we are given a probabilistic logic program, a set of abducible facts, and a set of constraints. Inference in probabilistic abductive logic programs aims to find a subset of the abducible facts that is compatible with the constraints and that maximizes the joint probability of the query and the constraints. In this paper, we extend the PITA reasoner with an algorithm to perform abduction on probabilistic abductive logic programs exploiting Binary Decision Diagrams. Tests on several synthetic datasets show the effectiveness of our approach. |
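As a rough illustration of why the abstract above leans on Binary Decision Diagrams: the probability of a query explained by several overlapping sets of independent probabilistic facts cannot simply sum the products, since possible worlds overlap. A minimal sketch (ours, not the PITA implementation) of the underlying principle, Shannon expansion on one variable at a time, which is exactly what traversing a BDD performs:

```python
# Sketch only: probability of a propositional DNF over independent
# probabilistic facts, via Shannon expansion -- the same principle
# BDD-based inference uses to avoid double-counting possible worlds.

def dnf_prob(dnf, probs):
    """dnf: list of terms; each term is a dict {atom: True/False}.
    probs: {atom: P(atom)}. Returns P(dnf), assuming independent atoms."""
    if not dnf:
        return 0.0
    if any(len(t) == 0 for t in dnf):      # an empty term is always true
        return 1.0
    x = next(iter(dnf[0]))                 # branch variable (BDD root)
    def cofactor(value):
        out = []
        for t in dnf:
            if x in t:
                if t[x] == value:          # keep term with x resolved
                    out.append({a: v for a, v in t.items() if a != x})
            else:
                out.append(t)              # term independent of x
        return out
    p = probs[x]
    return p * dnf_prob(cofactor(True), probs) + \
           (1 - p) * dnf_prob(cofactor(False), probs)

# Two overlapping explanations of a query: {a} or {not a, b}
probs = {"a": 0.4, "b": 0.5}
print(round(dnf_prob([{"a": True}, {"a": False, "b": True}], probs), 3))  # 0.7
```

The recursion conditions on one atom per level, so each possible world is counted exactly once; a BDD additionally shares isomorphic sub-cases.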

09:30 | DeepStochLog: Neural Stochastic Logic Programming PRESENTER: Giuseppe Marra ABSTRACT. Recent advances in neural-symbolic learning, such as DeepProbLog, extend probabilistic logic programs with neural predicates. Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard. We propose DeepStochLog, an alternative neural-symbolic framework based on stochastic definite clause grammars, a kind of stochastic logic program. More specifically, we introduce neural grammar rules into stochastic definite clause grammars to create a framework that can be trained end-to-end. We show that inference and learning in neural stochastic logic programming scale much better than for neural probabilistic logic programs. Furthermore, the experimental evaluation shows that DeepStochLog achieves state-of-the-art results on challenging neural-symbolic learning tasks. |

09:45 | Transforming Gringo Rules into Formulas in a Natural Way ABSTRACT. Research on the input language of the ASP grounder gringo uses a translation that converts rules in that language into first-order formulas. That translation often transforms short rules into formulas that are syntactically complex. In this note we identify a class of rules that can be transformed into formulas in a simpler, more natural way. The new translation contributes to our understanding of the relationship between the language of gringo and first-order languages. |
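For orientation, a minimal sketch (our own simplification, not the paper's translation) of the familiar base case: a propositional rule `h :- b1, ..., bm, not c1, ..., not cn` corresponds to the formula `b1 & ... & bm & -c1 & ... & -cn -> h`. The paper's concern is that for gringo's full language (variables, intervals, pooling, etc.) the general translation produces formulas far more complex than this naive pattern suggests, and it identifies rules where a similarly natural form is possible.

```python
# Naive sketch: translate a propositional rule string into an implication.
# Handles only "h :- l1, ..., lk" with literals "a" or "not a".

def rule_to_formula(rule):
    head, _, body = rule.partition(":-")
    head = head.strip()
    lits = [l.strip() for l in body.split(",")] if body.strip() else []
    conj = " & ".join("-" + l[len("not "):] if l.startswith("not ") else l
                      for l in lits)
    return f"{conj} -> {head}" if conj else head  # a fact is just its head

print(rule_to_formula("p :- q, not r"))   # q & -r -> p
print(rule_to_formula("fact"))            # fact
```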

10:00 | PRESENTER: Nicolas Schwind ABSTRACT. Belief revision aims at incorporating a new piece of information into the beliefs of an agent. This process can be modelled by the standard AGM/KM rationality postulates. However, these postulates suppose a classical setting and do not govern the case where (some parts of) the beliefs of the agent are not consistent. In this work we discuss how to adapt these postulates when the underlying logic is Priest’s Logic of Paradox, in order to model a rational change, while being a conservative extension of AGM/KM belief revision. This paper is a summary of [5]. |

10:15 | Model Reconciliation in Logic Programs PRESENTER: Tran Cao Son ABSTRACT. Inspired by recent research in explainable planning, we investigate the model reconciliation problem between two logic programs π_a and π_h, which represent the knowledge bases of an agent and a human, respectively. Given π_a, π_h, and a query q such that π_a entails q and π_h does not entail q (or π_a does not entail q and π_h entails q), the model reconciliation problem focuses on the question of how to modify π_h, by adding ε⁺ ⊆ π_a to π_h and removing ε⁻ ⊆ π_h from π_h, such that the resulting program π̂_h = (π_h ∖ ε⁻) ∪ ε⁺ has an answer set containing q (or has no answer set containing q). The pair (ε⁺, ε⁻) is referred to as a solution for the model reconciliation problem (π_a, π_h, q) (or (π_a, π_h, ¬q)). We prove that, for a reasonably selected set of rules ε⁺ ⊆ π_a, there exists a way to modify π_h such that π̂_h is guaranteed to credulously entail q (or skeptically entail ¬q). Given that there are potentially several solutions, we discuss different characterizations of solutions and algorithms for computing solutions for model reconciliation problems. |

11:00 | Strong Equivalence of Logic Programs with Ordered Disjunction: a Logical Perspective PRESENTER: Panos Rondogiannis ABSTRACT. Logic Programs with Ordered Disjunction (LPODs) extend classical logic programs with the capability of expressing preferential disjunctions in the heads of program rules. The initial semantics of LPODs (Brewka 2002; Brewka et al. 2004), although simple and quite intuitive, is not purely model-theoretic. A consequence of this is that certain properties of programs appear non-trivial to formalize in purely logical terms. An example of this state of affairs is the characterization of the notion of strong equivalence for LPODs (Faber et al. 2008). Although the results of (Faber et al. 2008) are accurately developed, they fall short of characterizing strong equivalence of LPODs as logical equivalence in some specific logic. This comes in sharp contrast with the well-known characterization of strong equivalence for classical logic programs, which, as proved in (Lifschitz et al. 2001), coincides with logical equivalence in the logic of here-and-there. In this paper we obtain a purely logical characterization of strong equivalence of LPODs as logical equivalence in a four-valued logic. Moreover, we provide a new proof of the coNP-completeness of strong equivalence for LPODs, which has an interest in its own right since it relies on the special structure of such programs. Our results are based on the recent logical semantics of LPODs introduced in (Charalambidis et al. 2021), a fact which we believe indicates that this new semantics may prove to be a useful tool in the further study of LPODs. |
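To make the "preferential disjunction" idea concrete, a toy sketch of Brewka-style satisfaction degrees for LPOD rules (our own simplified rendering of the 2002/2004 semantics; the paper's four-valued logic is a different, purely model-theoretic account):

```python
# An LPOD rule (options, pos_body, neg_body) reads:
#   o1 x o2 x ... x on  :-  pos_body, not neg_body
# If the body holds, the degree is the index of the first option that
# holds (smaller is better); if the body does not hold, the rule is
# conventionally satisfied to degree 1.

def degree(rule, interp):
    options, pos, neg = rule
    body_holds = (all(a in interp for a in pos)
                  and all(a not in interp for a in neg))
    if not body_holds:
        return 1
    for i, o in enumerate(options, start=1):
        if o in interp:
            return i
    return None   # rule not satisfied at any degree

# "prefer coffee over tea when thirsty"
rule = (["coffee", "tea"], ["thirsty"], [])
print(degree(rule, {"thirsty", "coffee"}))  # 1  (best option)
print(degree(rule, {"thirsty", "tea"}))     # 2  (second-best)
print(degree(rule, {"relaxed"}))            # 1  (body not applicable)
```

It is precisely because these degrees live outside the models themselves that notions such as strong equivalence resisted a purely logical characterization, which the paper supplies.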

11:22 | Strong Equivalence of Logic Programs with Counting ABSTRACT. Under some conditions, a rule in the input language of the ASP grounder gringo can be transformed into a first-order sentence that has the same meaning under the stable model semantics. Such translations can be used for proving strong equivalence of logic programs and for verifying their correctness with respect to specifications expressed by first-order formulas. This paper defines a translation of this kind that is applicable to some rules containing arithmetic operations and aggregate expressions. |

11:44 | Analyzing Semantics of Aggregate Answer Set Programming Using Approximation Fixpoint Theory PRESENTER: Linde Vanbesien ABSTRACT. Aggregates provide a concise way to express complex knowledge. The problem of selecting an appropriate formalisation of aggregates for answer set programming (ASP) remains unsettled. This paper revisits the problem from the viewpoint of Approximation Fixpoint Theory (AFT). We introduce an AFT formalisation equivalent to the Gelfond-Lifschitz reduct for basic ASP programs and we extend it to handle aggregates. We analyse how existing approaches relate to our framework. We hope this work sheds some new light on the issue of a proper formalisation of aggregates. |
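For readers less familiar with the construction the abstract recasts in AFT terms, a minimal sketch (ours; aggregates, the paper's actual subject, are beyond this toy) of the Gelfond-Lifschitz reduct for ground normal programs:

```python
from itertools import chain, combinations

# A rule is (head, pos_body, neg_body), e.g. ("p", [], ["q"]) for  p :- not q.

def reduct(program, interp):
    """GL reduct: delete rules whose negative body clashes with interp,
    then drop the remaining 'not' literals, leaving a definite program."""
    return [(h, pos) for (h, pos, neg) in program
            if not any(a in interp for a in neg)]

def least_model(definite):
    """Least fixpoint of the immediate-consequence operator T_P."""
    model = set()
    changed = True
    while changed:
        changed = False
        for h, pos in definite:
            if h not in model and all(a in model for a in pos):
                model.add(h)
                changed = True
    return model

def stable_models(program, atoms):
    """I is stable iff it is the least model of its own reduct."""
    candidates = chain.from_iterable(combinations(atoms, r)
                                     for r in range(len(atoms) + 1))
    return [s for s in (set(c) for c in candidates)
            if least_model(reduct(program, s)) == s]

# p :- not q.   q :- not p.
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(stable_models(prog, ["p", "q"]))   # [{'p'}, {'q'}]
```

AFT abstracts this "guess an interpretation, then take a least fixpoint" pattern into operators on pairs of interpretations, which is what lets the paper extend it uniformly to aggregates.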

12:06 | PRESENTER: Francesco Ricca ABSTRACT. Answer Set Programming with Quantifiers (ASP(Q)) is a recent extension of Answer Set Programming (ASP) that allows one to model problems from the entire polynomial hierarchy. Earlier work focused on demonstrating the modeling capabilities of ASP(Q). However, further evidence of the modeling efficacy of ASP(Q) is needed and, even more importantly, effective solvers for ASP(Q) must be developed. In this paper, we address both these areas. First, we provide elegant quantified ASP encodings for additional relevant AI problems, such as Argumentation Coherence (Π₂^P-complete) and Semi-stable Semantics for ASP (Σ₂^P-complete). Second, we provide a modular ASP(Q) solver that translates a quantified ASP program together with a given data instance into a QBF to be solved by any QBF solver. We evaluate the performance of our solver on several instances and with different back-end QBF solvers, demonstrating its efficacy as a tool for rapid modeling and solving of complex combinatorial problems. |

Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).

14:00 | PRESENTER: Bart Bogaerts ABSTRACT. Justification theory is an abstract unifying formalism that captures the semantics of various non-monotonic logics. One intriguing problem that has received significant attention is the consistency problem: under which conditions are justifications for a fact and justifications for its negation suitably related? Two variants of justification theory exist: one in which justifications are trees and one in which they are graphs. In this work, we resolve the consistency problem once and for all for the tree-like setting by showing that all reasonable tree-like justification systems are consistent. |

14:22 | PRESENTER: Bart Bogaerts ABSTRACT. Justification theory is a general framework for the definition of semantics of rule-based languages that has a high explanatory potential. Nested justification systems, first introduced by Denecker et al. (2015), allow for the composition of justification systems. This notion of nesting thus enables the modular definition of semantics of rule-based languages, and increases the representational capacities of justification theory. As we show in this paper, the original semantics for nested justification systems lead to the loss of information relevant for explanations. In view of this problem, we provide an alternative characterization of semantics of nested justification systems and show that this characterization is equivalent to the original semantics. Furthermore, we show how nested justification systems allow representing fixpoint definitions (Hou and Denecker 2009). |

14:44 | PRESENTER: Fabrizio Riguzzi ABSTRACT. Hybrid Knowledge Bases (HKBs) based on Lifschitz's logic of Minimal Knowledge with Negation as Failure (MKNF) are a successful approach to combining the expressivity of Description Logics and Logic Programming in a single language. Their syntax, defined by Motik and Rosati, disallows function symbols. In order to define a well-founded semantics for MKNF HKBs, Knorr et al. define a partition of the modal atoms occurring in a knowledge base, called the alternating fixpoint partition. In this paper, we propose an iterated fixpoint semantics for HKBs with function symbols. We prove that our semantics extends Knorr et al.'s, in that, for function-free HKBs, it coincides with the alternating fixpoint partition. The proposed semantics lends itself well to a probabilistic extension with the distribution semantics approach, which is the subject of future work. |

15:06 | A Fixpoint Characterization of Three-Valued Disjunctive Hybrid MKNF Knowledge Bases PRESENTER: Spencer Killen ABSTRACT. The logic of hybrid MKNF (minimal knowledge and negation as failure) is a powerful knowledge representation language that elegantly pairs ASP (answer set programming) with ontologies. One may handle a disjunctive rule by inducing a collection of normal rules; each with the same body and a single atom in its head. In this work, we refer to a set of such normal rules as a head-cut. The question arises as to whether the semantics of disjunctive hybrid MKNF knowledge bases can be characterized using fixpoint constructions with head-cuts. Earlier, we have shown that head-cuts can be paired with fixpoint operators to capture the two-valued MKNF models of disjunctive hybrid MKNF knowledge bases. Three-valued semantics extend two-valued semantics with the ability to express partial information. In this work, we present a fixpoint construction that leverages head-cuts using an operator that iteratively captures three-valued models of hybrid MKNF knowledge bases with disjunctive rules. This characterization also captures partial models of disjunctive logic programs since a program can be expressed as a disjunctive hybrid MKNF knowledge base with an empty ontology. We elaborate on a relationship between this characterization and approximators in AFT (approximation fixpoint theory) for normal hybrid MKNF knowledge bases. |

16:00 | A Neuro-Symbolic ASP Pipeline for Visual Question Answering PRESENTER: Nelson Higuera ABSTRACT. We present a neuro-symbolic visual question answering (VQA) pipeline for CLEVR, which is a well-known dataset that consists of pictures showing scenes with objects and questions related to them. Our pipeline covers (i) training neural networks for object classification and bounding-box prediction of the CLEVR scenes, (ii) statistical analysis on the distribution of prediction values of the neural networks to determine a threshold for high-confidence predictions, and (iii) a translation of CLEVR questions and network predictions that pass confidence thresholds into logic programs so that we can compute the answers using an ASP solver. By exploiting choice rules, we consider deterministic and non-deterministic scene encodings. Our experiments show that the non-deterministic scene encoding achieves good results even if the neural networks are trained rather poorly in comparison with the deterministic approach. This is important for building robust VQA systems if network predictions are less than perfect. Furthermore, we show that restricting non-determinism to reasonable choices allows for more efficient implementations in comparison with related neuro-symbolic approaches without losing much accuracy. |
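A hypothetical sketch of step (ii) above: picking a confidence threshold from the distribution of a network's prediction values on held-out data, so that only high-confidence predictions reach the ASP encoding. The concrete statistic below (midpoint of medians) is our assumption for illustration, not the paper's method.

```python
import statistics

def confidence_threshold(correct_scores, wrong_scores):
    """Illustrative choice: midpoint between the typical (median)
    prediction value of correct and of wrong predictions."""
    return (statistics.median(correct_scores)
            + statistics.median(wrong_scores)) / 2

# Hypothetical held-out prediction values
correct = [0.97, 0.97, 0.95, 0.99, 0.92]
wrong = [0.55, 0.61, 0.48, 0.70]
print(round(confidence_threshold(correct, wrong), 3))  # 0.775
```

Predictions above the threshold would be encoded as facts; the rest would be left to the non-deterministic choice rules described in the abstract.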

16:22 | Knowledge Authoring with Factual English PRESENTER: Paul Fodor ABSTRACT. Knowledge representation and reasoning (KRR) systems represent knowledge as collections of facts and rules. Like databases, KRR systems contain information about domains of human activity such as enterprises, science, and business. Beyond the capabilities of databases, KRR systems can represent complex concepts and relations, and they can query and manipulate information in much more sophisticated ways. Unfortunately, KRR technology has been hindered by the fact that specifying the requisite knowledge requires skills that most domain experts do not have, while knowledge engineering is a rare skill. One solution could be to extract knowledge from English text, and a number of works have attempted to do so (OpenSesame, etc.). Unfortunately, at present, extraction of logical facts from unrestricted natural language is still too inaccurate to be used for reasoning, while restricting the grammar of the language (so-called controlled natural language, or CNL) is hard for users to learn and use. Nevertheless, some recent CNL-based approaches, such as the Knowledge Authoring Logic Machine (KALM), have been shown to achieve very high accuracy compared to others, and a natural question is to what extent the CNL restrictions can be lifted. In this paper, we address this issue by transplanting the KALM framework to a neural natural language parser, Multi-Stanza. Here we limit our attention to authoring facts and queries, and therefore our focus is what we call factual English statements. Authoring other types of knowledge, such as rules, will be considered in our follow-up work. As it turns out, neural-network-based parsers have problems of their own, and the mistakes they make range from part-of-speech tagging to lemmatization to dependency errors. We present a number of techniques for combating these problems and test the new system, KALM^FL (i.e., KALM for factual language), on a number of benchmarks, which show that KALM^FL achieves correctness in excess of 95%. |

16:44 | PRESENTER: Kristian Reale ABSTRACT. In the last few years, we have been witnessing the spread of computing devices getting smaller and smaller (e.g., smartphones, smart devices, Raspberry Pi boards), and the production and availability of data getting bigger and bigger. In this work we introduce DLV Large Scale (DLV-LS), a framework based on Answer Set Programming (ASP) for performing declarative reasoning tasks over data-intensive applications, possibly on smart devices. The framework encompasses DLV Mobile Edition (DLV-ME), an ASP-based solver for Android systems and Raspberry devices, and DLV Enterprise Edition (DLV-EE), an ASP-based platform, accessible via REST interfaces, for large-scale reasoning over Big Data, classical relational database systems, and NoSQL databases. DLV-LS enables smart devices to both locally perform reasoning over data generated by their own sensors and properly interact with DLV-EE when more computational power is needed for harder tasks, possibly over bigger centralized data. We also present a real-world application of DLV-LS; the use case is a tourist navigator that calculates the best routes and optimizes a tourist's tour under custom-defined time constraints. |

16:59 | Modeling and Reasoning in Event Calculus using Goal-Directed Constraint Answer Set Programming (Extended Abstract) PRESENTER: Joaquin Arias ABSTRACT. Automated commonsense reasoning is essential for building human-like AI systems featuring, for example, explainable AI. Event Calculus (EC) is a family of formalisms that model common-sense reasoning with a sound, logical basis. Previous attempts to mechanize reasoning using EC faced difficulties in the treatment of the continuous change in dense domains (e.g., time and other physical quantities), constraints among variables, default negation, and the uniform application of different inference methods, among others. We propose the use of s(CASP), a query-driven, top-down execution model for Predicate Answer Set Programming with Constraints, to model and reason using EC. We show how EC scenarios can be elegantly modeled in s(CASP) and how its expressiveness makes it possible to perform deductive and abductive reasoning tasks in domains featuring, for example, constraints involving dense time and fluents. |
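To illustrate the Event Calculus primitives the abstract refers to, here is a toy discrete-time EC interpreter (a hedged sketch of ours; s(CASP) additionally handles dense time, constraints over reals, and abductive reasoning, which a forward simulation like this cannot):

```python
# Minimal discrete Event Calculus: a fluent holds at time t if it held
# initially or was initiated by an earlier event, and was not terminated
# by a later (but still earlier than t) event.

def holds_at(fluent, t, narrative, initiates, terminates, initially):
    """narrative: [(time, event)]; initiates/terminates: event -> fluents."""
    state = set(initially)
    for time, event in sorted(narrative):
        if time >= t:
            break
        state -= set(terminates.get(event, []))
        state |= set(initiates.get(event, []))
    return fluent in state

narrative = [(2, "switch_on"), (5, "switch_off")]
initiates = {"switch_on": ["light"]}
terminates = {"switch_off": ["light"]}

print(holds_at("light", 3, narrative, initiates, terminates, []))  # True
print(holds_at("light", 6, narrative, initiates, terminates, []))  # False
```

In s(CASP), the same scenario is written as logical rules over constrained time variables, and queries can run both deductively ("does the light hold at 3?") and abductively ("which events explain that it holds?").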

17:14 | A Multi-shot ASP Encoding for the Aircraft Routing and Maintenance Planning Problem PRESENTER: Pierre Tassel ABSTRACT. Aircraft routing and maintenance planning are integral parts of the airline scheduling process. We model and solve these combinatorial optimization problems with Answer Set Programming (ASP), contrasting traditional single-shot ASP solving methods to multi-shot solving techniques geared to the rapid discovery of near-optimal solutions to sub-problems of gradually increasing granularity. |

Pickup at 18:00 from the Technion (tour of Haifa; no food will be provided).