09:15 | Treewidth-aware Reductions of Normal ASP to SAT – Is Normal ASP Harder than SAT after All? ABSTRACT. Answer Set Programming (ASP) is a widely used paradigm and toolkit for modeling and solving problems in knowledge representation (KR). There are plenty of results dedicated to studying the hardness of (fragments of) ASP. So far, these studies have resulted in characterizations in terms of computational complexity as well as in fine-grained insights presented in the form of dichotomy-style results, lower bounds for translations to other formalisms like Boolean satisfiability (SAT), and even detailed parameterized complexity landscapes. A quite generic and prominent parameter in parameterized complexity, originating from graph theory, is the so-called treewidth, which in a sense captures the structural density of a program. Recently, the number of treewidth-based solvers related to SAT has been increasing. While several translations from (normal) ASP to SAT exist, there is no reduction that preserves treewidth, or is at least aware of the treewidth increase it incurs. This paper presents a novel reduction from normal ASP to SAT that is aware of the treewidth and guarantees that a slight increase of treewidth is indeed sufficient. We then establish that, when considering treewidth, the fragment of normal ASP is already slightly harder than SAT (under reasonable assumptions in computational complexity). This also confirms that our reduction probably cannot be significantly improved and that the slight increase of treewidth is unavoidable. |
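To see why such a reduction is delicate, consider this toy illustration (ours, not from the paper): Clark's completion alone is not a faithful SAT translation in the presence of positive cycles, which is why level mappings, and with them a potential treewidth increase, enter the picture.

    % Normal program with a positive cycle:
    a :- b.
    b :- a.
    % Clark's completion yields (a <-> b) and (b <-> a), which the
    % assignment {a, b} satisfies -- yet {a, b} is not a stable model,
    % since neither atom has an acyclic derivation. A faithful SAT
    % encoding must additionally rule out such self-supporting loops,
    % e.g. via level mappings (orderings on atom derivations), and
    % encoding these compactly is precisely where the slight treewidth
    % overhead quantified in the paper comes from.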
09:30 | Inference and Learning with Model Uncertainty in Probabilistic Logic Programs PRESENTER: Victor Verreet ABSTRACT. An issue that has received limited attention in probabilistic logic programming (PLP) is the modeling of so-called epistemic uncertainty, the uncertainty about the model itself. Accurately quantifying this uncertainty is paramount to robust inference, learning and ultimately decision making. We introduce BetaProbLog, a PLP language that can model epistemic uncertainty. BetaProbLog has sound semantics, an effective inference algorithm that combines Monte Carlo techniques with knowledge compilation, and a parameter learning algorithm. |
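As a hedged sketch of the idea (the Beta-annotation syntax below is our assumption for illustration, not necessarily BetaProbLog's concrete syntax): instead of a point probability, a fact can carry a Beta distribution expressing uncertainty about the probability itself.

    % Standard ProbLog: a point estimate of a probability.
    0.1::burglary.
    % Epistemic uncertainty (assumed syntax): the probability of
    % 'earthquake' is itself uncertain, modeled as Beta(2, 8)
    % rather than a fixed number.
    beta(2,8)::earthquake.
    alarm :- burglary.
    alarm :- earthquake.
    % Inference then yields a distribution (e.g. mean and variance)
    % over the query probability instead of a single value.
    query(alarm).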
09:45 | Tractable Reasoning using Logic Programs with Intensional Concepts PRESENTER: Ricardo Gonçalves ABSTRACT. Recent developments triggered by initiatives such as the Semantic Web, Linked Open Data, the Web of Things, and geographic information systems have resulted in the wide and increasing availability of machine-processable data and knowledge in the form of data streams and knowledge bases. Applications building on such knowledge require defeasible reasoning with modal and intensional concepts, such as time, space, and obligations. For example, in the presence of data streams, conclusions may have to be revised when new information arrives. The current literature features a variety of domain-specific formalisms that allow for defeasible reasoning using specific intensional concepts. However, many of these formalisms are computationally intractable and limited to one of the mentioned application domains. In this paper, we define a general method for obtaining defeasible inferences over intensional concepts, and we study conditions under which these inferences are computable in polynomial time. |
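A hedged, schematic example of the kind of rule the paper targets (the temporal annotation below is ours, purely illustrative):

    % A bus is expected to arrive five minutes after its scheduled
    % time unless there is evidence to the contrary; '@ T' is an
    % illustrative temporal annotation, 'not' is default negation.
    expected(arrive(Bus)) @ T+5 :- scheduled(Bus) @ T, not delayed(Bus).
    % A newly streamed fact 'delayed(bus7)' defeats the earlier
    % conclusion 'expected(arrive(bus7)) @ 17', forcing a revision.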
10:00 | Towards Dynamic Consistency Checking in Goal-directed Predicate Answer Set Programming PRESENTER: Joaquin Arias ABSTRACT. Goal-directed evaluation of Answer Set Programs is gaining traction thanks to its amenability to create AI systems that can, due to the evaluation mechanism used, generate explanations and justifications. s(CASP) is one of these systems and has already been used to write reasoning systems in several fields. It provides enhanced expressiveness w.r.t. other ASP systems due to its ability to use constraints, data structures, and unbound variables natively. However, the performance of existing s(CASP) implementations is not on par with other ASP systems: model consistency is checked once models have been generated, in keeping with the generate-and-test paradigm. In this work, we present a variation of the top-down evaluation strategy, termed Dynamic Consistency Checking, which interleaves model generation and consistency checking. This makes it possible to determine when a literal is not compatible with the denials associated with the global constraints in the program, prune the current execution branch, and choose a different alternative. This strategy is especially (but not exclusively) relevant in problems with a high combinatorial component. We have experimentally observed speedups of up to 90x w.r.t. the standard versions of s(CASP). |
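A hedged toy example of the mechanism (ours, not from the paper): under generate-and-test, the denials below are only consulted after a full model candidate is built; Dynamic Consistency Checking consults them at each choice.

    % Choose a color for each node (even loop through negation):
    red(X)  :- node(X), not blue(X).
    blue(X) :- node(X), not red(X).
    node(1). node(2).
    edge(1,2).
    % Denials: adjacent nodes may not share a color.
    :- edge(X,Y), red(X), red(Y).
    :- edge(X,Y), blue(X), blue(Y).
    % Under generate-and-test, a complete coloring is produced and
    % then checked; with DCC, committing to red(1) makes red(2)
    % immediately incompatible with the first denial, so that branch
    % is pruned before the model is completed.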
10:15 | ApproxASP – A Scalable Approximate Answer Set Counter PRESENTER: Mohimenul Kabir ABSTRACT. Answer Set Programming (ASP) is a framework in artificial intelligence and knowledge representation for declarative modeling and problem solving. Modern ASP solvers focus on the computation or enumeration of answer sets. However, a variety of probabilistic applications in reasoning or logic programming require counting answer sets. While counting can be done by enumeration, simple enumeration becomes immediately infeasible if the number of solutions is high. On the other hand, approaches to exact counting have high worst-case complexity; indeed, exact counting is already impractical in propositional model counting. In this work, we present a scalable approach to approximate counting for ASP. Our approach is based on systematically adding parity (XOR) constraints to ASP programs, which divide the search space. We prove that adding random XOR constraints partitions the answer sets of an ASP program. In practice, we use a Gaussian-elimination-based approach by lifting ideas from SAT to ASP and integrate it into a state-of-the-art ASP solver, which we call ApproxASP. Finally, our experimental evaluation shows the scalability of our approach over existing ASP systems. |
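To make the XOR idea concrete, here is a hedged sketch (our encoding in clingo-style syntax, not ApproxASP's internal Gauss-Jordan machinery) of one random parity constraint over atoms a, b, c:

    % Program whose answer sets are to be counted:
    a :- not b.  b :- not a.
    c :- a.
    % Answer sets: {a, c} and {b}. One random XOR constraint,
    % a XOR b XOR c = 1, keeps only answer sets in which an odd
    % number of the three atoms is true:
    :- N = #count{ 1 : a; 2 : b; 3 : c }, N \ 2 == 0.
    % {a, c} has even parity and is eliminated; {b} survives.
    % Each random XOR roughly halves the set of answer sets, which
    % is what the approximate counter exploits.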
Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).
Two Languages, One System: Tightly Connecting XSB Prolog and Python
Abstract: Python, ranked first on the May 2022 Tiobe index, is a hugely popular language, heavily used in machine learning and other applications. Prolog, ranked twenty-first on the May 2022 Tiobe index, while less popular has important reasoning and knowledge representation capabilities, particularly since modern Prologs support constraint-based reasoning, tabling-based deduction, and probabilistic inference. Despite their differences, Prolog and Python share important commonalities. First, both Prolog and CPython (the standard Python implementation) are written in C with well-developed interfaces to other C programs. In addition, both languages are dynamically typed with data structures that are recursively generated in just a few ways. In fact, nearly all core data structures of the two languages can be efficiently bi-translated, leading to a tight connection of the two systems. This talk presents the design, experience, and implications of such a connection using XSB Prolog version 5.0. The connection for XSB to call Python has led to XSB orchestrating commercial projects using interfaces to Elasticsearch, dense vector storage, NLP systems, Google Maps, and to a 14.6-billion-triple Wikidata graph. The connection for Python to call XSB allows XSB to be imported as any other Python module, so that XSB can easily be invoked from Jupyter notebooks or other graphical interfaces. On a more speculative level, the talk mentions how this work might be leveraged for research in neuro-symbolic learning, natural language processing, and cross-language type inference.
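As a hedged illustration of what such a bridge looks like from the Prolog side (assuming a janus-style bridge predicate; the exact predicate name and signature may differ by version):

    % Calling Python from XSB Prolog, assuming a bridge predicate
    % py_func(Module, Call, Result):
    ?- py_func(math, sqrt(2.0), R).
    %    R = 1.4142135623730951
    % In the other direction, Python imports XSB like any other
    % module and poses queries, so e.g. a Jupyter notebook can
    % consult a knowledge base and iterate over the answers to a
    % Prolog goal.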
15:00 | PRESENTER: Fabrizio Riguzzi ABSTRACT. In the last years, the Hyper-parameter Optimization (HPO) research field has gained more and more attention. Many works have focused on finding the best combination of a network's hyperparameters (HPs) or architecture. The state-of-the-art algorithm for HPO is Bayesian Optimization (BO). This is because it keeps track of past results obtained during the optimization and uses this experience to build a probabilistic model mapping HPs to a probability density of the objective function. BO builds a surrogate probabilistic model of the objective function, finds the HP values that perform best on the surrogate model, and updates it with new results. In this work, we developed a system called Symbolic DNN-Tuner that logically evaluates the results obtained from the training and validation phases and, by applying symbolic tuning rules, fixes the network architecture and its HPs, thereby improving performance. Symbolic DNN-Tuner improves BO applied to Deep Neural Networks (DNNs) by adding an analysis of the results of the network on the training and validation sets. This analysis is performed by exploiting rule-based programming, and in particular by using Probabilistic Logic Programming (PLP). |
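A hedged sketch of what such a symbolic tuning rule could look like in PLP (our illustration; the system's actual rule syntax may differ):

    % Observed diagnosis facts from the last training run:
    high_train_acc.  low_val_acc.
    overfitting :- high_train_acc, low_val_acc.
    % Probabilistic tuning actions, ProbLog-style: each action is
    % suggested with a (learnable) probability when overfitting holds.
    0.6::action(add_dropout)       :- overfitting.
    0.4::action(l2_regularization) :- overfitting.
    % The most probable action is applied to the network, and the
    % probabilities are updated from the observed improvement.
    query(action(A)).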
15:15 | Graph-based Interpretation of Normal Logic Programs PRESENTER: Gopal Gupta ABSTRACT. In this paper we present a dependency graph-based method for computing the various semantics of normal logic programs. Our method employs conjunction nodes to unambiguously represent the dependency graph of normal logic programs. The dependency graph can be transformed suitably in a semantics-preserving manner and retranslated into an equivalent normal logic program. This transformed normal logic program can be augmented with a few rules written in answer set programming (ASP), and the CLINGO system can then be used to compute its answer sets. Depending on how these additional rules are coded in ASP, one can compute models of the original normal logic program under the well-founded semantics, the stable model semantics, or the co-stable model semantics. In each case, a justification for each atom in the model is also generated. We report on the implementation of our method as well as its performance evaluation. |
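A hedged illustration of the ambiguity that conjunction nodes resolve (our example, not the paper's): in a plain atom-level dependency graph, the two programs below induce the same graph even though their meanings differ.

    % Program P1:
    p :- q, not r.
    % Program P2:
    p :- q.
    p :- not r.
    % Both induce the same atom-level dependency edges: a positive
    % edge q -> p and a negative edge r -> p, so an atom-level graph
    % cannot distinguish them. Giving P1's body an explicit
    % conjunction node c1, with edges q -> c1, r -(neg)-> c1 and
    % c1 -> p, makes the graph an unambiguous representation that
    % can be transformed and translated back into a program.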
16:00 | PRESENTER: David Geleßus ABSTRACT. Even though the core of the Prolog programming language has been standardized by ISO since 1995, it remains difficult to write complex Prolog programs that can run unmodified on multiple Prolog implementations. Indeed, implementations sometimes deviate from the ISO standard and the standard itself fails to cover many features that are essential in practice. Most Prolog applications thus have to rely on non-standard features, often making them dependent on one particular Prolog implementation and incompatible with others. We examine one such Prolog application: ProB, which has been developed for over 20 years in SICStus Prolog. The article describes how we managed to refactor the codebase of ProB to also support SWI-Prolog, with the goal of verifying ProB's results using two independent toolchains. This required a multitude of adjustments, ranging from extending the SICStus emulation in SWI-Prolog to better modularizing the monolithic ProB codebase. We also describe notable compatibility issues and other differences that we encountered in the process, and how we were able to deal with them with only a few major code changes. |
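A hedged example of the kind of dialect bridging such a port involves (ours, not taken from ProB's codebase), assuming both systems' conditional-compilation directives and dialect flags:

    % between/3 is a built-in predicate in SWI-Prolog but lives in
    % library(between) in SICStus Prolog; conditional compilation
    % bridges the gap so the same file loads on both systems.
    :- if(current_prolog_flag(dialect, sicstus)).
    :- use_module(library(between), [between/3]).
    :- endif.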
16:22 | Building Information Modeling using Constraint Logic Programming PRESENTER: Joaquin Arias ABSTRACT. Building Information Modeling (BIM) produces three-dimensional, object-oriented models of buildings that combine geometrical information with a wide range of properties about materials, products, safety, and so on. BIM is slowly but inevitably revolutionizing the architecture, engineering, and construction (AEC) industry. Buildings need to be compliant with regulations about stability, safety, and environmental impact. Manual compliance checking is tedious and error-prone, and amending flaws discovered only at construction time causes huge additional costs and delays. Several tools can check BIM models for conformance with rules/guidelines. For example, Singapore's CORENET e-Submission System checks fire safety. But since the current BIM exchange format only contains basic information about building objects, separate, ad-hoc model pre-processing is required to determine, e.g., evacuation routes. Moreover, such tools face difficulties in adapting existing built-in rules and/or adding new ones (to cater for building regulations, which can vary not only among countries but also among parts of the same city), if this is possible at all. We propose the use of logic-based executable formalisms (CLP and Constraint ASP) to couple BIM models with advanced knowledge representation and reasoning capabilities. Previous experience shows that such formalisms can be used to uniformly capture and reason with knowledge (including ambiguity) in a large variety of domains. Additionally, incorporating checking within design tools makes it possible to ensure that models are rule-compliant at every step, which prevents erroneous designs from having to be (partially) redone, something that is also costly and burdensome. To validate our proposal, we implemented a preliminary reasoner under CLP(Q/R) and ASP with constraints and evaluated it with several BIM models. |
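A hedged miniature of the approach (our example, not one of the paper's rules), checking a door-width regulation in CLP(Q); the thresholds are made up for the illustration:

    :- use_module(library(clpq)).
    % A door on an evacuation route must be at least 0.9 m wide,
    % or at least 1.2 m when it serves more than 50 occupants.
    compliant_door(Width, Occupants) :-
        Occupants =< 50,
        { Width >= 9/10 }.
    compliant_door(Width, Occupants) :-
        Occupants > 50,
        { Width >= 12/10 }.
    % ?- compliant_door(W, 80).
    %    answers with the residual constraint {W >= 6/5}, i.e. the
    %    door is compliant iff it is at least 1.2 m wide.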
16:44 | ABSTRACT. Graph Neural Networks share with Logic Programming several key relational inference mechanisms. The datasets on which they are trained and evaluated can be seen as database facts containing ground terms. This makes it possible to model their inference mechanisms with equivalent logic programs, to better understand not just how they propagate information between the entities involved in the machine learning process, but also to infer limits on what can be learned from a given dataset and how well that might generalize to unseen test data. This leads us to the key idea of this paper: modeling, with the help of a logic program, the information flows involved in learning to infer properties of new nodes from the link structure of a graph and the information content of its nodes, given their known connections to nodes with possibly similar properties. The problem is known as graph node property prediction, and our approach consists in emulating, with the help of a Prolog program, the key information propagation steps of a Graph Neural Network's training and inference stages. We test our approach on the ogbn-arxiv node property inference benchmark. To infer class labels for nodes representing papers in a citation network, we distill the dependency trees of the text associated with each node into directed acyclic graphs that we encode as ground Prolog terms. Together with the set of their references to other papers, they become facts in a database on which we reason with the help of a Prolog program that mimics the information propagation in graph neural networks predicting node properties. In the process, we invent ground term similarity relations that help infer labels in the test set by propagating node properties from similar nodes in the training set, and we evaluate their effectiveness in comparison with that of the graph's link structure. Finally, we implement explanation generators that unveil performance upper bounds inherent to the dataset. As a practical outcome, we obtain a logic program that, when seen as a machine learning algorithm, performs close to the state of the art on the node property prediction benchmark. |
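A hedged, much-simplified sketch of the propagation step (ours; the paper's program also uses term similarity and is far more elaborate):

    % Facts: cites(Paper, Cited); label(Paper, Class) for training nodes.
    cites(p1, p2).  cites(p1, p3).
    label(p2, cs_lg).  label(p3, cs_lg).
    % Votes: how many cited neighbours of P carry label L.
    votes(P, L, N) :-
        setof(Q, (cites(P, Q), label(Q, L)), Qs),
        length(Qs, N).
    % For a given node P, predict the label with the most votes,
    % mimicking one step of GNN-style message passing.
    predicted(P, L) :-
        aggregate_all(max(N, L1), votes(P, L1, N), max(_, L)).
    % ?- predicted(p1, L).   % L = cs_lg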
17:06 | ABSTRACT. With the help of a compact Prolog-based theorem prover for Intuitionistic Propositional Logic, we synthesize minimal assumptions under which a given formula becomes a theorem. After applying our synthesis algorithm to cover basic abductive reasoning mechanisms, we synthesize conjunctions of literals that mimic rows of truth tables in classical or intermediate logics, and we abduce conditional hypotheses that turn the theorems of classical or intermediate logics into theorems in intuitionistic logic. One step further, we generalize our abductive reasoning mechanism to synthesize more expressive sequent premises using a minimal set of canonical formulas, to which arbitrary formulas in the calculus can be reduced while preserving their provability. Organized as a self-contained literate Prolog program, the paper supports interactive exploration of its content and ensures full replicability of our results. |
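A hedged skeleton of the abduction loop (ours; ipc_prove/1, candidate_literals/2 and size_ordered_subset/2 are hypothetical helpers standing in for the paper's prover and candidate generators):

    % Abduce a size-minimal set of literal assumptions Hyps under
    % which Formula becomes an intuitionistic theorem.
    abduce(Formula, Hyps) :-
        candidate_literals(Formula, Lits),   % atoms of Formula and their negations
        size_ordered_subset(Lits, Hyps),     % enumerate smaller subsets first
        add_premises(Hyps, Formula, Goal),
        ipc_prove(Goal),                     % provable intuitionistically?
        !.                                   % first success is size-minimal

    % Turn assumptions [h1, ..., hn] into the goal h1 -> ... -> hn -> F.
    add_premises([], F, F).
    add_premises([H|Hs], F, (H -> G)) :- add_premises(Hs, F, G).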