11:00 | PRESENTER: Arseny Skryagin ABSTRACT. The goal of combining the robustness of neural networks and the expressivity of symbolic methods has rekindled the interest in Neuro-Symbolic AI. One specifically interesting branch of research is deep probabilistic programming languages (DPPLs) which carry out probabilistic logical programming via the probability estimations of deep neural networks. However, recent SOTA DPPL approaches allow only for limited conditional probabilistic queries and do not offer the power of true joint probability estimation. In our work, we propose an easy integration of tractable probabilistic inference within a DPPL. To this end we introduce SLASH, a novel DPPL that consists of Neural-Probabilistic Predicates (NPPs) and a logical program, united via answer set programming. NPPs are a novel design principle allowing for the unification of all deep model types and combinations thereof to be represented as a single probabilistic predicate. In this context, we introduce a novel $+/-$ notation for answering various types of probabilistic queries by adjusting the atom notations of a predicate. We evaluate SLASH on the benchmark task of MNIST addition as well as novel tasks for DPPLs such as missing data prediction, generative learning and set prediction with state-of-the-art performance, thereby showing the effectiveness and generality of our method. |
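SLASH itself is not reproduced here, but the joint inference behind the MNIST-addition benchmark can be sketched independently of any DPPL: given the output distributions of two (hypothetical) neural digit classifiers, the distribution over their sum is the discrete convolution of the two. A minimal sketch, with all names ours rather than SLASH's API:

```python
import numpy as np

def sum_distribution(p1, p2):
    """Distribution of d1 + d2 for independent digit predictions p1, p2."""
    p_sum = np.zeros(19)  # possible sums range over 0..18
    for d1 in range(10):
        for d2 in range(10):
            p_sum[d1 + d2] += p1[d1] * p2[d2]
    return p_sum

# Uniform digit beliefs as a stand-in for neural network outputs.
p1 = np.full(10, 0.1)
p2 = np.full(10, 0.1)
p = sum_distribution(p1, p2)
```

Under uniform inputs the sum distribution peaks at 9, the value with the most digit combinations; a DPPL additionally learns the digit distributions themselves from the supervision on the sum.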
11:30 | PRESENTER: David Tena Cucala ABSTRACT. Rule learning involves developing machine learning models that can be applied to a set of logical facts to predict additional facts, as well as providing methods for extracting from the learned model a set of logical rules that explain symbolically the model's predictions. Existing approaches of this kind, however, do not formally describe the relationship between the model's predictions and the derivations of the extracted rules; rather, it is often claimed without justification that the extracted rules 'approximate' or 'explain' the model, and rule quality is evaluated by manual inspection. In this paper, we study the formal properties of Neural-LP, a prominent rule learning approach. We show that the rules extracted from Neural-LP models can be both unsound and incomplete: on the same input dataset, the extracted rules can derive facts not predicted by the model, and the model can make predictions not derived by the extracted rules. We also propose a modification to the Neural-LP model that ensures that the extracted rules are always sound and complete. Finally, we show that, on several prominent benchmarks, the classification performance of our modified model is comparable to that of the standard Neural-LP model. Thus, faithful learning of rules is feasible from both a theoretical and practical point of view. |
12:00 | Looking Inside the Black-Box: Logic-based Explanations for Neural Networks PRESENTER: João Ferreira ABSTRACT. Deep neural network-based methods have recently enjoyed great popularity due to their effectiveness in solving difficult tasks. Requiring minimal human effort, they have turned into an almost ubiquitous solution in multiple domains. However, due to the size and complexity of typical neural network architectures, as well as the sub-symbolic nature of the representations generated by their neuronal activations, neural networks are essentially opaque, making it nearly impossible to explain to humans the reasoning behind their decisions. We address this issue by developing a procedure to induce human-understandable logic-based theories that attempt to represent the classification process of a given neural network model. Exploring the setting of a synthetic image classification task, we provide empirical results to assess the quality of the developed theories for different neural network models, compare them to existing theories on that task, and give evidence that the theories developed through our method are faithful to the representations learned by the neural networks that they are built to describe. |
11:00 | PRESENTER: Nicolas Schwind ABSTRACT. Belief revision aims at incorporating a new piece of information into the beliefs of an agent. This process can be modeled by Katsuno and Mendelzon's (KM) standard rationality postulates. However, these postulates suppose a classical setting and do not govern the case where (some parts of) the beliefs of the agent are not consistent. In this work we discuss how to adapt these postulates when the underlying logic is Priest's Logic of Paradox, in order to model a rational change while being a conservative extension of KM belief revision. This paper is a summary of (Schwind, Konieczny, and Pino Pérez 2022). |
11:10 | A conditional, a fuzzy and a probabilistic interpretation of self-organising maps (Abstract) PRESENTER: Laura Giordano ABSTRACT. This is an extended abstract of the paper: Giordano L.; Gliozzi V.; and Theseider Dupré D.; A conditional, a fuzzy and a probabilistic interpretation of self-organising maps. In Journal of Logic and Computation, Volume 32, Issue 2, March 2022, Pages 178-205. DOI: https://doi.org/10.1093/logcom/exab082 In this paper we establish a link between a preferential and a fuzzy semantics for description logics and self-organizing maps (SOMs), which have been proposed as possible candidates to explain the psychological mechanisms underlying category generalization. In particular, we show that the input/output behavior of a SOM after training can be described by a fuzzy description logic interpretation as well as by a preferential interpretation, based on a concept-wise multi-preferential semantics, which takes into account preferences with respect to different concepts and has been recently proposed for ranked and for weighted defeasible description logics. Properties of the network can be proven by model checking both on the preferential and on the fuzzy interpretations. Starting from the fuzzy interpretation, we also provide a probabilistic account for this neural network model based on Zadeh's probability of fuzzy events. |
11:20 | Influence-Driven Explanations for Bayesian Network Classifiers PRESENTER: Francesca Toni ABSTRACT. We propose a novel approach to building influence-driven explanations (IDXs) for (discrete) Bayesian network classifiers (BCs). IDXs feature two main advantages with respect to other commonly adopted explanation methods. First, IDXs may be generated using the (causal) influences between intermediate, in addition to merely input and output, variables within BCs, thus providing a deep, rather than shallow, account of the BCs' behaviour. Second, IDXs are generated according to a configurable set of properties, specifying which influences between variables count towards explanations. Our approach is thus flexible and can be tailored to the requirements of particular contexts or users. Leveraging this flexibility, we propose novel IDX instances as well as IDX instances capturing existing approaches. We demonstrate IDXs' capability to explain various forms of BCs, and assess the advantages of our proposed IDX instances with both theoretical and empirical analyses. |
11:30 | PRESENTER: Alice Tarzariol ABSTRACT. Our work addresses the generation of first-order constraints to reduce symmetries and improve the solving performance for classes of instances of a given combinatorial problem. To this end, we devise a model-oriented approach obtaining positive and negative examples for an Inductive Logic Programming task by analyzing instance-specific symmetries for a training set of instances. The learned first-order constraints are interpretable and can be used to augment a general problem encoding in Answer Set Programming. This extended abstract introduces the context of our work, our contributions, and our results. |
11:40 | Abstract Argumentation with Markov Networks (Extended Abstract) ABSTRACT. Extended abstract for the Recently Published Research Track |
11:50 | Interpreting Neural Networks as Quantitative Argumentation Frameworks (Extended Abstract) ABSTRACT. Extended abstract for the Recently Published Research Track |
12:00 | An Axiomatic Approach to Revising Preferences (Extended Abstract) PRESENTER: Adrian Haret ABSTRACT. We study a model of preference revision in which a prior preference over a set of alternatives is adjusted in order to accommodate input from an authoritative source, while maintaining certain structural constraints (e.g., transitivity, completeness), and without giving up more information than strictly necessary. We analyze this model under two aspects: the first allows us to capture natural distance-based operators, at the cost of a mismatch between the input and output formats of the revision operator. Requiring the input and output to be aligned yields a second type of operator, which we characterize using preferences on the comparisons in the prior preference. Preference revision is set in a logic-based framework and using the formal machinery of belief change, along the lines of the well-known AGM approach: we propose rationality postulates for each of the two versions of our model and derive representation results, thus situating preference revision within the larger family of belief change operators. |
12:10 | PRESENTER: Julian Chingoma ABSTRACT. We simulate voting rules for multiwinner elections in a model of judgment aggregation that distinguishes between rationality and feasibility constraints. These constraints restrict the structure of the individual judgments and of the collective outcome computed by the rule, respectively. We extend known results regarding the simulation of single-winner voting rules to the multiwinner setting, both for elections with ordinal preferences and for elections with approval-based preferences. This not only provides us with a new tool to analyse multiwinner elections, but it also suggests the definition of new judgment aggregation rules, by generalising some of the principles at the core of well-known multiwinner voting rules to this richer setting. We explore this opportunity with regards to the principle of proportionality. Finally, in view of the computational difficulty associated with many judgment aggregation rules, we investigate the computational complexity of our embeddings and of the new judgment aggregation rules we put forward. |
Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).
14:00 | Sum-Product Loop Programming: From Probabilistic Circuits to Loop Programming PRESENTER: Viktor Pfanschilling ABSTRACT. Recently, Probabilistic Circuits such as Sum-Product Networks have received growing attention, as they can represent complex features but still provide tractable inference. Although quite successful, they unfortunately lack the capability to handle control structures, such as for and while loops. In this work, we introduce Sum-Product Loop Language (SPLL), a novel programming language that is capable of tractable inference on complex probabilistic code that includes loops. SPLL has dual semantics: every program has generative semantics familiar to most programmers and probabilistic semantics that assign a probability to each possible result. This way, the programmer can describe how to generate samples almost like in any standard programming language. The language takes care of computing the probability values of all results for free at run time. We demonstrate that SPLL inherits the beneficial properties of PCs, namely tractability and differentiability, while generalizing to other distributions and programs. |
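The dual semantics described in the abstract can be illustrated outside SPLL (whose actual syntax is not shown here): for a program with a bounded loop, the probabilistic semantics can be recovered by enumerating all execution paths and summing their probabilities per result. A hedged sketch of this idea, with all names ours:

```python
import itertools
from fractions import Fraction

def result_distribution(n_flips=3, p_heads=Fraction(1, 2)):
    """Program: x = 0; repeat n_flips times: flip a coin, on heads x += 1; return x.
    Generative semantics would sample it; here we compute the exact
    probability of each result by enumerating every execution path."""
    dist = {}
    for flips in itertools.product([0, 1], repeat=n_flips):
        prob = Fraction(1)
        for f in flips:
            prob *= p_heads if f else (1 - p_heads)
        x = sum(flips)  # the program's result along this path
        dist[x] = dist.get(x, Fraction(0)) + prob
    return dist
```

This brute-force enumeration is exponential in the number of random choices; the point of compiling to probabilistic circuits is precisely to make such inference tractable.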
14:30 | PRESENTER: Blai Bonet ABSTRACT. We consider the problem of learning generalized policies for classical planning domains without supervision and using graph neural networks (GNNs) from small instances represented in lifted STRIPS. The problem has been considered before but the proposed neural architectures are complex and the results are often mixed. In this work, we use a simple and general GNN architecture and aim at crisp experimental results and understanding: either the policy greedy in the learned value function achieves close to 100% generalization over larger instances than those in training, or the failure must be understood, and possibly fixed, logically. For this, we exploit the relation established between the expressive power of GNNs and the C2 fragment of first-order logic (namely, FOL with 2 variables and counting quantifiers). We find for example that domains with general policies that require more expressive features can be solved with GNNs once the states are extended with suitable "derived atoms", encoding role compositions and transitive closures that do not fit into C2. The work is most closely related to the GNN approach for learning optimal general policies in a supervised fashion by Ståhlberg, Bonet and Geffner 2021; yet the learned policies are no longer required to be optimal (which expands the scope, as many planning domains do not have general optimal policies) and are learned without supervision. Interestingly, value-based reinforcement learning methods that aim to produce optimal policies do not yield policies that generalize, as the goals of optimality and generality are indeed often in conflict. |
15:00 | PRESENTER: Yaniv Aspis ABSTRACT. Neuro-symbolic reasoning approaches proposed in recent years combine a neural perception component with a symbolic reasoning component to solve a downstream task. By doing so, these approaches can provide neural networks with symbolic reasoning capabilities, improve their interpretability and enable generalization beyond the training task. However, this often comes at the cost of poor training time, with potential scalability issues. In this paper, we propose a scalable neuro-symbolic approach, called Embed2Sym. We complement a two-stage (perception and reasoning) neural network architecture designed to solve a downstream task end-to-end with a symbolic optimisation method for extracting learned latent concepts. Specifically, the trained perception network generates clusters in embedding space that are identified and labelled using symbolic knowledge and a symbolic solver. With the latent concepts identified, a neuro-symbolic model is constructed by combining the perception network with the symbolic knowledge of the downstream task, resulting in a model that is interpretable and transferable. Our evaluation shows that Embed2Sym outperforms state-of-the-art neuro-symbolic systems on benchmark tasks in terms of training time by several orders of magnitude while providing similar if not better accuracy. |
14:00 | PRESENTER: Curtis Bright ABSTRACT. Lam's problem is to prove the nonexistence of a type of geometric structure that has intrigued mathematicians for centuries. In this work we reduce Lam's problem to a Boolean satisfiability (SAT) problem and generate certificates which for the first time can be used to verify the resolution of Lam's problem. The search space underlying the problem has many nontrivial symmetries—necessitating an approach that detects and removes symmetries as early as possible. We couple a method of isomorph-free exhaustive generation with a SAT solver and show that this significantly improves the performance of the solver when symmetries are present. |
14:10 | Rushing and Strolling among Answer Sets -- Navigation Made Easy (Extended Abstract) PRESENTER: Dominik Rusovac ABSTRACT. Answer set programming (ASP) is a popular declarative programming paradigm with a wide range of applications in artificial intelligence. Oftentimes, when modeling an AI problem with ASP, and in particular when we are interested in more than a simple search for optimal solutions, an actual solution, the differences between solutions, or the number of solutions of the ASP program matter: for example, when a user aims to identify a specific answer set according to her needs, or requires the total number of diverging solutions to comprehend probabilistic applications such as reasoning in medical domains. In such cases, only certain problem-specific and handcrafted encoding techniques are available to navigate the solution space of ASP programs, which is often not enough. In this paper, we propose a formal and general framework for interactive navigation toward desired subsets of answer sets, analogous to faceted browsing. Our approach enables the user to explore the solution space by consciously zooming in or out of sub-spaces of solutions at a certain configurable pace. We illustrate that weighted faceted navigation is computationally hard. Finally, we provide an implementation of our approach that demonstrates the feasibility of our framework for incomprehensible solution spaces. |
14:20 | A General Framework for Stable Roommates Problems using Answer Set Programming PRESENTER: Esra Erdem ABSTRACT. This is an extended abstract of our paper entitled "A General Framework for Stable Roommates Problems using Answer Set Programming", which is published in Theory and Practice of Logic Programming (https://doi.org/10.1017/S1471068420000277). The Stable Roommates problem (SR) is characterized by the preferences of agents over other agents as roommates: each agent ranks all others in strict order of preference. A solution to SR is then a partition of the agents into pairs so that each pair shares a room, and there is no pair of agents that would block this matching (i.e., agents who prefer each other to their roommates in the matching). There are interesting variations of SR that are motivated by applications (e.g., the preference lists may be incomplete (SRI) and involve ties (SRTI)), and that try to find a more fair solution (e.g., Egalitarian SR). Unlike the Stable Marriage problem, not every SR instance is guaranteed to have a solution. For that reason, there are also variations of SR that try to find a good-enough solution (e.g., Almost SR). Most of these variations are NP-hard. We introduce a formal framework, called SRTI-ASP, utilizing the logic programming paradigm Answer Set Programming, that is provably correct and general enough to solve many such variations of SR. Our empirical analysis shows that SRTI-ASP is also promising for applications. |
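Independently of the SRTI-ASP encoding (not reproduced here), the stability condition in the abstract can be made concrete: a matching is stable exactly when it admits no blocking pair. A small illustrative check, with hypothetical agent names and rank-ordered preference lists:

```python
def blocking_pairs(matching, prefs):
    """Pairs (a, b) not matched to each other where each prefers the
    other to their current roommate (earlier in the list = preferred)."""
    def rank(agent, other):
        return prefs[agent].index(other)
    blocking = []
    agents = sorted(prefs)
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            if matching[a] == b:
                continue  # already roommates
            if rank(a, b) < rank(a, matching[a]) and rank(b, a) < rank(b, matching[b]):
                blocking.append((a, b))
    return blocking

# Complete strict preferences for four agents (an invented example).
prefs = {1: [2, 3, 4], 2: [1, 3, 4], 3: [1, 2, 4], 4: [1, 2, 3]}
stable = {1: 2, 2: 1, 3: 4, 4: 3}    # no blocking pair
unstable = {1: 3, 3: 1, 2: 4, 4: 2}  # agents 1 and 2 block this matching
```

Variations such as SRI and SRTI relax the completeness and strictness of the preference lists, which is where a declarative ASP encoding pays off over ad hoc checks like this one.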
14:30 | Reasoning about Cardinal Directions between 3-Dimensional Extended Objects using Answer Set Programming PRESENTER: Yusuf Izmirlioglu ABSTRACT. We propose a novel formal framework (called 3D-NCDC-ASP) to represent and reason about cardinal directions between extended objects in 3-dimensional (3D) space, using Answer Set Programming (ASP). 3D-NCDC-ASP extends Cardinal Directional Calculus (CDC) with a new type of default constraints, and NCDC-ASP to 3D. 3D-NCDC-ASP provides a flexible platform offering different types of reasoning: Nonmonotonic reasoning with defaults, checking consistency of a set of constraints on 3D cardinal directions between objects, explaining inconsistencies, and inferring missing CDC relations. We prove the soundness of 3D-NCDC-ASP, and illustrate its usefulness with applications. |
14:40 | PRESENTER: Esra Erdem ABSTRACT. Hybrid classical planning, where classical task planning is integrated with low-level feasibility checks (e.g., motion planning that utilizes collision checks), has received attention in the AI and Robotics communities [ErdemHPPU11, HertleDKN12, Plaku12, KaelblingL13, SrivastavaFRCRA14, LagriffoulDBSK14, Hadfield-MenellGCA15, ErdemPS16], due to the cognitive skills required by autonomous robots in the real world. Since these studies rely on classical planning methods, they assume complete knowledge about the initial states and full observability of the environment, while computing a hybrid classical plan (i.e., a sequence of actuation actions) offline. During the execution of these plans, discrepancies might occur between the expected states and the observed states, due to contingencies that were not or could not be considered during offline planning. For instance, a robot may consider that initially all utensils in a cabinet are clean, and compute a hybrid classical plan to set up a table accordingly. While executing this plan, the robot may detect (e.g., by its sensors) that the plate that it picks from the cabinet is not clean. To cope with such contingencies, robots are usually equipped with a plan execution monitoring algorithm. According to such an algorithm, when a discrepancy is detected, the robot tries to recover from it, generally by replanning from that point on. As an alternative to computing offline hybrid classical plans and then dealing with discrepancies/surprises online during plan execution by monitoring and replanning, we propose a parallel offline hybrid method, called \hcplan. This method extends hybrid planning beyond classical planning, inheriting the advantages of conditional planning to deal with contingencies due to incomplete knowledge and partial observability.
In conditional planning, in addition to actuation actions with deterministic outcomes, sensing actions with nondeterministic outcomes are considered as part of planning, in order to gather the relevant missing knowledge. Every possible outcome of a sensing action leads to a possibly different conditional plan. Therefore, a conditional plan looks like a tree of actions, where branching occurs at vertices that characterize sensing actions, and the other vertices denote actuation actions. Each branch of such a tree, from the root to a leaf, essentially represents a possible execution of actuation and sensing actions to reach a goal state from the initial state. \hcplan utilizes this advantageous aspect of conditional planning while computing offline plans: planning for nondeterministic sensing actions to gather missing knowledge when needed, while planning for deterministic actuation actions. Moreover, \hcplan integrates feasibility checks not only for the executability of actuation actions, but also for the executability of sensing actions. \hcplan is a parallel algorithm: it computes a hybrid conditional plan by intelligently orchestrating the computation of its branches in parallel. According to \hcplan, the computation of every branch starting from a vertex that characterizes a sensing action is viewed as a computation task with a priority (e.g., defined relative to the depth of that vertex), and batches of computation tasks are solved in parallel with respect to their priorities, so as to consume the computational resources more effectively. Furthermore, \hcplan avoids re-computation of the same task that may appear at different parts of the tree. Each computation task handled by \hcplan takes as input an incomplete initial state, and returns a hybrid sequential plan (i.e., a sequence of deterministic actuation actions and nondeterministic sensing actions) that describes a branch of the tree.
A hybrid sequential plan is not a hybrid classical plan, since it may involve nondeterministic sensing actions as well. Therefore, solving a computation task (i.e., computing each branch of a hybrid conditional plan) also requires cognitive abilities beyond hybrid classical planning. For that, \hcplan adapts some advantages of causality-based non-monotonic logics [Turner08, GelfondL98] for a novel solution to these challenges. In particular, \hcplan utilizes defaults to express assumptions (e.g., the location of an object remains the same unless changed by an actuation action), exogeneity of actions to express that sensing actions may occur at any time when possible, and nondeterministic causal laws to choose an outcome of a sensing action included in the computation of a branch. Defaults are useful for formalizing assumptions in that, when an exception occurs contrary to the assumptions, it does not lead to an inconsistency; this is an important aspect of defaults, providing a solution to the famous frame problem. Exogeneity of actions is useful in that the order of nondeterministic sensing actions need not be specified in advance while solving a task. \hcplan uses the nonmonotonic reasoning system CCalc [McCainT97] to compute each branch of a hybrid conditional plan. We show a real-world application of \hcplan in which a mobile bi-manual service robot sets up a table for lunch. While computing a hybrid conditional plan, the robot has to consider various contingencies that are not known in advance. Furthermore, while planning its actions, the robot has to consider different types of feasibility checks (e.g., reachability, graspability, collisions), as well as commonsense knowledge. Once a hybrid conditional plan is computed by \hcplan, we also illustrate possible executions of this plan.
To evaluate the strengths and weaknesses of \hcplan by means of experimental evaluations, we construct a benchmark suite over the kitchen table setup domain. We evaluate \hcplan from the perspectives of computation time and plan quality, and compare it with closely related planners and with an execution monitoring algorithm that utilizes hybrid planning and guided re-planning. We refer the reader to our journal paper [NoumanPE2021] for further information. |
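The tree-shaped conditional plans described in this abstract can be illustrated schematically (this sketches only the plan structure, not \hcplan's actual representation; all action names are invented): actuation actions form a sequence, and each sensing action branches on its possible outcomes.

```python
# A conditional plan: actuation steps interleaved with sensing branch points.
plan = ["goto_cabinet", "pick_plate",
        ("sense_clean", {  # nondeterministic sensing action
            True:  ["place_plate"],
            False: ["put_in_dishwasher", "pick_clean_plate", "place_plate"],
        })]

def execute(plan, sense):
    """Walk one branch of the plan tree; `sense` maps each sensing
    action to the outcome observed during this execution."""
    trace = []
    for step in plan:
        if isinstance(step, tuple):          # sensing action: branch
            action, branches = step
            trace.append(action)
            trace += execute(branches[sense[action]], sense)
        else:                                # actuation action
            trace.append(step)
    return trace
```

Each call with a fixed `sense` mapping traverses exactly one root-to-leaf branch, which is what a hybrid sequential plan corresponds to in the abstract.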
14:50 | LACE: A Logical Approach to Collective Entity Resolution PRESENTER: Meghyn Bienvenu ABSTRACT. In this paper, we revisit the problem of entity resolution and propose a novel, logical framework, LACE, which mixes declarative and procedural elements to achieve a number of desirable properties. Our approach is fundamentally declarative in nature: it utilizes hard and soft rules to specify conditions under which pairs of entity references must or may be merged, together with denial constraints that enforce consistency of the resulting instance. Importantly, however, rule bodies are evaluated on the instance resulting from applying the already ‘derived’ merges. It is the dynamic nature of our semantics that enables us to capture collective entity resolution scenarios, where merges can trigger further merges, while at the same time ensuring that every merge can be justified. As the denial constraints restrict which merges can be performed together, we obtain a space of (maximal) solutions, from which we can naturally define notions of certain and possible merges and query answers. We explore the computational properties of our framework and determine the precise computational complexity of the relevant decision problems. Furthermore, as a first step towards implementing our approach, we demonstrate how we can encode the various reasoning tasks using answer set programming. |
15:00 | Rule Learning over Knowledge Graphs with Genetic Logic Programming PRESENTER: Emanuel Sallinger ABSTRACT. Using logical rules plays an important role in Knowledge Graph (KG) construction and completion. Such rules not only encode the expert background knowledge and the relational patterns among the data, but also infer new knowledge and insights from them. Due to the exponential search space, current approaches resort to exhaustive search with a number of heuristics and syntactic restrictions on the rule language, which limits the quality of the outcome rules. In this paper, we extend the rule hypothesis space to the general Datalog rule space by proposing a novel genetic logic programming algorithm named Evoda. It uses an iterative process to learn high-quality rules over large-scale KGs in a matter of seconds. We performed experiments over multiple real-world KGs with various evaluation metrics to show its capability to mine higher-quality rules and make more precise predictions in the extended rule space. Additionally, we applied it to KG completion tasks to illustrate its competitiveness with several state-of-the-art embedding models. |
17:00 | PRESENTER: Tommaso Flaminio ABSTRACT. Conditionals play a key role in different areas of logic and probabilistic reasoning, and they have been studied and formalised from different angles. In this paper we focus on de Finetti's notion of conditional as a three-valued object, with betting-based semantics, and on the related treatment of conditionals as random quantities, as mainly developed by two of the authors. Compound conditionals have been studied in the literature, but not in full generality. In this paper we provide a natural procedure to explicitly attach conditional random quantities to arbitrary compound conditionals that also allows us to compute their previsions. By studying the properties of these random quantities, we show that, in fact, the set of compound conditionals can be endowed with a Boolean algebraic structure. In doing so, we pave the way to build a bridge between the long-standing tradition of three-valued conditionals and a more recent proposal of looking at conditionals as elements of suitable Boolean algebras. |
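The three-valued, betting-based view mentioned in the abstract can be stated compactly (this is the standard de Finetti setup; the notation here is ours): the conditional $(a \mid b)$ is a random quantity whose value is settled by the bet when $b$ is true, and refunded at the agreed rate $x$ when $b$ is false.

```latex
(a \mid b) \;=\;
\begin{cases}
1 & \text{if } a \wedge b \text{ is true},\\
0 & \text{if } \lnot a \wedge b \text{ is true},\\
x & \text{if } b \text{ is false},
\end{cases}
\qquad x = P(a \mid b).
```

Identifying events with their indicator random quantities, this reads $(a \mid b) = ab + x(1-b)$, and coherence forces the prevision of $(a \mid b)$ to equal $x$; identities of this kind are the starting point for extending the construction to compound conditionals.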
17:30 | A General Framework for Modelling Conditional Reasoning - Preliminary Report PRESENTER: Giovanni Casini ABSTRACT. We introduce and investigate here a formalisation for conditionals that allows the definition of a broad class of reasoning systems. This framework covers the most popular kinds of conditional reasoning in logic-based KR: the semantics we propose is appropriate for a structural analysis of those conditionals that do not satisfy the closure properties associated with classical logics. |
17:00 | Act for Your Duties but Maintain Your Rights PRESENTER: Shufang Zhu ABSTRACT. Most of the synthesis literature has focused on studying how to synthesize a strategy to fulfill a task. This task is a duty for the agent. In this paper, we argue that intelligent agents should also be equipped with rights, that is, tasks that the agent itself can choose to fulfill (e.g., the right to recharge the battery). The agent should be able to maintain these rights while acting for its duties. We study this issue in the context of LTLf synthesis: we give duties and rights in terms of LTLf specifications, and synthesize a suitable strategy to achieve the duties that can be modified on-the-fly to also achieve the rights, if the agent chooses to do so while executing it. We show that handling rights does not make synthesis substantially more difficult, although it requires a more sophisticated solution concept than standard LTLf synthesis. We also extend our results to the case in which further duties and rights are given to the agent while already executing. |
17:30 | PRESENTER: Francois Schwarzentruber ABSTRACT. We compare the syntactic approach to the representation of epistemic states in the multi-agent domain with the possible-world semantic approach. The syntactic approach exploits belief bases and models belief change by means of dynamic operators for belief base expansion. The semantic approach relies on multi-relational Kripke models and represents belief change through so-called action models of Dynamic Epistemic Logic (DEL). We first show how to translate a formula of the belief base approach into DEL: in particular, we provide a specific action model scheme corresponding to the process of expanding an agent's belief base with a formula. Conversely, we identify a fragment of DEL that can be translated into the multi-agent dynamic epistemic language interpreted on belief bases. |
Pickup at 18:00 from the Technion (tour of Haifa; no food will be provided).