JELIA 2025: 19TH EUROPEAN CONFERENCE ON LOGICS IN ARTIFICIAL INTELLIGENCE
PROGRAM FOR MONDAY, SEPTEMBER 1ST

09:10-10:30 Session 1: Invited Talk + Special Track
09:10
Logic for Safe Reinforcement Learning

ABSTRACT. Reinforcement learning is famously about learning by trial and error, with the agent getting feedback from the environment in the form of rewards. However, it is often quite difficult to specify a reward function that ensures the learned behaviour is what the designer intended. It is also difficult to ensure that the agent’s behaviour is always safe, especially when training happens online rather than offline. I will talk about using logic to provide a declarative specification of desired/safe behaviours and to ensure that the learned behaviour conforms to those specifications. In particular, I will talk about how to block unsafe actions both during training and after deployment of the agent.

10:10
Reinforcement Learning Meets Logic Programming: Towards Explainable AI

ABSTRACT. This paper introduces a neuro-symbolic framework designed to predict and explain subsequent facts from current observations. Facts are generated through causal relationships, which can be modeled by a set of propositional logic rules representing the domain knowledge. However, these rules remain unknown to the agent. By observing the facts, the agent constructs an approximation of them, which is then used to predict and explain new facts. The proposed framework can learn and adapt to different environments modeled by various forms of logic programs, also handling negation and recursion. Most notably, it can handle dynamic environments whose structure evolves over time. In these scenarios, the agent modifies its understanding of the environment to capture new observations, guaranteeing that its model of the domain knowledge remains up-to-date. To achieve this goal, our approach leverages the A2C (Advantage Actor-Critic) reinforcement learning algorithm. This choice allows us to integrate reinforcement learning principles into our logic framework. Through this research, we aspire to contribute to the development of explainable neuro-symbolic Artificial Intelligence systems in dynamic environments.

10:30-11:00 Coffee Break
11:00-12:30 Session 2: Propositional Reasoning, QBF, and Satisfiability Problems - 1/2
11:00
Exact Approaches for the Diverse Satisfiability Problem

ABSTRACT. Given a Conjunctive Normal Form (CNF) formula $\phi$ and a positive integer $k$, the Diverse Satisfiability problem (Diverse SAT) consists in finding $k$ assignments satisfying $\phi$ while maximizing the sum of pairwise distances among them. Previous related work focuses mainly on the design of heuristic algorithms, whereas this paper focuses on introducing exact methods. A quadratic integer programming (QIP) model is first proposed. Then, based on two novel weight functions, namely the direct weight (DW) and the incremental weight (IW), we reformulate the QIP model into two integer linear programming (ILP) models, which can also be adapted to Maximum Satisfiability (MaxSAT) instances. Extensive experiments were conducted to evaluate the efficiency of the proposed methods, showing that they are able to compute optimal solutions for the studied instances, with the linear models achieving superior performance.
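As a toy illustration of the objective only (not the paper's QIP/ILP models), Diverse SAT can be stated as a brute-force search over models, with Hamming distance as the pairwise distance; all names below are hypothetical:

```python
from itertools import combinations, product

def satisfies(assignment, cnf):
    # A CNF is a list of clauses; a clause is a list of literals
    # (positive int = variable true, negative int = variable false).
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in cnf)

def diverse_sat(cnf, n_vars, k):
    """Brute-force Diverse SAT: choose k models of cnf maximizing the
    sum of pairwise Hamming distances between them."""
    models = [a for a in product([False, True], repeat=n_vars)
              if satisfies(a, cnf)]
    best, best_score = None, -1
    for subset in combinations(models, k):
        score = sum(sum(x != y for x, y in zip(a, b))
                    for a, b in combinations(subset, 2))
        if score > best_score:
            best, best_score = subset, score
    return best, best_score
```

For the formula (x1 or x2) and (not x1 or not x2), the two models are the two assignments setting exactly one variable true, and for k = 2 the objective value is their Hamming distance, 2. The exact methods in the paper replace this exponential enumeration with solver calls.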

11:20
Maximum Satisfiability Formulations for Nonlinear Integer Programming

ABSTRACT. This paper introduces novel Maximum Satisfiability (MaxSAT) formulations for encoding Nonlinear Integer Programming (NLIP) problems with polynomial functions. Specifically designed for discrete polynomial functions, we develop a generic framework based on three established integer encoding techniques from the literature. Our approach treats each polynomial term as an atomic unit and demonstrates how these can be efficiently encoded through compact representations of integer assignments. Additionally, we present two distinct decomposition methods based on polynomial term degrees. This work lays the theoretical foundation for future research and has the potential to extend the applicability of modern MaxSAT solvers to a wider range of optimization problems.
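The abstract does not name the three integer encoding techniques it builds on; one standard candidate from the SAT literature is the order (unary) encoding, sketched below under that assumption, with hypothetical function names:

```python
def order_encode(x, upper):
    """Order (unary) encoding of an integer x in [0, upper]:
    bit i is True iff x >= i + 1, so valid encodings are exactly
    the monotone vectors (a True prefix followed by a False suffix)."""
    return [x >= i + 1 for i in range(upper)]

def order_decode(bits):
    # The encoded value equals the number of True bits.
    return sum(bits)

def is_consistent(bits):
    # Consistency clauses b_{i+1} -> b_i forbid a True after a False.
    return all(bits[i] or not bits[i + 1] for i in range(len(bits) - 1))
```

In a MaxSAT formulation, the consistency implications become hard clauses, and constraints on the integer (and on polynomial terms built from it) are expressed over the Boolean variables b_i.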

11:40
Inclusion with repetitions and Boolean constants -- implication problems revisited

ABSTRACT. Inclusion dependencies form one of the most widely used dependency classes. We expand existing results on the axiomatization and computational complexity of their implication problem to two extended variants. First, we present an alternative completeness proof for standard inclusion dependencies and generalize it to inclusion dependencies with repetitions that can express equalities between attributes. The proof uses only two values, enabling us to work within the Boolean setting. Furthermore, we study inclusion dependencies with Boolean constants, provide a complete axiomatization, and show that no such system is k-ary. We also establish that the decision problems for both extended versions remain PSPACE-complete. The extended inclusion dependencies are common in team semantics, which serves as the formal framework for the results.

12:00
On Extracting Legal Arguments

ABSTRACT. Building on the principles of case-based reasoning, we investigate the extraction of arguments from legal case databases. An argument is modeled as a set of factors that frequently support one party (plaintiff or defendant) over the other. The relevance of an argument is assessed by the number of cases that confirm it versus those that contradict it. Following established practices in data mining, we introduce a condensed representation of arguments called closed arguments, which capture the strongest form of support given the factors they contain. We develop propositional SAT-based encodings to enable the extraction of both arguments and closed arguments using SAT solvers. Additionally, we define a more compact condensed representation called maximal arguments, which eliminates redundancy by retaining only the most informative arguments with respect to the given thresholds. To extract these, we propose a level-wise algorithm that builds on our SAT-based approach for argument extraction. Preliminary experiments demonstrate the feasibility of our SAT-based mining methods.
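The counting of confirming versus contradicting cases can be illustrated with a small sketch (a hypothetical toy, not the paper's SAT encoding):

```python
def support(argument_factors, side, cases):
    """Count cases confirming vs. contradicting an argument.
    A case is a pair (factors, winner); it confirms the argument for
    `side` if it contains all the argument's factors and `side` won,
    and contradicts it if it contains them but the other side won."""
    confirming = sum(1 for factors, winner in cases
                     if argument_factors <= factors and winner == side)
    contradicting = sum(1 for factors, winner in cases
                        if argument_factors <= factors and winner != side)
    return confirming, contradicting
```

On a three-case toy database, the argument {f1, f2} for the plaintiff may be confirmed twice and never contradicted, while the weaker argument {f1} picks up a contradicting case as well; closed and maximal arguments condense this space of factor sets.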

12:20
Interpolating Parametric Array Theories

ABSTRACT. Parametric array theories are extensions of the quantifier-free theory of arrays with relations that hold componentwise. We show that these theories retain rich (general and uniform) quantifier-free interpolation properties. Our results include the interpolation properties of the simple flat array fragment, which were left open in the literature.

12:30-14:00 Lunch Break
14:00-15:30 Session 3: Non-monotonic Reasoning and Belief Change + Description Logics
14:00
Comparing Dialectical Systems: Contradiction and Counterexample in Iterated Belief Revision

ABSTRACT. Dialectical systems are a mathematical formalism for modeling an agent engaged in iterated belief revision over a stream of arguments. Originally introduced to capture how a working mathematician (or a research community) refines beliefs in the pursuit of truth, these systems also serve as natural models for the iterated belief revision of an automated agent.

The literature distinguishes three main models of dialectical systems: (d-)dialectical systems, which revise beliefs when they are seen to be inconsistent; p-dialectical systems, which revise beliefs upon finding a counterexample; and q-dialectical systems, which can do both.

We answer an open problem in the literature by proving that q-dialectical systems are strictly more powerful than p-dialectical systems, which are themselves known to be strictly stronger than (d-)dialectical systems. This result highlights the complementary roles of counterexample and contradiction in the reasoning processes of mathematicians and research communities. Also, in the context of automated belief revision, it underscores the necessity of incorporating both forms of feedback for adaptive reasoning.

14:20
Extending Defeasibility for Propositional Standpoint Logics

ABSTRACT. In this paper, we introduce a new defeasible version of propositional standpoint logic by integrating Kraus et al.’s defeasible conditionals, Britz and Varzinczak’s notions of defeasible necessity and distinct possibility, along with Leisegang et al.’s approach to defeasibility, into the standpoint logics of Gómez Álvarez and Rudolph. The resulting logical framework allows for the expression of defeasibility on the level of implications, standpoint modal operators, and standpoint-sharpening statements. We provide a preferential semantics for this extended language and propose a tableau calculus, which is shown to be sound and complete with respect to preferential entailment. We also establish the computational complexity of the tableau procedure to be in PSPACE.

14:40
Towards Practicable Defeasible Reasoning for ABoxes

ABSTRACT. Defeasible reasoning, that is, reasoning with rules that allow for exceptions, has been a longstanding challenge in knowledge representation. The KLM paradigm has been very successful for propositional logics, but its application to DLs has produced mixed results. Most approaches have at least one major drawback in terms of semantics, complexity, or generality of the formalism. Although a few solutions for reasoning with defeasible TBox axioms have been proposed, ABoxes have largely been neglected. In this paper, we consider defeasible inclusions in a very expressive DL with closed predicates, restricted in a manner that circumvents some of the major semantic challenges and computational bottlenecks of related approaches. Our approach allows drawing defeasible inferences about given objects without increasing the computational complexity of the underlying DL. We especially look at the data complexity of defeasible reasoning and show that even highly restricted settings are hard for the second level of the polynomial hierarchy. Nevertheless, we identify a restricted fragment that ensures tractable reasoning.

15:00
Closure-Based Tractable Possibilistic Inference from Partially Ordered DL-Lite Ontologies

ABSTRACT. Handling partially specified inconsistent information is a major challenge, especially when balancing the richness of query answers with the need to control computational complexity. We address this challenge by proposing an efficient method for computing a consistent and enriched fragment of data, known as a repair, within knowledge bases that rely on a stable terminological component and incorporate partially ordered uncertainty in the data (ABox). Our approach, grounded in possibility theory, avoids exhaustive enumeration of conflicts or justifications by applying a positive deductive closure directly to the partially ordered ABox. This enables repair computation through simple consistency checks over data subsets, ensuring tractability while supporting an extended set of plausible inferences. Beyond this main contribution on efficient repair computation, we briefly introduce a semantic characterisation of repairs that generalises the classical notion of models for consistent knowledge bases. Finally, we present an experimental evaluation against existing possibilistic approaches, demonstrating both practical effectiveness and computational benefits.

15:20
The InfOCF Library for Reasoning With Conditional Belief Bases

ABSTRACT. This paper presents an overview of the Python library InfOCF, which provides powerful tools for working with conditional belief bases consisting of defeasible rules of the form “If A, then usually B”. Because many operations on belief bases, like checking their consistency or performing nonmonotonic reasoning to answer queries, require solving propositional satisfiability problems and generalizations thereof, InfOCF builds on the power of current SMT and MaxSAT solvers. To achieve solver independence, established interfaces like PySMT are used, allowing the user to select from different solvers, and multiple queries can be run in parallel to speed up answering. Besides its rigorous focus on modularity and extensibility, notable features of InfOCF include comprehensive methods for caching program states, enabling the reuse of intermediate results across different queries to the same belief base. Successful applications realized with and now available in InfOCF cover state-of-the-art implementations of nonmonotonic reasoning with p-entailment, system Z, lexicographic inference, c-inference, and system W; each of these implementations scales up and outperforms all previous implementations of the corresponding inference operator by an order of magnitude.

15:30-16:00 Coffee Break
16:00-16:40 Session 4: Deontic Reasoning
16:00
GL-based calculi for PCL and its deontic cousin

ABSTRACT. We introduce a natural sequent calculus for the preferential conditional logic PCL via embeddings into provability logic GL, achieving optimal complexity and enabling countermodel extraction. Extending the method to PCL with reflexivity and absoluteness – corresponding to Åqvist’s deontic system F with cautious monotony – we employ hypersequents to capture the S5 modality; the resulting calculus subsumes the known calculi for the weaker systems E and F within the Åqvist family.

16:20
Dual Scale Detachment

ABSTRACT. Drawing motivation from the philosophical literature on normative reasons, detachment systems model the way that interaction or competition between reasons determines the deontic status of options. This paper presents detachment systems that capture the “dual scale model” of weighing reasons, as outlined in Chris Tucker’s recent book “The Weight of Reasons”. The dual scale model relaxes an assumption of the standard “single scale model” that much of the informal philosophical literature on reasons relies on, explicitly or implicitly. We define dual scale, single scale, and two further detachment systems, and provide a principle-based analysis comparing these systems.

16:40-17:30 Session 5: Special Track
16:40
Formal Explanations of Black-Box Ranking Functions

ABSTRACT. Ranking functions play a crucial role in supporting the decision-making process across various critical domains. Given their widespread use, coupled with the fact that these functions are often directly learned from data, it is becoming more and more important to provide explanations that make the underlying models more transparent. In this paper, we propose the first formal approach to explain ranking functions. Our approach is model-agnostic, requiring only black-box access to the ranking function. We study the formal properties of this new approach, including an analysis of the complexity of computing an explanation. To demonstrate its feasibility, we implement our approach and conduct an experimental evaluation using as a case study a neural network model for predicting breast cancer recurrence.

17:00
Why this and not that? A Logic-based Framework for Contrastive Explanations

ABSTRACT. We define several canonical problems related to contrastive explanations, each answering a question of the form “Why P but not Q?”. The problems compute causes for both P and Q, explicitly comparing their differences. We investigate the basic properties of our definitions in the setting of propositional logic. We show, inter alia, that our framework captures a cardinality-minimal version of existing contrastive explanations in the literature. Furthermore, we provide an extensive analysis of the computational complexity of the problems. We also implement the problems for CNF formulas using answer set programming and present several examples demonstrating how they work in practice.
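One common reading of a cardinality-minimal contrastive explanation, the smallest change to the input that flips the verdict from P to Q, can be sketched by brute force (a hypothetical toy, not the paper's ASP implementation):

```python
from itertools import combinations

def min_contrastive(classify, assignment, target):
    """Cardinality-minimal contrastive explanation by brute force:
    the smallest set of variables whose flip moves the classifier's
    verdict on `assignment` to `target`."""
    vars_ = sorted(assignment)
    for size in range(len(vars_) + 1):
        for flip in combinations(vars_, size):
            changed = dict(assignment)
            for v in flip:
                changed[v] = not changed[v]
            if classify(changed) == target:
                return set(flip)
    return None  # target verdict is unreachable
```

For a classifier that accepts exactly when both a and b are true, the input {a: true, b: false, c: true} is rejected, and flipping the single variable b is the smallest change that makes it accepted.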

17:20
A Uniform Language for Safety, Robustness and Explainability

ABSTRACT. The areas of safety and robustness are key areas where communities from verification, logic, AI, and machine learning come together. Safety and robustness are often formalized in terms of point-wise metrics: given an input point, we identify a ball or a region around it where certain properties hold, such as consistency of the prediction. However, the broader goal of logical AI applied to machine learning correctness would ideally integrate safety and robustness conditions with explanations. Nonetheless, no existing work discusses these properties in a unified manner.

What we consider in this paper is a new, simple framework for formalizing a variety of such properties. We are able to characterize the robustness condition, safety conditions, hyper-safety conditions, counterfactual explanations, and fairness, among others. We can express these properties using simple notation for an abstract model based on a binary classifier. We hope these definitions will lead to logic-based frameworks that contribute to all of these areas jointly.