JELIA 2023: 18TH EUROPEAN CONFERENCE ON LOGICS IN ARTIFICIAL INTELLIGENCE
PROGRAM FOR THURSDAY, SEPTEMBER 21ST

08:40-10:10 Session 10: Reasoning about Causes and Dependencies
Location: APB E023
08:40
Logics with Probabilistic Team Semantics and the Boolean Negation

ABSTRACT. We study the expressivity and complexity of various logics in probabilistic team semantics with the Boolean negation. In particular, we study the extension of probabilistic independence logic with the Boolean negation, and a recently introduced logic FOPT. We give a comprehensive picture of the relative expressivities of these logics together with the most studied logics in the probabilistic team semantics setting, and relate their expressivity to a numerical variant of second-order logic. In addition, we introduce novel entropy atoms and show that the extension of first-order logic by entropy atoms subsumes probabilistic independence logic. Finally, we obtain some results on the complexity of model checking, validity, and satisfiability of our logics.

08:58
Strongly Complete Axiomatization for a Logic with Probabilistic Interventionist Counterfactuals

ABSTRACT. Causal multiteam semantics is a framework where probabilistic notions and causal inference can be studied in a unified setting. We study a logic (PCO) that features marginal probabilities and interventionist counterfactuals, and allows expressing conditional probability statements, "do" expressions and other mixtures of causal and probabilistic reasoning. Our main contribution is a strongly complete infinitary axiomatisation for PCO.

09:16
Boosting Definability Bipartition Computation using SAT Witnesses

ABSTRACT. Bipartitioning the set of variables Var(Σ) of a propositional formula Σ w.r.t. definability consists in pointing out a bipartition ⟨I, O⟩ of Var(Σ) such that Σ defines the variables of O (outputs) in terms of the variables in I (inputs), i.e., for every o ∈ O, there exists a formula Φo over I such that o ⇔ Φo is a logical consequence of Σ. The existence of Φo given o, I, and Σ is a coNP-complete problem, and as such, it can be addressed in practice using a SAT solver. From a computational perspective, definability bipartitioning has been shown to be a valuable preprocessing technique for model counting, a key task for a number of AI problems involving probabilities. To maximize the benefits offered by such a preprocessing, one is interested in deriving subset-minimal bipartitions in terms of input variables, i.e., bipartitions such that for every i ∈ I, ⟨I \ {i}, O ∪ {i}⟩ is not a definability bipartition. We show how the computation of subset-minimal bipartitions can be boosted by leveraging not only the decisions furnished by SAT solvers (as done in previous approaches), but also the SAT witnesses (models and cores) justifying those decisions.
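
For intuition, the coNP definability check mentioned above can be cast as a single SAT call via Padoa's method. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function name, variable numbering and CNF encoding are assumptions.

# A minimal sketch (assumed encoding) of Padoa's method: variable o is defined
# by Sigma in terms of the inputs I iff
#   Sigma(X) & Sigma(X') & AND_{i in I}(i <-> i') & o & -o'
# is unsatisfiable, where X' is a primed copy of the variables.
from pysat.solvers import Glucose3

def defines(cnf, n_vars, inputs, o):
    shift = n_vars  # the primed copy of variable v is v + shift
    with Glucose3() as solver:
        for clause in cnf:                               # Sigma(X)
            solver.add_clause(clause)
        for clause in cnf:                               # Sigma(X')
            solver.add_clause([l + shift if l > 0 else l - shift for l in clause])
        for i in inputs:                                 # i <-> i'
            solver.add_clause([-i, i + shift])
            solver.add_clause([i, -(i + shift)])
        # satisfiable with o true and o' false means o is NOT defined by I
        return not solver.solve(assumptions=[o, -(o + shift)])

# Toy example: Sigma encodes o <-> (a AND b), with a=1, b=2, o=3.
cnf = [[-3, 1], [-3, 2], [-1, -2, 3]]
print(defines(cnf, n_vars=3, inputs=[1, 2], o=3))  # True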

09:34
Formalizing Statistical Causality via Modal Logic

ABSTRACT. We propose a formal language for describing and explaining statistical causality. Concretely, we define Statistical Causality Language (StaCL) for expressing causal effects and specifying the requirements for causal inference. StaCL incorporates modal operators for interventions to express causal properties between probability distributions in different possible worlds in a Kripke model. We formalize axioms for probability distributions, interventions, and causal predicates using StaCL formulas. These axioms are expressive enough to derive the rules of Pearl's do-calculus. Finally, we demonstrate by examples that StaCL can be used to specify and explain the correctness of statistical causal inference.
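
For context, one of Pearl's do-calculus rules that such axioms must be able to derive is the action/observation exchange rule, reproduced here in its standard form only for orientation (the StaCL formulation in the paper may differ):

$P(y \mid do(x), do(z), w) = P(y \mid do(x), z, w)$ whenever $(Y \perp\!\!\!\perp Z \mid X, W)$ holds in $G_{\overline{X}\,\underline{Z}}$,

where $G_{\overline{X}\,\underline{Z}}$ is the causal graph with the edges into $X$ and out of $Z$ removed.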

09:52
Hybrid Modal Operators for Definite Descriptions

ABSTRACT. Definite descriptions, that is, expressions of the form 'the x such that φ(x)', constitute a long-standing topic of research in logic, philosophy, and linguistics. More recently, they have been studied in the context of ontology-based data management, where they are used to replace obscure object identifiers and to perform complex query answering. In this paper, we aim to obtain a better understanding of the computational complexity and expressive power of operators for definite descriptions. We show that adding such operators to the basic (propositional) modal language comes at the price of increasing the complexity of the satisfiability problem from PSpace to ExpTime. However, if descriptions are Boolean only, there is no increase in complexity. Furthermore, we compare definite descriptions with the related operators from hybrid and counting logics, which are widely studied in the literature and also allow us to express specific forms of reference. We prove that the operators for definite descriptions are strictly more expressive than hybrid operators, but strictly less expressive than counting operators. We also study the behaviour of definite descriptions over linear structures, which are often exploited in temporal logics. We show that in this setting the expressive power results are the same as in the general case; in contrast, if the linear structures are isomorphic to the integers, definite descriptions become as expressive as counting operators.

10:10-10:40 Coffee Break
10:40-11:16 Session 11: Weighted Automata and Data Graphs
Location: APB E023
10:40
Deterministic Weighted Automata under Partial Observability

ABSTRACT. Weighted automata are a basic tool for specification in quantitative verification, allowing one to express quantitative features of analysed systems such as resource consumption. Quantitative specification can be facilitated by automata learning, as there are classic results on Angluin-style learning of weighted automata. Existing work assumes perfect information about the values returned by the target weighted automaton. In assisted synthesis of a quantitative specification, knowledge of the exact values is a strong assumption and may be infeasible. In our work, we address this issue by introducing a new framework of partially-observable deterministic weighted automata, in which weighted automata return intervals containing the computed values of words instead of the exact values. We study the basic properties of this framework, with a particular focus on the challenges of actively learning partially-observable deterministic weighted automata.
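
As a toy illustration of the interval-observation idea only (the sum semantics, the fixed-width observation map, and all names below are assumptions made for this sketch, not the paper's definitions):

# A deterministic weighted automaton under the sum semantics whose output is
# only ever observed up to a fixed-width interval containing the exact value.
class PartiallyObservableDWA:
    def __init__(self, transitions, initial, bucket=5):
        # transitions: dict mapping (state, letter) -> (next_state, weight)
        self.transitions = transitions
        self.initial = initial
        self.bucket = bucket  # width of the observation interval

    def exact_value(self, word):
        state, total = self.initial, 0
        for letter in word:
            state, weight = self.transitions[(state, letter)]
            total += weight
        return total

    def observe(self, word):
        """Return an interval containing the exact value of the word."""
        value = self.exact_value(word)
        lo = (value // self.bucket) * self.bucket
        return lo, lo + self.bucket

dwa = PartiallyObservableDWA({('q', 'a'): ('q', 3), ('q', 'b'): ('q', 7)}, 'q')
print(dwa.observe('aab'))  # (10, 15): the learner never sees the exact value 13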

10:58
Data Graphs with Incomplete Information (and a Way to Complete Them)
PRESENTER: Valentin Cassano

ABSTRACT. We introduce a modal language for reasoning about data graphs with incomplete information. Such data graphs are formally represented as models in which data value functions are partial — to capture what is unknown. In this setting, we also allow for unknown data values to be learned. We provide a sound and strongly complete axiomatization.

11:16-12:41 Session 12: Reasoning About Knowledge and Beliefs
Location: APB E023
11:16
Splitting Techniques for Conditional Belief Bases in the Context of c-Representations

ABSTRACT. Splitting belief bases is fundamental for efficient reasoning and for a better understanding of the interrelationships among knowledge entities. In this paper, we survey the most important splitting techniques for conditional belief bases in the context of c-representations, which constitute a specific class of ranking models that, as shown in recent papers, exhibit outstanding behavior not only with respect to belief base splitting. We provide a splitting hierarchy, in particular by proving that safe conditional syntax splittings and case splittings are so-called CSP-constraint splittings. We advance the level of knowledge about CSP-constraint splittings and present an algorithm for computing CSP-constraint splittings.

11:34
Belief Reconfiguration

ABSTRACT. We study a generalisation of iterated belief revision in a setting where we keep track not only of the received information (in the form of messages) but also of the source of each message. We also suppose that we have a special source, the oracle, which never fails. That is, all of the information provided by the oracle is assumed to be correct. We then evaluate the reliability of each source by confronting her messages with the facts given by the oracle. In this case it is natural to give higher priority to messages coming from more reliable sources. We therefore re-order (reconfigure) the messages with respect to the reliability of the sources before performing the iterated belief revision. We study how to compute this reliability, and the properties of the corresponding reconfiguration operators.
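
A minimal sketch of the pipeline described above, under assumed definitions (a source's reliability is taken here to be the fraction of its oracle-checkable messages that the oracle confirms; the paper's actual measure and reordering may differ):

def reconfigure(messages, oracle_facts):
    """messages: list of (source, literal) pairs in arrival order.
    oracle_facts: set of literals the oracle asserts (hence assumed true).
    Returns the messages re-ordered so that those from more reliable
    sources come last, i.e. receive the highest priority in revision."""
    def neg(lit):
        return lit[1:] if lit.startswith('-') else '-' + lit

    confirmed, checked = {}, {}
    for src, lit in messages:
        if lit in oracle_facts or neg(lit) in oracle_facts:
            checked[src] = checked.get(src, 0) + 1
            if lit in oracle_facts:
                confirmed[src] = confirmed.get(src, 0) + 1

    def reliability(src):
        return confirmed.get(src, 0) / checked[src] if checked.get(src) else 0.0

    # stable sort: ties keep the original arrival order
    return sorted(messages, key=lambda m: reliability(m[0]))

msgs = [('s1', 'p'), ('s2', '-p'), ('s1', 'q'), ('s2', 'r')]
print(reconfigure(msgs, oracle_facts={'p', 'q'}))
# s2's messages come first (low reliability), s1's come last (high reliability)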

11:52
How Easy it is to Know How: An Upper Bound for the Satisfiability Problem
PRESENTER: Valentin Cassano

ABSTRACT. We investigate the complexity of the satisfiability problem for a modal logic expressing `knowing how' assertions, related to an agent's ability to achieve a certain goal. We take one of the most standard semantics for this kind of logic, based on linear plans. Our main result is a proof that checking satisfiability of a `knowing how' formula can be done in $\Sigma_2^P$. The algorithm we present relies on eliminating nested modalities in a formula, and then performing multiple calls to a satisfiability-checking oracle for propositional logic.

12:10
Non-standard Modalities in Paraconsistent Gödel Logic

ABSTRACT. We introduce a paraconsistent expansion of the G\"{o}del logic with a De Morgan negation $\neg$ and modalities $\blacksquare$ and $\blacklozenge$. We dub the logic $\infoGsquare$ and equip it with Kripke semantics on frames with two (possibly fuzzy) relations: $R^+$ and $R^-$ (interpreted as the degree of trust in affirmations and denials by a given source) and valuations $v_1$ and $v_2$ (positive and negative support) ranging over $[0,1]$ and connected via $\neg$.

We motivate the semantics of $\blacksquare\phi$ (resp., $\blacklozenge\phi$) as infima (suprema) of both positive and negative supports of $\phi$ in $R^+$- and $R^-$-accessible states, respectively. We then prove several instructive semantical properties of $\infoGsquare$. Finally, we devise a tableaux system for $\infoGsquare$ over finitely branching frames and establish the complexity of satisfiability and validity.
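
To make the informal description concrete, one plausible reading of the box clauses, following standard fuzzy (Gödel) Kripke semantics, is sketched below; these clauses are an assumption for illustration and the paper's exact definitions may differ:

$v_1(\blacksquare\phi, w) = \inf_{w' \in W} \bigl( R^+(w, w') \Rightarrow_G v_1(\phi, w') \bigr), \qquad v_2(\blacksquare\phi, w) = \inf_{w' \in W} \bigl( R^-(w, w') \Rightarrow_G v_2(\phi, w') \bigr),$

where $a \Rightarrow_G b = 1$ if $a \le b$ and $a \Rightarrow_G b = b$ otherwise; the clauses for $\blacklozenge$ replace the infima by suprema and the Gödel implication by the minimum.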

12:28
Base-based Model Checking for Multi-Agent Only Believing

ABSTRACT. We present a novel semantics for the language of multi-agent only believing exploiting belief bases, and show how to use it for automatically checking formulas of this language and of its dynamic extension with private belief expansion operators. We provide a PSPACE algorithm for model checking relying on a reduction to QBF, and an alternative dedicated algorithm relying on the exploration of the state space. We present an implementation of the QBF-based algorithm and some experimental results on computation time for a concrete example.

12:41-14:10 Lunch Break
14:10-15:00 Session 13: Keynote: Vaishak Belle
Location: APB E023
14:10
Excursions in First-order Logic and Probability: Infinitely many Random Variables, Continuous Distributions, Recursive Programs and Beyond

ABSTRACT. The unification of first-order logic and probability has been a long-standing concern in philosophy, AI and mathematics. In this talk, I will briefly review our recent results on revisiting that unification. Although there are plenty of approaches in communities such as statistical relational learning, automated planning, and neuro-symbolic AI that leverage and develop languages with logical and probabilistic aspects, they almost always restrict the representation as well as the semantic framework in various ways that do not fully explain how to combine first-order logic and probability theory in a general way. In many cases, this restriction is justified because it may be necessary to focus on practicality and efficiency. However, the search for a restriction-free mathematical theory remains ongoing. In this talk, we discuss our recent results regarding the development of languages that support arbitrary quantification, possibly infinitely many random variables, both discrete and continuous distributions, as well as programming languages built on top of such features to include recursion and branching control.

15:00-15:31 Session 14: Logics for Explainable and Trustworthy AI I
Location: APB E023
15:00
A New Class of Explanations for Classifiers with Non-Binary Features

ABSTRACT. Two types of explanations have received significant attention in the recent literature on analyzing the decisions made by classifiers. The first type explains why a decision was made and is known as a sufficient reason for the decision, also called an abductive or PI-explanation. The second type explains why some other decision was not made and is known as a necessary reason for the decision, also called a contrastive or counterfactual explanation. These explanations were defined for classifiers with binary, discrete and, in some cases, continuous features. We show that these explanations can be significantly improved in the presence of non-binary features, leading to a new class of explanations that convey more information about decisions and the underlying classifiers. Necessary and sufficient reasons were also shown to be the prime implicates and implicants of the complete reason for a decision, which can be obtained using a quantification operator. We show that our improved notions of necessary and sufficient reasons are also prime implicates and implicants, but for an improved notion of complete reason obtained by a new quantification operator that we define and study in this paper.

15:18
Declarative Reasoning on Explanations Using Constraint Logic Programming

ABSTRACT. Explaining opaque Machine Learning (ML) models is an increasingly relevant problem. Current explainable AI (XAI) methods suffer from several shortcomings, among them an insufficient incorporation of background knowledge and a lack of abstraction and interactivity with the user. We propose REASONX, an explanation method based on Constraint Logic Programming (CLP). REASONX can provide declarative, interactive explanations for decision trees, which can be the ML models under analysis or global/local surrogate models of any black-box model. Users can express background or common-sense knowledge using linear constraints and MILP optimization over features of factual and contrastive instances, and interact with the answer constraints at different levels of abstraction through constraint projection. We present here the architecture of REASONX, which consists of a Python layer, closer to the user, and a CLP layer. REASONX's core execution engine is a Prolog meta-program with declarative semantics in terms of logic theories.

15:31-16:00 Coffee Break
16:00-17:20 Session 15: Logics for Explainable and Trustworthy AI II
Location: APB E023
16:00
Efficient Computation of Shap Explanation Scores for Neural Network Classifiers via Knowledge Compilation

ABSTRACT. The use of Shap scores has become widespread in Explainable AI. However, their computation is in general intractable, in particular when done with a black-box classifier such as a neural network. Recent research has unveiled classes of open-box Boolean Circuit classifiers for which Shap can be computed efficiently. We show how to transform binary neural networks into those circuits for efficient Shap computation, using logic-based knowledge compilation techniques. The performance gain is huge, as our experiments show.
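
For reference, the Shap score of a feature is the standard Shapley value of a cooperative game defined from the classifier and the instance being explained (the formula below is the general definition, not the paper's circuit-based algorithm):

$\mathrm{Shap}(i) = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!} \bigl( v(S \cup \{i\}) - v(S) \bigr),$

where $F$ is the set of features and $v(S)$ is the expected value of the classifier when the features in $S$ are fixed to their values in the instance under explanation. The sum ranges over exponentially many subsets, which is why tractable circuit classes matter.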

16:18
Short Boolean Formulas as Explanations in Practice

ABSTRACT. We investigate explainability via short Boolean formulas in the data model with unary relations. As an explanation of length k, we take a Boolean formula of length k that minimizes the error with respect to the target attribute to be explained. We first provide novel quantitative bounds for the expected error in this scenario. We then also demonstrate how the setting works in practice by studying three concrete data sets. In each case, we calculate explanation formulas of different lengths using an encoding in Answer Set Programming. The most accurate formulas we obtain achieve errors similar to other methods on the same data sets. However, due to overfitting, these formulas are not necessarily ideal explanations, so we use cross validation to identify a suitable length for explanations. By limiting to shorter formulas, we obtain explanations that avoid overfitting but are still reasonably accurate and also, importantly, human interpretable.
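
To illustrate the error-minimization objective on a toy scale (the paper's actual computation uses an Answer Set Programming encoding; the length measure, data layout and names below are assumptions for this sketch):

# Brute-force search for a shortest Boolean formula over unary attributes that
# minimizes the number of rows on which it disagrees with the target attribute.
from itertools import product

def formulas(attrs, length):
    """Yield (formula string, evaluator) pairs of exactly `length` symbols,
    counting each attribute, negation and binary connective as one symbol."""
    if length == 1:
        for a in attrs:
            yield a, (lambda row, a=a: row[a])
        return
    for f, ev in formulas(attrs, length - 1):              # negation
        yield '-' + f, (lambda row, ev=ev: not ev(row))
    for left in range(1, length - 1):                      # binary connectives
        for (f1, e1), (f2, e2) in product(formulas(attrs, left),
                                          formulas(attrs, length - 1 - left)):
            yield '(%s & %s)' % (f1, f2), (lambda row, e1=e1, e2=e2: e1(row) and e2(row))
            yield '(%s | %s)' % (f1, f2), (lambda row, e1=e1, e2=e2: e1(row) or e2(row))

def best_explanation(rows, target, attrs, max_len):
    """Return (error, formula) with the fewest misclassified rows among all
    formulas of length at most max_len (shorter formulas win ties)."""
    best = None
    for k in range(1, max_len + 1):
        for f, ev in formulas(attrs, k):
            err = sum(ev(row) != row[target] for row in rows)
            if best is None or err < best[0]:
                best = (err, f)
    return best

rows = [{'a': True,  'b': False, 'y': True},
        {'a': True,  'b': True,  'y': True},
        {'a': False, 'b': True,  'y': False}]
print(best_explanation(rows, target='y', attrs=['a', 'b'], max_len=3))
# (0, 'a'): the single attribute a already explains y with zero error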

16:36
Contrastive Explanations for Answer-Set Programs

ABSTRACT. Answer-Set Programming (ASP) is a popular declarative reasoning and problem solving formalism. Due to the increasing interest in explainability, several explanation approaches have been developed for ASP. However, while those formalisms are correct and interesting in their own right, most are rather technical and less oriented towards philosophical or social concepts of explanation. In this work, we study the notion of contrastive explanation, i.e., answering questions of the form "Why a instead of b?", in the context of ASP. In particular, we are interested in answering why an atom is included in an answer set, whereas another is not. Contrastive explainability has recently become popular due to its strong support from the philosophical, cognitive and social sciences and its apparent ability to provide explanations that are concise and intuitive for humans. We formally define contrastive explanations for ASP based on counterfactual reasoning about programs. Furthermore, we demonstrate the usefulness of the concept on example applications and give some complexity results. The latter also provide a guideline as to how the explanations can be computed in practice.

16:54
Stable Normative Explanations: From Argumentation to Deontic Logic

ABSTRACT. This paper examines how a notion of stable explanation developed elsewhere for Defeasible Logic can be expressed in the context of formal argumentation. With this done, we discuss the deontic meaning of this reconstruction and show how to build, from argumentation, neighborhood structures for deontic logic in which this notion of explanation can be characterised. Some direct complexity results are offered.

17:07
Logic, Accountability and Design
PRESENTER: Pedro Cabalar

ABSTRACT. Logic can and should play a pivotal role in helping to make AI systems explainable and ultimately more accountable. However, this puts a burden of responsibility on logic itself. Can we trust applied logical systems, many of them in competition with one another, to provide fair and rational conclusions leading to acceptable decisions and actions? We propose a principled approach to the design of applied logics and a preliminary set of desiderata to support accountability for logic-based systems in knowledge representation and reasoning.