FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
XLOKR ON SUNDAY, JULY 31ST

08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 1K

XLoKR Session 1

Location: Ullmann 202
09:00
Invited Talk: Explaining and Arguing with Facts and Rules

ABSTRACT. Many existing techniques in Explainable AI, perhaps most of them, are based on relatively simple statistical summaries that capture the behavior of large classifiers. However, explanations often require more refined reasoning, sometimes in contexts that demand quick responses. To illustrate such scenarios, we start this talk by looking at explanations within recommendation systems, in particular examining the value of knowledge graphs in producing arguments for and against recommended items. We then move to techniques that explain the links stored in a knowledge graph themselves, discussing ways to combine data-driven embeddings with rule-based reasoning. And from there we move to explanations that rely on arguments extracted from facts and rules, sometimes obtained through text processing. We examine formalisms that allow for probabilistic argumentation in the presence of negation and cyclic defeating paths, as those seem to offer the necessary foundation for such explanation settings. The connection with various logical formalisms can then be explored to produce results on complexity and expressivity.

10:00
Using Abstraction for Interpretable Robot Programs in Stochastic Domains
PRESENTER: Till Hofmann

ABSTRACT. A robot's actions are inherently stochastic, as its sensors are noisy and its actions do not always have the intended effects. For this reason, the agent language Golog has been extended to models with degrees of belief and stochastic actions. While this allows more precise robot models, the resulting programs are much harder to comprehend, because they need to deal with the noise, e.g., by looping until some desired state has been reached with certainty, and because the resulting action traces consist of a large number of actions cluttered with sensor noise. To alleviate these issues, we propose to use abstraction. We define a high-level and nonstochastic model of the robot and then map the high-level model into the lower-level stochastic model. The resulting programs are much easier to understand, often do not require belief operators or loops, and produce much shorter action traces.
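
To make the contrast concrete, the following minimal Python sketch (illustrative only; the paper works with Golog programs, and all actions, probabilities, and the belief update below are made up) contrasts a stochastic low-level program that loops and senses until it is sufficiently certain with its nonstochastic high-level abstraction:

    import random

    P_SUCCESS = 0.8   # assumed probability that a single grasp succeeds
    P_SENSE = 0.9     # assumed accuracy of the "am I holding it?" sensor

    def grasp(holding):
        # stochastic action: may fail, never drops an object already held
        return holding or random.random() < P_SUCCESS

    def sense(holding):
        # noisy sensor: reports the truth with probability P_SENSE
        return holding if random.random() < P_SENSE else not holding

    def low_level_pick_up(threshold=0.95):
        """Stochastic low-level program: loop until the belief that the
        object is held exceeds the threshold; returns the action trace."""
        holding, belief, trace = False, 0.0, []
        while belief < threshold:
            holding = grasp(holding)
            belief = belief + (1 - belief) * P_SUCCESS        # update for the action
            obs = sense(holding)
            if obs:                                           # Bayesian update for the observation
                belief = P_SENSE * belief / (P_SENSE * belief + (1 - P_SENSE) * (1 - belief))
            else:
                belief = (1 - P_SENSE) * belief / ((1 - P_SENSE) * belief + P_SENSE * (1 - belief))
            trace += ["grasp", "sense(%s)" % obs]
        return trace

    def high_level_pick_up():
        """Nonstochastic abstraction: a single deterministic action."""
        return ["pick_up"]

    print(high_level_pick_up(), "abstracts", low_level_pick_up())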

10:30-11:00 Coffee Break
11:00-12:30 Session 10P

XLoKR Session 2

Location: Ullmann 202
11:00
Explaining Description Logic Entailments in Practice with Evee and Evonne
PRESENTER: Stefan Borgwardt

ABSTRACT. When working with description logic ontologies, understanding entailments derived by a description logic reasoner is not always straightforward. So far, the standard ontology editor Protégé offers two services to help: (black-box) justifications for OWL 2 DL ontologies, and (glass-box) proofs for lightweight OWL EL ontologies, where the latter exploits the proof facilities of the reasoner Elk. However, justifications are often insufficient for explaining inferences, and there is only little tool support for explaining inferences in more expressive DLs. To address these issues, we present the two sisters Evee and Evonne, which provide proof-based explanations for more expressive DLs. Evee consists of a Java library for generating proofs (Evee-libs) and a plugin for Protégé to show these proofs (Evee-protege). Evonne is a more advanced proof visualisation tool that uses Evee-libs.

11:20
Finding Good Proofs for Answers to Conjunctive Queries Mediated by Lightweight Ontologies (Extended Abstract)
PRESENTER: Stefan Borgwardt

ABSTRACT. In ontology-mediated query answering, access to incomplete data sources is mediated by a conceptual layer constituted by an ontology. To correctly compute answers to queries, it is necessary to perform complex reasoning over the constraints expressed by the ontology. In the literature, there exists a multitude of techniques for incorporating the ontological knowledge into queries. However, few of these approaches were designed with comprehensibility of the query answers in mind. In this article, we try to bridge these two qualities by adapting a proof framework, originally applied to axiom entailment, to conjunctive query answering. We investigate the data and combined complexity of determining the existence of a proof below a given quality threshold, which can be measured in different ways. By distinguishing various parameters such as the shape of a query, we obtain an overview of the complexity of this problem for the lightweight ontology languages DL-Lite_R and EL, and also take a brief look at temporal query answering.

This is an extended abstract of a paper accepted at the 35th International Workshop on Description Logics.

11:40
An API for DL Abduction Solvers

ABSTRACT. We propose a unified API for integrating different DL abduction solvers into applications, similarly to how the OWL API does this for deductive OWL/DL reasoners.
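
As an illustration of what a uniform solver interface can look like, here is a minimal sketch in Python; all class and method names are hypothetical, and the paper's actual API (modelled after the Java-based OWL API) is not reproduced here:

    from abc import ABC, abstractmethod
    from typing import FrozenSet, Iterator

    Axiom = str   # illustrative stand-in for an OWL/DL axiom object

    class AbductionSolver(ABC):
        """Hypothetical uniform interface an application could program
        against, independently of the concrete DL abduction solver."""

        @abstractmethod
        def load_ontology(self, axioms: FrozenSet[Axiom]) -> None:
            """Set the background knowledge base."""

        @abstractmethod
        def set_abducibles(self, signature: FrozenSet[str]) -> None:
            """Restrict hypotheses to a given signature."""

        @abstractmethod
        def explanations(self, observation: Axiom) -> Iterator[FrozenSet[Axiom]]:
            """Enumerate hypotheses H such that ontology + H entails the observation."""

    def explain(solver: AbductionSolver, kb, observation, limit=3):
        """Application code written once, reusable with any solver plugged in."""
        solver.load_ontology(kb)
        for i, hypothesis in enumerate(solver.explanations(observation)):
            if i >= limit:
                break
            print(hypothesis)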

12:00
Modular Provenance in Multi-Context Systems
PRESENTER: Matthias Knorr

ABSTRACT. A rapidly increasing amount of data, information and knowledge is becoming available on the Web, often written in different formats and languages, adhering to standards driven by the World Wide Web Consortium initiative. Taking advantage of all this heterogeneous knowledge requires its integration for more sophisticated reasoning services and applications. To fully leverage the potential of such systems, their inferences should be accompanied by justifications that allow a user to understand a proposed decision or recommendation, in particular for critical systems (healthcare, law, finance, etc.). However, determining such justifications has commonly only been considered for a single formalism, such as relational databases, description logic ontologies, or declarative rule languages. In this paper, we give an overview of the first approach for providing provenance for heterogeneous knowledge bases, building on the general framework of multi-context systems: an abstract but very expressive formalism for representing knowledge bases written in different formalisms, together with the flow of information between them.

12:30-14:00 Lunch Break

Lunches will be held in Taub hall and in The Grand Water Research Institute.

14:00-15:30 Session 14R

XLoKR Session 3

Location: Ullmann 202
14:00
On interactive explanations as reasoning

ABSTRACT. Recent work shows consistency issues with explanations: methods generate local explanations that seem reasonable instance-wise, but that are inconsistent across instances. This suggests not only that instance-wise explanations can be unreliable, but mainly that, when interacting with a system via multiple inputs, a user may actually lose confidence in the system. To better analyse this issue, in this work we treat explanations as objects that can be subject to reasoning and present a formal model of the interactive scenario between user and system, via sequences of inputs and outputs. We argue that explanations can be thought of in terms of entailment, and that this entailment should be non-monotonic. This allows us: 1) to resolve some of the considered inconsistencies in explanations; and 2) to consider properties from the non-monotonic reasoning literature and discuss their desirability, gaining more insight into the interactive explanation scenario.

14:20
Should counterfactual explanations always be data instances?
PRESENTER: Francesca Toni

ABSTRACT. Counterfactual explanations (CEs) are an increasingly popular way of explaining machine learning classifiers. Predominantly, they amount to data instances pointing to potential changes to the inputs that would lead to alternative outputs. In this position paper we question the widespread assumption that CEs should always be data instances, and argue instead that in some cases they may be better understood in terms of special types of relations between input features and classification variables. We illustrate how a special type of these relations, amounting to critical influences, can characterise and guide the search for data instances deemed suitable as CEs. These relations also provide compact indications of which input features, rather than their specific values in data instances, have counterfactual value.

14:40
Dialogue-Based Explanations of Reasoning in Rule-based Systems
PRESENTER: Joe Collenette

ABSTRACT. The recent focus on explainable artificial intelligence has been driven by a perception that complex statistical models are opaque to users. Rule-based systems, in contrast, have often been presented as self-explanatory: all the system needs to do is provide a log of its reasoning process and its operations are clear. We believe that such logs are often difficult for users to understand, in part because of their size and complexity. We propose dialogue as an explanatory mechanism for rule-based AI systems, allowing users and systems to co-create an explanation that focuses on the user's particular interests or concerns. Our hypothesis is that when a system makes a deduction that was, in some way, unexpected by the user, then locating the source of the disagreement or misunderstanding is best achieved through a collaborative dialogue process that allows the participants to gradually isolate the cause. We have implemented a system with this mechanism and performed a user evaluation showing that in many cases a dialogue is preferred to a reasoning log presented as a tree. These results provide further support for the hypothesis that dialogue can be a good explanatory mechanism for rule-based AI systems.
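
As a rough illustration of the proposed mechanism (the data structures and prompts below are hypothetical, not the authors' implementation), a dialogue can walk the reasoning log top-down, descending only into the premises the user disputes:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        fact: str
        rule: str = ""                     # rule that derived this fact ("" = given as input)
        premises: List["Node"] = field(default_factory=list)

    def dialogue(node: Node) -> None:
        print("'%s' was derived by: %s" % (node.fact, node.rule or "given as input"))
        for p in node.premises:
            if input("Do you accept '%s'? [y/n] " % p.fact).strip().lower() == "n":
                dialogue(p)                # drill down only where there is disagreement
                return
        print("All premises accepted; the disagreement lies in the rule:", node.rule)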

15:00
Clustering-Based Approaches for Symbolic Knowledge Extraction
PRESENTER: Roberta Calegari

ABSTRACT. Opaque models from the machine learning world are increasingly exploited in a wide variety of application areas. These models, acting as black boxes (BB) from the human perspective, cannot be fully trusted in critical applications unless there exists a method to extract symbolic, human-readable knowledge from them.

In this paper we analyse a recurrent design adopted by symbolic knowledge extractors for BB regressors, namely the creation of rules associated with hypercubic regions of the input space. We argue that this kind of partitioning may lead to suboptimal solutions when the data set at hand is high-dimensional or does not satisfy symmetric constraints. We then propose a (deep) clustering-based approach, to be performed before symbolic knowledge extraction, to achieve better performance with data sets of any kind.
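
To make the proposed workflow concrete, the following minimal Python sketch uses plain k-means as a stand-in for the (deep) clustering and a shallow decision tree as a stand-in for the symbolic knowledge extractor; the data and all concrete choices are illustrative, not the paper's algorithms:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(500, 3))            # toy inputs
    y = np.where(X[:, 0] > 0, X[:, 1], -X[:, 2])     # stand-in for the BB regressor's predictions

    # 1) partition the data by clustering instead of one global hypercubic grid
    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    # 2) extract readable rules separately within each cluster
    for c in np.unique(clusters):
        mask = clusters == c
        tree = DecisionTreeRegressor(max_depth=2).fit(X[mask], y[mask])
        print("--- rules for cluster", c, "---")
        print(export_text(tree, feature_names=["x0", "x1", "x2"]))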

15:30-16:00 Coffee Break
16:00-18:00 Session 19P

XLoKR Session 4

Location: Ullmann 202
16:00
Invited Talk: Explanation Generation in Applications of Answer Set Programming

ABSTRACT. TBA

17:00
Explaining Soft-Goal Conflicts through Constraint Relaxations
PRESENTER: Rebecca Eifler

ABSTRACT. Recent work suggests explaining trade-offs between soft goals in terms of their conflicts, i.e., minimal unsolvable soft-goal subsets. But this does not explain the conflicts themselves: why can a given set of soft goals not be jointly achieved? Here we approach that question in terms of the underlying constraints on plans in the task at hand, namely resource availability and time windows. In this context, a natural form of explanation for a soft-goal conflict is a minimal constraint relaxation under which the conflict disappears (“if the deadline was 1 hour later, it would work”). We explore algorithms for computing such explanations. A baseline is to simply loop over all relaxed tasks and compute the conflicts for each separately. We improve over this with two algorithms that leverage information (conflicts, reachable states) across relaxed tasks. We show that these algorithms can exponentially outperform the baseline in theory, and we run experiments confirming that advantage in practice.
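
To make the baseline concrete, the following minimal Python sketch enumerates candidate relaxations and recomputes the minimal unsolvable soft-goal subsets for each one from scratch. The goals, costs, and solvability test are toy stand-ins for the planning machinery, not the authors' implementation:

    from itertools import combinations
    from typing import Callable, FrozenSet, Set

    Goals = FrozenSet[str]

    def minimal_unsolvable_subsets(goals: Set[str],
                                   solvable: Callable[[Goals], bool]) -> Set[Goals]:
        """All subset-minimal sets of soft goals that cannot be achieved jointly."""
        conflicts: Set[Goals] = set()
        for k in range(1, len(goals) + 1):
            for subset in map(frozenset, combinations(sorted(goals), k)):
                if not solvable(subset) and not any(c <= subset for c in conflicts):
                    conflicts.add(subset)
        return conflicts

    def baseline_explanations(goals, relaxations, solvable_under):
        """Baseline: recompute all conflicts from scratch for every candidate relaxation."""
        return {r: minimal_unsolvable_subsets(goals, lambda gs: solvable_under(gs, r))
                for r in relaxations}

    # Toy usage: goal costs against a resource budget; a relaxation adds extra budget.
    goals = {"g1", "g2", "g3"}
    cost = {"g1": 2, "g2": 2, "g3": 3}
    print(baseline_explanations(goals, relaxations=[0, 1, 2],
          solvable_under=lambda gs, extra: sum(cost[g] for g in gs) <= 4 + extra))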

17:20
Stepwise Explanations of Unsatisfiable Constraint Programs (Extended abstract)
PRESENTER: Ignace Bleukx

ABSTRACT. Unsatisfiable constraint satisfaction problems (CSPs) are a common sight when formulating a constraint model. For the modeller, only a handful of tools are available to explain why a model is UNSAT, making it hard to understand the cause of the unsatisfiability and to debug the model. Existing tools rely on extracting a subset of constraints which is in itself unsatisfiable (a MUS). Although some obvious bugs in the model can be detected easily using such techniques, in other cases the extracted MUS involves too many constraints to understand their dynamics, and the cause of unsatisfiability may not be made clear. For explaining solutions of Boolean satisfaction problems, recent years have seen the emergence of stepwise explanations. Such explanation sequences provide justifications for Boolean facts in small steps a user can understand. In this work, we aim to generate a sequence that derives the unsatisfiability of a constraint program. This requires several orthogonal extensions of previous work. In this paper, we extend the notion of explainable facts to integer domains, introduce the concept of simple and non-simple constraints and, lastly, propose an algorithm which can be used to derive the unsatisfiability of a constraint program.
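
The following toy Python sketch conveys the flavour of such a stepwise derivation of unsatisfiability over integer domains: each step cites a single constraint, together with the domain facts derived so far, to shrink some variable's domain, until a domain becomes empty. The encoding and algorithm are illustrative stand-ins, not the ones proposed in the paper:

    # Toy stepwise derivation of unsatisfiability: each step cites one
    # constraint plus previously derived domain facts to shrink a domain,
    # until some domain becomes empty.
    domains = {"x": set(range(0, 10)), "y": set(range(0, 10))}

    # each constraint: (name, constrained variable, predicate over (value, domains))
    constraints = [
        ("x >= 5",     "x", lambda v, d: v >= 5),
        ("x + y <= 6", "y", lambda v, d: any(v + x <= 6 for x in d["x"])),
        ("y >= 3",     "y", lambda v, d: v >= 3),
    ]

    step = 1
    while all(domains.values()):
        progress = False
        for name, var, pred in constraints:
            new_dom = {v for v in domains[var] if pred(v, domains)}
            if new_dom != domains[var]:
                print("step %d: using [%s], restrict %s to %s" % (step, name, var, sorted(new_dom)))
                domains[var] = new_dom
                step += 1
                progress = True
                break                      # one small, understandable step at a time
        if not progress:
            break

    if not all(domains.values()):
        print("a domain became empty: the model is unsatisfiable")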