10:40 | Deterministic Weighted Automata under Partial Observability ABSTRACT. Weighted automata are a basic tool for specification in quantitative verification, allowing one to express quantitative features of analysed systems such as resource consumption. Quantitative specification can be facilitated by automata learning, as there are classic results on Angluin-style learning of weighted automata. The existing work assumes perfect information about the values returned by the target weighted automaton. In assisted synthesis of a quantitative specification, knowledge of the exact values is a strong assumption and may be infeasible. In our work, we address this issue by introducing a new framework of partially-observable deterministic weighted automata, in which weighted automata return intervals containing the computed values of words instead of the exact values. We study the basic properties of this framework, with a particular focus on the challenges of actively learning partially-observable deterministic weighted automata.
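By way of illustration only, a partially-observable weighted automaton in the sense sketched above might be simulated as follows; the transition table, weights, and the fixed-width bucketing used as the observation are invented for this example and are not taken from the paper.

```python
# Toy sketch: a deterministic weighted automaton over {a, b} that sums
# transition weights along the unique run, but only reveals an interval
# containing the exact value (hypothetical observation function).

TRANSITIONS = {            # (state, letter) -> (next_state, weight)
    (0, "a"): (0, 0.5),
    (0, "b"): (1, 1.0),
    (1, "a"): (0, 0.25),
    (1, "b"): (1, 0.75),
}

def exact_value(word: str) -> float:
    """Exact value of a word: sum of transition weights along its run."""
    state, total = 0, 0.0
    for letter in word:
        state, weight = TRANSITIONS[(state, letter)]
        total += weight
    return total

def observed_interval(word: str, width: float = 1.0) -> tuple[float, float]:
    """Partial observation: an interval of fixed width containing the value."""
    value = exact_value(word)
    low = width * (value // width)
    return (low, low + width)

print(exact_value("abb"), observed_interval("abb"))   # 2.25 (2.0, 3.0)
```

An active learner in this setting would only ever see outputs like `(2.0, 3.0)`, which is the source of the learning challenges the talk addresses.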
10:58 | Data Graphs with Incomplete Information (and a Way to Complete Them) PRESENTER: Valentin Cassano ABSTRACT. We introduce a modal language for reasoning about data graphs with incomplete information. Such data graphs are formally represented as models in which data value functions are partial, so as to capture what is unknown. In this setting, we also allow for unknown data values to be learned. We provide a sound and strongly complete axiomatization.
11:16 | Splitting Techniques for Conditional Belief Bases in the Context of c-Representations ABSTRACT. Splitting belief bases is fundamental for efficient reasoning and for a better understanding of the interrelationships among knowledge entities. In this paper, we survey the most important splitting techniques for conditional belief bases in the context of c-representations, a specific class of ranking models that, as shown in recent papers, exhibit outstanding behavior not only with respect to belief base splitting. We provide a splitting hierarchy, in particular by proving that safe conditional syntax splittings and case splittings are so-called CSP-constraint splittings. We advance the state of knowledge about CSP-constraint splittings and present an algorithm for computing them.
11:34 | Belief Reconfiguration ABSTRACT. We study a generalisation of iterated belief revision in a setting where we keep track not only of the received information (in the form of messages) but also of the source of each message. We also suppose that we have a special source, the oracle, which never fails. That is, all of the information provided by the oracle is assumed to be correct. We then evaluate the reliability of each source by confronting its messages with the facts given by the oracle. In this case it is natural to give higher priority to messages coming from more reliable sources. We therefore re-order (reconfigure) the messages with respect to the reliability of their sources before performing the iterated belief revision. We study how to compute this reliability, and the properties of the corresponding reconfiguration operators.
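To make the pipeline concrete, here is a rough Python sketch under strong simplifying assumptions: messages are literals over propositional variables, a source's reliability is the fraction of its oracle-checkable messages that the oracle confirms, and "revision" is a naive last-wins update with the oracle applied at the end. The actual reconfiguration operators are defined in the paper, not here; all names and data below are invented.

```python
from collections import defaultdict

# Hypothetical toy data: the oracle's facts and source-tagged messages,
# each message asserting a truth value for one propositional variable.
ORACLE = {"p": True, "q": False}
MESSAGES = [("alice", "p", True), ("alice", "r", True),
            ("bob", "q", True),   ("bob", "r", False)]

def reliability(messages, oracle):
    """Fraction of a source's oracle-checkable messages the oracle confirms."""
    hits, checked = defaultdict(int), defaultdict(int)
    for source, var, value in messages:
        if var in oracle:
            checked[source] += 1
            hits[source] += (oracle[var] == value)
    return {s: hits[s] / checked[s] for s in checked}

def reconfigure_and_revise(messages, oracle):
    """Reorder messages by source reliability, then apply them last-wins;
    a stand-in for iterated revision, with the oracle's facts applied last."""
    rel = reliability(messages, oracle)
    ordered = sorted(messages, key=lambda m: rel.get(m[0], 0.0))
    beliefs = {}
    for _, var, value in ordered:
        beliefs[var] = value
    beliefs.update(oracle)          # the oracle never fails
    return rel, beliefs

# alice (reliability 1.0) overrides bob (0.0) on the disputed variable r.
print(reconfigure_and_revise(MESSAGES, ORACLE))
```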
11:52 | How Easy it is to Know How: An Upper Bound for the Satisfiability Problem PRESENTER: Valentin Cassano ABSTRACT. We investigate the complexity of the satisfiability problem for a modal logic expressing `knowing how' assertions, related to an agent's abilities to achieve a certain goal. We take one of the most standard semantics for this kind of logic, based on linear plans. Our main result is a proof that checking satisfiability of a `knowing how' formula can be done in $\Sigma_2^P$. The algorithm we present relies on eliminating nested modalities in a formula, and then performing multiple calls to a satisfiability-checking oracle for propositional logic.
12:10 | Non-standard Modalities in Paraconsistent Gödel Logic PRESENTER: Daniil Kozhemiachenko ABSTRACT. We introduce a paraconsistent expansion of Gödel logic with a De Morgan negation $\neg$ and modalities $\blacksquare$ and $\blacklozenge$. We dub the logic $\infoGsquare$ and equip it with Kripke semantics on frames with two (possibly fuzzy) relations: $R^+$ and $R^-$ (interpreted as the degree of trust in affirmations and denials by a given source) and valuations $v_1$ and $v_2$ (positive and negative support) ranging over $[0,1]$ and connected via $\neg$. We motivate the semantics of $\blacksquare\phi$ (resp., $\blacklozenge\phi$) as infima (suprema) of both positive and negative supports of $\phi$ in $R^+$- and $R^-$-accessible states, respectively. We then prove several instructive semantical properties of $\infoGsquare$. Finally, we devise a tableaux system for $\infoGsquare$ over finitely branching frames and establish the complexity of satisfiability and validity.
12:28 | Base-based Model Checking for Multi-Agent Only Believing ABSTRACT. We present a novel semantics for the language of multi-agent only believing exploiting belief bases, and show how to use it for automatically checking formulas of this language and of its dynamic extension with private belief expansion operators. We provide a PSPACE algorithm for model checking relying on a reduction to QBF, as well as an alternative dedicated algorithm relying on the exploration of the state space. We present an implementation of the QBF-based algorithm and some experimental results on computation time in a concrete example.
16:00 | Efficient Computation of Shap Explanation Scores for Neural Network Classifiers via Knowledge Compilation ABSTRACT. The use of Shap scores has become widespread in Explainable AI. However, their computation is in general intractable, in particular when done with a black-box classifier such as a neural network. Recent research has unveiled classes of open-box Boolean circuit classifiers for which Shap can be computed efficiently. We show how to transform binary neural networks into such circuits for efficient Shap computation, using logic-based knowledge compilation techniques. The performance gain is huge, as our experiments show.
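For context, the Shap score of a feature is a weighted average of its marginal contributions over all coalitions of the remaining features. The brute-force computation below, for a made-up three-feature Boolean classifier under the uniform product distribution, shows the exponential cost that motivates the circuit-based approach; it is a textbook-style sketch, not the paper's algorithm.

```python
from itertools import combinations, product
from math import factorial

def expected_value(f, n, instance, fixed):
    """E[f(X)] when features in `fixed` are pinned to the instance's values
    and the remaining features are i.i.d. uniform over {0, 1}."""
    free = [i for i in range(n) if i not in fixed]
    total = 0.0
    for bits in product([0, 1], repeat=len(free)):
        x = list(instance)
        for j, b in zip(free, bits):
            x[j] = b
        total += f(x)
    return total / (2 ** len(free))

def shap_score(f, n, instance, i):
    """Exact Shap score of feature i: weighted average marginal contribution
    over all coalitions S of the other features (exponential in n)."""
    others = [j for j in range(n) if j != i]
    score = 0.0
    for size in range(len(others) + 1):
        for S in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            with_i = expected_value(f, n, instance, set(S) | {i})
            without_i = expected_value(f, n, instance, set(S))
            score += weight * (with_i - without_i)
    return score

# Made-up classifier x0 AND (x1 OR x2), explained on the instance (1, 1, 0).
# The three scores sum to f(instance) minus the expected prediction (5/8 here).
f = lambda x: int(x[0] and (x[1] or x[2]))
print([shap_score(f, 3, (1, 1, 0), i) for i in range(3)])
```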
16:18 | Short Boolean Formulas as Explanations in Practice ABSTRACT. We investigate explainability via short Boolean formulas in the data model with unary relations. As an explanation of length k, we take a Boolean formula of length k that minimizes the error with respect to the target attribute to be explained. We first provide novel quantitative bounds for the expected error in this scenario. We then demonstrate how the setting works in practice by studying three concrete data sets. In each case, we calculate explanation formulas of different lengths using an encoding in Answer Set Programming. The most accurate formulas we obtain achieve errors similar to other methods on the same data sets. However, due to overfitting, these formulas are not necessarily ideal explanations, so we use cross-validation to identify a suitable length for explanations. By restricting to shorter formulas, we obtain explanations that avoid overfitting but are still reasonably accurate and, importantly, human-interpretable.
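The paper carries out this search with an Answer Set Programming encoding; the brute-force Python sketch below, over an invented toy data set, only illustrates the underlying objective: among Boolean formulas of bounded length over the unary attributes, pick one minimizing the number of data points on which it disagrees with the target attribute.

```python
from itertools import product

# Hypothetical data: rows over Boolean attributes, plus a Boolean target.
ROWS = [{"a": 1, "b": 0, "c": 1, "target": 1},
        {"a": 1, "b": 1, "c": 0, "target": 1},
        {"a": 0, "b": 1, "c": 1, "target": 0},
        {"a": 0, "b": 0, "c": 0, "target": 0}]
ATTRS = ["a", "b", "c"]

def formulas(length):
    """All formulas with exactly `length` symbols (atoms and connectives),
    as (readable string, evaluation function) pairs."""
    if length == 1:
        return [(x, (lambda row, x=x: row[x])) for x in ATTRS]
    out = []
    for s, f in formulas(length - 1):                       # negation
        out.append((f"~{s}", lambda row, f=f: 1 - f(row)))
    for k in range(1, length - 1):                          # binary connectives
        for (s1, f1), (s2, f2) in product(formulas(k), formulas(length - 1 - k)):
            out.append((f"({s1} & {s2})", lambda row, f1=f1, f2=f2: f1(row) & f2(row)))
            out.append((f"({s1} | {s2})", lambda row, f1=f1, f2=f2: f1(row) | f2(row)))
    return out

def best_explanation(max_length):
    """Formula of length <= max_length minimizing disagreement with the target."""
    candidates = [phi for k in range(1, max_length + 1) for phi in formulas(k)]
    error = lambda phi: sum(phi[1](row) != row["target"] for row in ROWS)
    return min(candidates, key=error)

print(best_explanation(3)[0])   # on this toy data the single atom "a" suffices
```

Longer formulas drive the error on the given rows down but, as the abstract notes, tend to overfit; bounding the length is what keeps the explanation readable.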
16:36 | Contrastive Explanations for Answer-Set Programs ABSTRACT. Answer-Set Programming (ASP) is a popular declarative reasoning and problem-solving formalism. Due to the increasing interest in explainability, several explanation approaches have been developed for ASP. However, while those formalisms are correct and interesting in their own right, most are more technical and less oriented towards philosophical or social concepts of explanation. In this work, we study the notion of contrastive explanation, i.e., answering questions of the form ``Why a instead of b?'', in the context of ASP. In particular, we are interested in answering why an atom is included in an answer set whereas another is not. Contrastive explainability has recently become popular due to its strong support from the philosophical, cognitive and social sciences, and its apparent ability to provide explanations that are concise and intuitive for humans. We formally define contrastive explanations for ASP based on counterfactual reasoning about programs. Furthermore, we demonstrate the usefulness of the concept on example applications and give some complexity results. The latter also provide a guideline as to how the explanations can be computed in practice.
16:54 | Stable Normative Explanations: From Argumentation to Deontic Logic ABSTRACT. This paper examines how a notion of stable explanation developed elsewhere in Defeasible Logic can be expressed in the context of formal argumentation. With this done, we discuss the deontic meaning of this reconstruction and show how to build, from argumentation, neighborhood structures for deontic logic in which this notion of explanation can be characterised. Some direct complexity results are offered.
17:07 | Logic, Accountability and Design PRESENTER: Pedro Cabalar ABSTRACT. Logic can and should play a pivotal role in helping to make AI systems explainable and ultimately more accountable. However, this puts a burden of responsibility on logic itself. Can we trust applied logical systems, many of them in competition with one another, to provide fair and rational conclusions leading to acceptable decisions and actions? We propose a principled approach to the design of applied logics and a preliminary set of desiderata to support accountability for logic-based systems in knowledge representation and reasoning.