
Irrelevant Explanations: a Logical Formalization and a Case Study

EasyChair Preprint no. 13141

10 pages
Date: April 30, 2024


Explaining the behavior of AI-based tools, whose results may be unexpected even to experts, has become a major demand from society and a major concern of AI practitioners and theoreticians. In this position paper we make two points: (1) irrelevance is more amenable to a logical formalization than relevance; (2) since effective explanations must take into account both the context and the receiver of the explanation (called the explainee), so should the definition of irrelevance. We propose a general logical framework characterizing context-aware and receiver-aware irrelevance, and provide a case study on an existing Semantic-Web-based tool that prunes irrelevant parts of an explanation.

Keyphrases: logic, Semantic Web, XAI
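The abstract's central idea, pruning the parts of an explanation that are irrelevant for a given context and explainee, can be illustrated with a small toy sketch. This is not the paper's logical formalization or its Semantic Web tool; all names here (`Triple`, `prune`, `is_irrelevant`, the example predicates) are hypothetical, chosen only to show how an irrelevance judgment that depends on the explainee might filter an explanation expressed as a set of statements.

```python
# Toy illustration (NOT the paper's method): an explanation is modeled as a set
# of subject-predicate-object statements, and irrelevance is a judgment that
# depends on the explainee's interests.

from typing import Callable, NamedTuple, Set


class Triple(NamedTuple):
    subject: str
    predicate: str
    obj: str


def prune(explanation: Set[Triple],
          is_irrelevant: Callable[[Triple], bool]) -> Set[Triple]:
    """Keep only the statements NOT judged irrelevant."""
    return {t for t in explanation if not is_irrelevant(t)}


# Hypothetical explainee-dependent judgment: this explainee does not care
# about production metadata in a movie-recommendation explanation.
explainee_ignores = {"producedBy", "distributedBy"}


def is_irrelevant(t: Triple) -> bool:
    return t.predicate in explainee_ignores


explanation = {
    Triple("Movie1", "hasGenre", "SciFi"),
    Triple("Movie1", "producedBy", "StudioX"),
    Triple("Movie1", "directedBy", "DirectorY"),
}

pruned = prune(explanation, is_irrelevant)
# The 'producedBy' statement is pruned; genre and director remain.
```

A different explainee (say, an industry analyst) would supply a different `is_irrelevant`, yielding a different pruned explanation from the same underlying statements, which is the receiver-aware aspect the abstract emphasizes.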

BibTeX entry
BibTeX does not have the right entry type for preprints. This is a hack for producing the correct reference:
@misc{EasyChair:13141,
  author = {Simona Colucci and Francesco M. Donini and Tommaso Di Noia and Claudio Pomo and Eugenio Di Sciascio},
  title = {Irrelevant Explanations: a Logical Formalization and a Case Study},
  howpublished = {EasyChair Preprint no. 13141},
  year = {EasyChair, 2024}}