| 11:00 | Short and Simple Explanations for Datalog Derivations ABSTRACT. Simple and short query explanations that detail how a query result is obtained from an input dataset are a powerful way to gain insight into query results. Unfortunately, simple and short explanations for recursive queries are non-trivial: results of recursive queries typically have a huge set of equally-valid explanations. Toward providing simple and short query explanations, we believe there is a strong need for an improved understanding of explanations of recursive queries. In this paper, we present a framework for such an understanding by studying explanations of facts derived using Datalog derivation rules. To enable simple explanations, we present a standard tree-based formal representation of explanations and we formalize four conceptually distinct types of simple explanations based on their tree structure. Next, we show how to effectively compute two of these types, which opens the door for practical algorithms to explain Datalog query results. Finally, we prove that computing the other two types involves solving NP-complete decision problems and is hence inherently hard. |
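The phenomenon the abstract describes can be made concrete with a toy sketch (the program, facts, and functions below are illustrative, not from the paper): even for a three-edge graph and the standard transitive-closure program, a derived fact such as `path(a, c)` already has several equally-valid derivation trees, and "simple" explanations correspond to, e.g., picking a size-minimal tree.

```python
# Toy Datalog program (illustrative, not the paper's example):
#   path(X, Y) :- edge(X, Y).
#   path(X, Y) :- edge(X, Z), path(Z, Y).
edges = {("a", "b"), ("b", "c"), ("a", "c")}

def derivations(x, y, seen=frozenset()):
    """Yield all derivation trees for path(x, y) as nested tuples."""
    if (x, y) in edges:  # base rule: a single edge is a derivation
        yield ("path", x, y, ("edge", x, y))
    for (u, z) in edges:  # recursive rule: edge(x, z) followed by path(z, y)
        if u == x and (x, z) not in seen:
            for sub in derivations(z, y, seen | {(x, z)}):
                yield ("path", x, y, ("edge", x, z), sub)

def size(tree):
    """Number of rule applications in a derivation tree."""
    return 1 + sum(size(t) for t in tree if isinstance(t, tuple))

trees = list(derivations("a", "c"))       # two equally-valid explanations
shortest = min(trees, key=size)           # a size-minimal explanation
```

Here `path(a, c)` is explained either directly by `edge(a, c)` (size 2) or via `edge(a, b)` and `path(b, c)` (size 4); on realistic recursive queries the number of such trees explodes, which is exactly why the paper's structural restrictions matter.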
| 11:45 | Analyzing the behavior of database applications through size descriptions PRESENTER: Emmanuel Waller ABSTRACT. We propose a description language capturing a simple but central aspect of the possible “lives” (the behavior) of a database application, namely the ability for the relations to grow, shrink, or keep the same size. We first consider arbitrary behaviors and investigate the basic properties of such descriptions. We characterize consistency, redundancy, and subsumption, and show that every behavior has a unique minimal complete description. We then consider the problem of computing the minimal complete description of a given database application. We model such applications as collections of procedures, specified by update programs based on relational algebra. The general problem is undecidable, even for an application consisting of a single procedure with a single update language statement. We also identify decidable cases and provide partial complexity characterizations. |
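The flavor of such size descriptions can be sketched in a few lines (the encoding below is ours, not the paper's formal language): describe each relation's possible size change over one procedure call as a subset of {grow, shrink, same}, and read subsumption as componentwise set inclusion.

```python
# Illustrative encoding (not the paper's syntax): a description maps each
# relation name to a subset of {'+', '-', '0'} = {grow, shrink, same size}.

def subsumes(d1, d2):
    """d1 subsumes d2 if every behavior allowed by d2 is allowed by d1."""
    return all(d2[r] <= d1[r] for r in d2)

# Hypothetical descriptions of two procedures over relations R and S:
d_general = {"R": {"+", "0"}, "S": {"+", "-", "0"}}   # R may grow or stay put
d_specific = {"R": {"+"}, "S": {"0"}}                 # R grows, S unchanged
```

Under this toy reading, `d_general` subsumes `d_specific` but not conversely; the paper's harder question is computing such descriptions from actual relational-algebra update programs, which is undecidable in general.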
| 14:00 | Axiomatizing variants of approximate inclusion ABSTRACT. We study two approximate variants of inclusion atoms and examine the axiomatization and computational complexity of their implication problems. The approximate variants allow for some imperfection in the team (corresponding to a unirelational database), and differ in how this degree is measured. One considers the error relative to the size of the team, while the other applies a fixed threshold independent of size. We obtain complete axiomatizations for both under some arity restrictions. In particular, restricted to unary atoms, the implication problem for each approximate variant is decidable in PTIME. |
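The relative variant of approximate inclusion can be illustrated with a small sketch (the function and table below are our illustration, not the paper's formal semantics): for an inclusion atom x ⊆ y over a team, measure the fraction of rows whose x-value never occurs as a y-value.

```python
# Illustrative error measure (not the paper's definition): a team is a list
# of assignments (rows); the atom x ⊆ y holds approximately with relative
# error e if at most a fraction e of the rows violate it.

def inclusion_error(team, x, y):
    """Fraction of rows whose value for x does not occur as a value for y."""
    y_values = {row[y] for row in team}
    violating = sum(1 for row in team if row[x] not in y_values)
    return violating / len(team)

team = [
    {"x": 1, "y": 2},
    {"x": 2, "y": 3},
    {"x": 5, "y": 1},  # x-value 5 never occurs as a y-value: one violation
]
```

Here the atom x ⊆ y holds with relative error 1/3; the paper's second variant would instead compare the absolute violation count against a fixed, size-independent threshold.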
| 14:13 | On Translating Epistemic Operators in a Logic of Awareness ABSTRACT. Awareness-Based Indistinguishability Logic (henceforth, AIL) extends Epistemic Logic with the notion of awareness, distinguishing explicit knowledge from implicit knowledge. In this framework, each of these notions is represented by a modal operator. On the other hand, HMS models, developed in the economics literature, also provide a formalization of these notions. Nevertheless, the behavior of the epistemic operators of AIL within HMS models has yet to be explored. In this paper, we define a transformation of an AIL model into an HMS model and then prove that a translation between fragments of the language of AIL preserves truth under this transformation. As a result, we clarify the semantic role within HMS models of the epistemic operator in AIL that is induced by awareness and is essential to defining explicit knowledge. Furthermore, we demonstrate the differences in the implicit knowledge captured by AIL and HMS models. This work lays the groundwork for a comparative analysis between the two model classes. |
| 14:26 | Intermediate Knowledge Problem PRESENTER: Mattias Brännström ABSTRACT. We examine the Intermediate Knowledge Problem (IKP): the challenge of maintaining coherent, goal-aligned intermediate representations during multi-step reasoning. The issue is particularly acute in automated knowledge elicitation, where analysis relies on provisional and revisable artifacts. Using Grounded Theory as a running example, we illustrate why existing symbolic systems and large language model based pipelines provide limited support for such intermediate knowledge, identifying a central obstacle to automating knowledge elicitation and engineering methods. |
| 14:39 | SMT Algorithm Selection with High-Level Natural-Language Descriptions PRESENTER: Florin Manea ABSTRACT. Machine-learning-based algorithm selection has been shown to improve SMT solver performance by exploiting solver complementarity across diverse problem families. Existing SMT algorithm selectors primarily rely on handcrafted syntactic features to represent problem instances for learning models. In this work, we explore whether high-level natural-language descriptions of SMT instances can provide additional useful information. We present SMT-Select, an SMT algorithm selection framework that combines traditional syntactic features with semantic embeddings derived from natural-language descriptions using pretrained transformer models, and incorporates an improved implementation of the algorithm selection pipeline compared to the state-of-the-art selector MachSMT. Evaluated on SMT-COMP 2024 benchmarks across 9 logics, SMT-Select achieves improved performance in 5 logics when description-based features are added, while performance noticeably degrades in one logic, suggesting that effectiveness depends on the underlying logic and the quality of available descriptions. We further outline future work on using LLM-based agents to generate higher-quality SMT instance descriptions, supported by preliminary demos. |
| 14:52 | Towards a VLM-based Foundation for Generalised Neurosymbolic Visual Commonsense ABSTRACT. Commonsense visual sensemaking requires robust mechanisms to extract scene elements and other visual features from perceived imagery, as well as rich conceptual commonsense knowledge and reasoning to interpret the perceived interactive dynamics. In this context, we position ongoing work that integrates Vision Language Models (VLMs) with Answer Set Programming (ASP) based reasoning about (dynamic) spatial configurations, using VLMs as a neural foundation for symbolic modeling of dynamic scene structures. We sketch preliminary results highlighting practical usability by applying the approach to the task of Visual Question Answering (VQA) with the STRIDE-QA driving dataset. |
| 15:05 | Planning and Temporal Planning by Qualitative Temporal Reasoning ABSTRACT. In this paper we bring together qualitative temporal reasoning and planning in order to pave the way for a new approach to planning and temporal planning. Both fields are concerned with arrangement problems of temporal intervals. We present a method to construct qualitative temporal reasoning tasks from planning problems, effectively reducing a class of planning tasks to qualitative temporal reasoning. The construction proposed in this paper enables additional temporal planning constraints to be considered, thereby making the method applicable to both classical planning and temporal planning. |
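The qualitative machinery the abstract builds on can be shown in miniature (a toy point-algebra fragment, not the paper's construction): relations between time points are sets of basic relations from {<, =, >}, and entailments are propagated by composing them.

```python
# Minimal point-algebra sketch (illustrative; Allen's interval algebra,
# often used in qualitative temporal reasoning, works analogously with
# 13 basic relations): composition table for the basic relations <, =, >.
COMP = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
}

def compose(r1, r2):
    """Compose two (possibly disjunctive) qualitative relations."""
    return set().union(*(COMP[(a, b)] for a in r1 for b in r2))

# "Action A ends before B starts" and "B ends before the goal" entail
# that A ends before the goal:
entailed = compose({"<"}, {"<"})
```

Reductions like the one the paper proposes encode planning constraints (orderings of action intervals, deadlines) as such qualitative networks, so that constraint propagation and consistency checking do the work of a planner's ordering decisions.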
| 15:18 | Towards Neuro-Symbolic Classification of Abrasive Wear in Scanning Electron Microscopy ABSTRACT. The analysis of abrasive wear is central to sustainable material design and tool development, yet current practice relies on manual inspection of scanning electron microscopy (SEM) images, limiting scalability and reproducibility. We present early work towards a neuro-symbolic approach that integrates convolutional neural networks for SEM image segmentation with an expert-elicited taxonomy of wear features encoded in Answer Set Programming. A curated dataset of 400 laboratory and field SEM images with expert-labeled annotations supports interpretable detection of wear mechanisms. This approach aims to reduce the dependency on large datasets and increase interpretability in automated abrasive wear analysis. The contribution opens the way for scalable and transparent decision processes in tribology, with implications for efficient materials development and extended service life of industrial tools. |