FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
PROGRAM FOR SUNDAY, JULY 31ST

08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 1A (BCORE)
Location: Ullmann 308
09:00
Tutorial: Belief Change, Ontology Repair and Evolution (Part 1)

ABSTRACT. Belief Revision deals with accommodating new information in a knowledge base, where potential inconsistencies may arise. Several solutions have been proposed, turning it into an active field of research since the '80s. Theoretical results were established in classical propositional logic and became the standard in the area, with rationality postulates, mathematical constructions and representation theorems. More recently, results have been adapted to different knowledge representation formalisms such as Horn Logic and Description Logics.

In this tutorial, I will introduce Belief Revision, starting from the seminal AGM paradigm and giving an overview of the area in the last 35 years. In the second part, I will focus on applying Belief Revision to Description Logics in general and the relation between Revision and Ontology Repair.

The tutorial will benefit both researchers who wish to learn about belief revision and those already active in the field who wish to understand the challenges of applying belief change to description logics and what has been done to address them.

09:00-10:00 Session 1B (DECFOML)
Location: Ullmann 104
09:00
Decidability Questions beyond Satisfiability for First-order Modal Logics

ABSTRACT. In this talk, I will discuss algorithmic problems that go beyond the problem of formula satisfiability for first-order modal logics. Examples include the following questions: is an extension of a theory a conservative extension? Does there exist an explicit definition of a relation (for logics without the Beth Definability Property)? Does there exist a formula separating a set of positive and negative examples? We will motivate these questions and formulate open problems and conjectures.

09:00-10:30 Session 1C: DatalogMTL tutorial, part 1/2 (DatalogMTL)

Tutorial on DatalogMTL (https://datalogmtl.github.io/), part 1/2: motivations, introduction to DatalogMTL, and explanation of reasoning techniques.

Location: Ullmann 300
09:00
DatalogMTL

ABSTRACT. The first part of the DatalogMTL tutorial.

09:00-10:30 Session 1D (FoMLAS)

FoMLAS Session 1

Location: Ullmann 201
09:00
Why Robust Natural Language Understanding is a Challenge
PRESENTER: Marco Casadio

ABSTRACT. With the proliferation of Deep Machine Learning into real-life applications, a particular property of this technology has been brought to attention: robustness. Neural networks notoriously present low robustness and can be highly sensitive to small input perturbations. Recently, many methods for verifying networks' general robustness properties have been proposed, but they are mostly applied in Computer Vision. In this paper we propose a verification method for Natural Language Understanding classification based on larger regions of interest, and we discuss the challenges of such a task. We observe that, although the data is almost linearly separable, the verifier does not output positive results, and we explain the problems and implications.

(Submitted as Extended Abstract)

09:30
An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks
PRESENTER: Matan Ostrovsky

ABSTRACT. Convolutional neural networks have gained vast popularity due to their excellent performance in the fields of computer vision, image processing, and others. Unfortunately, it is now well known that convolutional networks often produce erroneous results --- for example, minor perturbations of the inputs of these networks can result in severe classification errors. Numerous verification approaches have been proposed in recent years to prove the absence of such errors, but these are typically geared for fully connected networks and suffer from exacerbated scalability issues when applied to convolutional networks. To address this gap, we present here the CNN-Abs framework, which is particularly aimed at the verification of convolutional networks. The core of CNN-Abs is an abstraction-refinement technique, which simplifies the verification problem through the removal of convolutional connections in a way that soundly creates an over-approximation of the original problem; and which restores these connections if the resulting problem becomes too abstract. CNN-Abs is designed to use existing verification engines as a backend, and our evaluation demonstrates that it can significantly boost the performance of a state-of-the-art DNN verification engine, reducing runtime by 15.7% on average.
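
To make the abstraction-refinement idea concrete, here is a minimal, hypothetical sketch of such a loop in Python; `verify_backend`, `initial_abstraction`, `refine`, and `is_spurious` are assumed stand-ins for illustration, not part of the CNN-Abs API:

```python
def verify_with_abstraction(net, prop, verify_backend,
                            initial_abstraction, refine, is_spurious):
    """Generic abstraction-refinement loop in the spirit described above.

    The abstract network over-approximates the original, so a safety
    proof on the abstraction carries over to the original network.
    """
    abstract_net = initial_abstraction(net)  # e.g., drop conv connections
    while True:
        verdict, counterexample = verify_backend(abstract_net, prop)
        if verdict == "safe":
            return "safe"  # sound: the over-approximation has no violations
        if not is_spurious(counterexample, net, prop):
            return ("unsafe", counterexample)  # genuine violation found
        # The counterexample exists only in the abstraction: restore some
        # removed connections and retry with a tighter abstraction.
        abstract_net = refine(abstract_net, net, counterexample)
```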

10:00
Neural Networks in Imandra: Matrix Representation as a Verification Choice
PRESENTER: Remi Desmartin

ABSTRACT. The demand for formal verification tools for neural networks has increased as neural networks have been deployed in a growing number of safety-critical applications. Matrices are a data structure essential to formalising neural networks. Functional programming languages encourage diverse approaches to matrix definitions. This feature has already been successfully exploited in different applications. The question we ask is whether, and how, these ideas can be applied in neural network verification. A functional programming language Imandra combines the syntax of a functional programming language and the power of an automated theorem prover. Using these two key features of Imandra, we explore how different implementations of matrices can influence automation of neural network verification.

09:00-10:30 Session 1E (IFIP-WG1.6)
Location: Ullmann 306
09:00
Inversion and Determinization in Term Rewriting

ABSTRACT. As a transformational approach, inversion is formulated in several settings, and the basic ideas are naive and similar. To generate a better form, e.g., non-overlapping systems if possible, determinization is used as a post-process of inversion for injective functions, and is the most challenging part in developing an inversion framework. In this talk, I introduce a framework of inversion and determinization in term rewriting. First, I explain a direct inversion of deterministic CTRSs, and a direct MSV-based determinization. Next, I explain the framework proposed by Glück and Kawabe, which uses context-free grammars as intermediate representations of CTRSs. Finally, I explain a framework based on context-free expressions.

10:00
RLooP (the Rewriting List of open Problems)
09:00-10:30 Session 1F (LogTeach)
Location: Ullmann 301
09:00
Why and how to teach logic for CS undergraduates
10:00
How to teach logic to CS undergraduates? Step-by-step!

ABSTRACT. Let us formulate at the beginning who these CS undergraduates are. Nowadays, to be a pretty good programmer and work in the industry, you don't even need a higher education. Young people can complete their studies in short one-year courses provided both by large companies and by companies specialising in professional education in the field of programming. Moreover, there is a lot of special training for adults (retraining from other professions). In Yandex (both the Russian and international parts) we now have a service called Practicum (https://practicum.com/), offering 6-9 month online trainings in programming and data analysis, which have given new IT jobs to thousands of people. This means that the Computer Science specialisation at the university should teach something deeper than programming for the industry. Therefore, we will talk about "a Computer Science specialist with a broad outlook, fundamental training, a strong mathematical apparatus, who knows all the useful mathematical theories applicable to practice".

In general, we want to give those students knowledge of logic which: (1) covers all the theory useful for real applications in Computer Science/Programming/Data analysis; (2) gives a general understanding of logic and set theory, and a general picture of the world (the existence of unsolvable problems, for example); (3) slightly opens the "doors" to the sections of logic that are required for theoretical Computer Science.

Logic is very complicated and unclear for most students, especially if they get all their knowledge of logic in a short period of time (they don't have enough time to realise and recognise it). At the same time, at different stages of study, both for general understanding and for practice, different sections of logic are required. These sections may not even be connected to each other. Therefore, the proposal is to divide the entire logic course into several (even 4-5) parts and give out one part per semester (each can be 5-7 lectures/seminars). Possible parts: (1) Foundations of Mathematics; (2) Foundations of Programming; (3) Type Theory and Machine Proofs…

Thinking about the teaching process, we suppose that: (1) it is necessary to accompany theory with practice so that students can touch it physically (write code, count, apply it to specific models); (2) it is not necessary to overload the theory, but it is important to show some logic culture: paradoxes, interesting tasks, analogies, etc.; (3) let every topic settle down, and check understanding of previous parts!

09:00-09:40 Session 1G: Invited talk -- Dr. Ruth Hoffmann (ModRef)
Location: Ullmann 310
09:00
Constraint modelling and solving: Learning from observing people

ABSTRACT. Research in constraint programming typically focuses on problem-solving efficiency. However, the way users conceptualise problems and communicate with constraint programming tools is often sidelined. How humans think about constraint problems can be important for the development of efficient tools and representations that are useful to a broader audience. Although many visual programming languages exist as an alternative to textual representation for procedural languages, visual encoding of problem specifications has not received much attention. Thus, visualization could provide an alternative way to model and understand such problems. We present an initial step towards better understanding of the human side of the constraint modelling and solving process. We executed a study that catalogs how people approach three parts of constraint programming: (1) the problem representation, (2) the problem solving, and (3) the programming of a solver. To our knowledge, this is the first human-centred study addressing how people approach constraint modelling and solving. We studied three groups with different expertise: non-computer scientists, computer scientists and constraint programmers, and analyzed their marks on paper (e.g., arrows), gestures (e.g., pointing), the mappings to problem concepts (e.g., containers, sets) and any strategies and explanations that they provided. We will discuss the results and the future research this study will hopefully inspire.

09:00-09:50 Session 1I (VardiFest)
Location: Taub 1
09:00
On the unusual effectiveness of automata in logic

ABSTRACT. This talk intends to take the listener along a path weaving a picture from threads of science, history and personal details. It is about connections, connections between theories, between people, between names and places. It is about the beauty and effectiveness of theories and of the individual random walks from which they emerge.

09:00-10:30 Session 1J (WiL)
Location: Ullmann 307
09:00
Embedding Quantitative Properties of Call-by-Name and Call-by-Value into Call-by-Push-Value

ABSTRACT. This talk explores how the (untyped and typed) theories of Call-by-Name (CBN) and Call-by-Value (CBV) can be unified in a more general framework provided by Call-by-Push-Value (CBPV). Indeed, we first present an untyped CBPV-like calculus, called lambda-bang, which encodes untyped CBN and CBV, both from a static and a dynamic point of view. We then explore these encodings in a typed framework, specified by quantitative (aka non-idempotent intersection) types. Three different (quantitative) properties are discussed. The first one is related to upper bounds for reduction lengths, the second one concerns exact measures for reduction lengths and sizes of normal forms, and the last one is about the inhabitation problem. In all these cases, explained and discussed in the talk, the (quantitative) property for CBN/CBV is inherited from the corresponding one in the lambda-bang calculus.

10:00
The essence of type-theoretic elaboration

ABSTRACT. When using type theories in proof assistants the full syntax can quickly become too verbose to handle. One common solution to this problem is to design two type theories: a fully annotated type theory which has good meta-theoretic properties, is suitable for algorithmic processing and resides in the kernel of the proof assistant, and an economic one for the users’ input. The two theories are linked by elaboration, a reconstruction of missing information that happens during, or in parallel with, type-checking. We define a type-theoretic account of an elaboration map and relate its algorithmic properties to decidability of type-checking.

09:00-10:30 Session 1K (XLoKR)

XLoKR Session 1

Location: Ullmann 202
09:00
Invited Talk: Explaining and Arguing with Facts and Rules

ABSTRACT. Many existing techniques in Explainable AI, perhaps most of them, are based on relatively simple statistical summaries that capture the behavior of large classifiers. However, explanations often require more refined reasoning, sometimes in contexts that demand quick responses. To illustrate such scenarios, we start this talk by looking at explanations within recommendation systems, in particular examining the value of knowledge graphs in producing arguments for and against recommended items. We then move to techniques that explain the links stored in a knowledge graph themselves, discussing ways to combine data-driven embeddings with rule-based reasoning. And from there we move to explanations that rely on arguments extracted from facts and rules, sometimes obtained through text processing. We examine formalisms that allow for probabilistic argumentation in the presence of negation and cyclic defeating paths, as those seem to offer the necessary foundation for such explanation settings. The connection with various logical formalisms can then be explored to produce results on complexity and expressivity.

10:00
Using Abstraction for Interpretable Robot Programs in Stochastic Domains
PRESENTER: Till Hofmann

ABSTRACT. A robot's actions are inherently stochastic, as its sensors are noisy and its actions do not always have the intended effects. For this reason, the agent language Golog has been extended to models with degrees of belief and stochastic actions. While this allows more precise robot models, the resulting programs are much harder to comprehend, because they need to deal with the noise, e.g., by looping until some desired state has been reached with certainty, and because the resulting action traces consist of a large number of actions cluttered with sensor noise. To alleviate these issues, we propose to use abstraction. We define a high-level and nonstochastic model of the robot and then map the high-level model into the lower-level stochastic model. The resulting programs are much easier to understand, often do not require belief operators or loops, and produce much shorter action traces.

09:15-10:30 Session 2: Teaching logic (FOMEO)

5 talks, each roughly 10 minutes plus 5 minutes for discussion and questions.

Location: Ullmann 205
09:15
First-Order Logic Workbook
PRESENTER: Ján Kľuka

ABSTRACT. We briefly describe the Logic Workbook project, the culmination of our 5-year long effort to develop a suite of tools aimed at fostering independent active learning in our undergraduate course on first-order logic. It supports students solving sets of problems of multiple kinds (modelling, proving, model building) by integrating our previously stand-alone client-side web apps.

09:30
Towards Graphical Feedback in the DiMo Learning Tool
PRESENTER: Maurice Herwig

ABSTRACT. In this talk we will present recent efforts on extending the DiMo specification language and the tool's capabilities for providing problem-specific (graphical) output. The tool is already present in FOMEO's list of presented tools.

09:45
Iltis: Teaching Logic in the Web

ABSTRACT. The Iltis project provides an interactive, web-based system for teaching the foundations of formal methods. It is designed to allow modular addition of educational tasks as well as to provide immediate and comprehensive feedback. Currently, exercises for various aspects of typical automated reasoning workflows for propositional logic, modal logic, and first-order logic are covered. Recently, Iltis has reached a level of maturity where large parts of introductory logic courses can be supplemented with interactive exercises. We are now working on integrating Iltis with course management platforms to enable teachers as well as students to monitor their progress. Sample interactive course material has been designed and used in such logic courses with more than 200 students. We invite all readers to give it a try! This is joint work with Gaetano Geck, Marko Schmellenkamp, Jonas Schmidt and Thomas Zeume.

10:00
Experiences on Online Teaching of Formal Methods to Systems Engineering Students

ABSTRACT. Clear, precise and focused communication is one of the key components of education. Formal methods provide a way to achieve all of these goals. Hence, we introduced formal methods in the teaching of the course "Systems Engineering 1" as part of the International Master's program of Distributed Systems Engineering (DSE) at TU Dresden in Winter Semester 2019/20. In the same semester, we initiated the concept of "research-based teaching" to integrate our research into the teaching by adding exercises on the attestation process in the emerging field of confidential computing. In the same semester, teaching was shifted to an online format due to COVID-19. Altogether, in various courses of DSE there are currently 12 complete exercises (each of 90 minutes) on remote attestation and ProVerif, based on the research carried out at the chair. Considerable effort was required to design these exercises, as DSE students typically do not have familiarity with formal methods. To the best of our knowledge (and confirmed by the developers of ProVerif), this is the first initiative to introduce ProVerif to such students.

10:15
DOMtutor -- Competitive Programming as Teaching Tool

ABSTRACT. DOMtutor is a set of scripts and problems built for DOMjudge, an evaluation tool used for competitive programming.

09:30-10:30 Session 3A (ASL)

Keynote Talk 1

Location: (virtual only)
09:30
Exact Separation Logic

ABSTRACT. Over-approximating (OX) program logics, such as separation logic (SL), are used to verify properties of heap-manipulating programs: all terminating behaviour is characterised, but the reported results and errors need not be reachable. OX function specifications are thus incompatible with true bug-finding supported by symbolic execution tools such as Pulse and Gillian. In contrast, under-approximating (UX) program logics, such as incorrectness separation logic (ISL), are used to find true results and bugs: reported results and errors are reachable, but not all behaviour can be characterised. UX function specifications thus cannot capture full verification. We introduce exact separation logic (ESL) for reasoning about heap-manipulating sequential programs, which provides fully verified function specifications compatible with true bug finding: all terminating behaviour is captured in full, and all reported results and errors are reachable. ESL supports mutually recursive functions, which require subtle definitions of internal and external function specifications compared with the familiar definitions for OX logics. We prove a frame-preserving soundness result for ESL, ISL and SL, thus demonstrating functional compositionality of UX reasoning. We verify exact specifications of list algorithms using standard inductive predicates, observing the difference between abstraction, which hides information, and OX reasoning, which loses information. We introduce a symbolic execution semantics that can call functions with ESL specifications, and prove a backwards completeness result demonstrating that verified ESL function specifications are indeed compatible with true bug-finding.

09:30-10:30 Session 3B: Invited Talk (WPTE)
Location: Ullmann 305
09:30
Transforming Text to (and from) XML

ABSTRACT. In industry, there is a critical need for complying with laws, standards, and guidelines. However, there are hundreds of (versions of) international and regional standards and guidelines, and each of them has hundreds of clauses, referring to each other. In AIST, we are developing an XML-based framework to support human developers to navigate the vast amount of documents.

In this talk, I will introduce a Text-to-XML structuring tool TXtruct, a core ingredient of the aforementioned framework.

TXtruct is a language that can describe a text or XML grammar, and at the same time can describe a transformer. Despite this strength, the TXtruct interpreter is implemented in an elegant and minimalistic way, based on a string-diagrammatic semantics. We will discuss some important properties such as termination, productivity, and invertibility.

We will also explore potential applications in the Termination Competitions and Confluence Competitions, where thousands of term rewrite systems are given in different formats.

09:40-10:30 Session 4: Paper presentations (ModRef)
Location: Ullmann 310
09:40
Solving XCSP3 constraint problems using tools from software verification

ABSTRACT. Since 2012, the Software Verification Competition (SV-COMP) has evaluated the performance of a wide range of software verification tools, using a wide range of techniques and representations. These include SAT/SMT solving, abstract interpretation, Constrained Horn Clause solving and finite state automata.

The XCSP3 constraint format serves as an intermediate language for constraint problems. When trying to solve a particular problem, one often wishes to trial a range of techniques to see which one is initially most promising. Unfortunately, there might not always be an implementation of the desired technique for one's formulation.

We propose to address this problem by encoding XCSP3 problems as C program software verification problems. This grants the ability to trial all techniques implemented in SV-COMP entries. We should not expect the encoding to be efficient, but it may be sufficient to identify the most promising techniques.
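
To illustrate the general idea (a sketch only; the paper's actual encoding is certainly more involved), a toy translator might map integer variables to nondeterministic values and constraints to guards, so that the error location is reachable exactly when the CSP is satisfiable. `__VERIFIER_nondet_int` and `reach_error` are the usual SV-COMP conventions:

```python
def csp_to_c(variables, constraints):
    """Emit a C verification task from a toy CSP.

    variables: dict mapping a name to an inclusive (lo, hi) integer domain.
    constraints: C boolean expressions over those names, e.g. "x + y == 7".
    The call to reach_error() is reachable iff the CSP has a solution.
    """
    lines = ["extern int __VERIFIER_nondet_int(void);",
             "extern void reach_error(void);",
             "int main(void) {"]
    for name, (lo, hi) in variables.items():
        lines.append(f"  int {name} = __VERIFIER_nondet_int();")
        lines.append(f"  if ({name} < {lo} || {name} > {hi}) return 0;")
    for c in constraints:
        lines.append(f"  if (!({c})) return 0;")  # prune non-solutions
    lines += ["  reach_error();", "  return 0;", "}"]
    return "\n".join(lines)

print(csp_to_c({"x": (0, 9), "y": (0, 9)}, ["x + y == 7", "x < y"]))
```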

10:05
Constraint-based Part-of-Speech Tagging

ABSTRACT. This paper describes a constraint-based part-of-speech (POS) tagger, named CPOST, which treats POS tagging as a constraint satisfaction problem (CSP). CPOST treats each word as a variable, uses a lexicon to determine the domains of variables, employs context constraints to reduce ambiguity, and utilizes statistical models to label variables with values. This paper shows that, with a small number of context constraints encoding some basic linguistic knowledge, CPOST significantly enhances precision in identifying base-form verbs and mitigates the burden on syntax parsing.
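
A tiny, hypothetical rendering of this view in Python (the lexicon, the tag set and the single context constraint are invented for illustration): each word is a variable whose domain is its set of possible tags, and propagation prunes the domains before any statistical disambiguation:

```python
# Hypothetical toy lexicon; each word's domain is its set of possible tags.
LEXICON = {"the": {"DET"}, "dog": {"NOUN", "VERB"}, "barks": {"NOUN", "VERB"}}

def tag(sentence):
    domains = [set(LEXICON[w]) for w in sentence]
    changed = True
    while changed:  # propagate constraints to a fixpoint
        changed = False
        for i in range(len(domains) - 1):
            # Invented context constraint: a determiner is never
            # immediately followed by a verb.
            if domains[i] == {"DET"} and "VERB" in domains[i + 1] \
                    and len(domains[i + 1]) > 1:
                domains[i + 1].discard("VERB")
                changed = True
    # A statistical model would then pick one tag per remaining domain.
    return domains

print(tag(["the", "dog", "barks"]))  # [{'DET'}, {'NOUN'}, {'NOUN', 'VERB'}]
```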

09:50-10:30 Session 5 (VardiFest)
Location: Taub 1
09:50
Data Path Queries over Embedded Graph Databases

ABSTRACT. Moshe Vardi is a highly inspiring computer scientist, who has made groundbreaking contributions in the last four decades to logic and automata, as well as their diverse applications in computer science, including formal verification, databases, computational complexity, and knowledge reasoning, to name a few. Moshe's research has personally inspired me to conduct research at the border of different areas in computer science, glued together by logic and automata. I would like to dedicate my recent work, joint with Diego Figueira and Artur Jeż, which is to appear in PODS'22, to Moshe. The work combines a number of different areas --- finite model theory, graph databases, automata theory, and SMT --- to all of which Moshe has made significant contributions.

We initiate the study of data-path query languages (in particular, regular data path queries (RDPQ) and conjunctive RDPQ (CRDPQ)) in the classic setting of embedded finite model theory, wherein each graph is "embedded" into a background infinite structure (with a decidable FO theory or fragments thereof). Our goal is to address the current lack of support for typed attribute data (e.g., integer arithmetic) in existing data-path query languages, which is crucial in practice. We propose an extension of register automata by allowing powerful constraints over the theory and the database as guards, and having two types of registers: registers that can store values from the active domain, and read-only registers that can store arbitrary values. We prove NL data complexity for (C)RDPQ over various standard theories in SMT, namely Presburger arithmetic, the real-closed field, the existential theory of automatic structures, and word equations with regular constraints. All these results strictly extend the known NL data complexity of RDPQ with only equality comparisons, and provide an answer to a recent open problem posed by Libkin et al. Among others, we introduce one crucial proof technique for obtaining NL data complexity for data path queries over embedded graph databases, called "Restricted Register Collapse (RRC)", inspired by the notion of Restricted Quantifier Collapse (RQC) in embedded finite model theory.

10:00
LTLf Synthesis as AND-OR Graph Search: Knowledge Compilation at Work

ABSTRACT. Synthesis techniques for temporal logic specifications are typically based on exploiting symbolic techniques, as done in model checking. These symbolic techniques typically use backward fixpoint computation. Planning, which can be seen as a specific form of synthesis, is a witness of the success of forward search approaches. In this talk, we present a forward-search approach to full-fledged Linear Temporal Logic on finite traces (LTLf) synthesis. We show how to compute the Deterministic Finite Automaton (DFA) of an LTLf formula on-the-fly, while performing an adversarial forward search towards the final states, by considering the DFA as a sort of AND-OR graph. Our approach is characterized by branching on suitable propositional formulas, instead of individual evaluations, hence radically reducing the branching factor of the search space. Specifically, we take advantage of techniques developed for knowledge compilation, such as Sentential Decision Diagrams (SDDs), to implement the approach efficiently.
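
The following sketch is a simplification, not the SDD-based implementation: it branches on individual assignments, which is precisely the blow-up the talk's approach avoids by branching on propositional formulas. It only shows the underlying adversarial forward search on a DFA viewed as an AND-OR graph; `delta` and `is_final` are assumed callbacks that build the automaton on the fly:

```python
from itertools import product

def agent_wins(state, env_vars, agent_vars, delta, is_final, path=frozenset()):
    """Forward AND-OR reachability on an on-the-fly DFA.

    OR nodes: the agent picks values for agent_vars; AND nodes: every
    environment choice must still lead to a final state. Revisiting a
    state on the current path counts as a loss, which is sound here
    because positional strategies suffice for reachability objectives.
    """
    if is_final(state):
        return True
    if state in path:
        return False
    path = path | {state}
    for ys in product([False, True], repeat=len(agent_vars)):
        move = dict(zip(agent_vars, ys))
        if all(agent_wins(delta(state, {**move, **dict(zip(env_vars, xs))}),
                          env_vars, agent_vars, delta, is_final, path)
               for xs in product([False, True], repeat=len(env_vars))):
            return True
    return False
```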

10:10
Towards Combination of Logic and Calculus for Near-Optimal Planning in Relational Hybrid Systems

ABSTRACT. The main Web page of FLoC 2022 includes a rephrasing of the opening sentence of Aaron Bradley and Zohar Manna's book: "Logic is the calculus of computation". This phrase naturally leads to the question of whether there is a language with expressive syntax and semantics that can combine the strength of logic and calculus, and whether there are reasoning mechanisms for solving practical problems formulated in that language. It turns out that the answer is affirmative, and combinations of this kind have already been proposed. But I would like to argue that they were not ambitious enough, and not motivated enough by practical problems outside of CS. There is a lot more that can be done.

In particular, M. Vardi and his colleagues studied aspects of how to effectively integrate Task and Motion Planning (TAMP), a research direction that is explored by several research groups. This research direction is related to planning and control of hybrid systems. The latter are called hybrid because they include both discrete parameterized actions and continuous parameterized processes that can be initiated and terminated by the agent's actions or by external events. It turns out that there are interesting classes of hybrid systems where the states have relational structure, as in databases. I would like to argue that the recently developed Hybrid Temporal Situation Calculus provides a language for a combination of logic and calculus that can be useful for representation and reasoning about relational hybrid systems. This language develops the situation calculus of John McCarthy and Ray Reiter, in particular Reiter's sequential temporal situation calculus, towards including ordinary differential equations to compute continuous change over time.

Also, I would like to argue that constraint logic programming (CLP) leads naturally to reasoning algorithms that can find near-optimal solutions for the planning problem in relational hybrid systems. Computational problems of this kind are usually solved by a large community using the methods of Mixed-Integer Non-Linear Programming (MINLP). However, I formulate a hypothesis that logic and reasoning algorithms, combined with calculus and strengthened by external NLP solvers, can be better suited not only to the task of modelling relational hybrid systems, but also to the task of computing near-optimal plans more efficiently, and, moreover, for larger-scale systems than are currently feasible with MINLP alone. I conclude with an invitation to explore further a new direction that can demonstrate the "Not So Unusual Effectiveness of Logic" from an angle that was little explored before.

10:20
SAT-based Reasoning Techniques for LTL over Finite and Infinite Traces

ABSTRACT. In this talk, I am going to present the results of SAT-based reasoning for LTL over finite and infinite traces, the crux of which is to utilize the SAT solver to compute the states of the automaton w.r.t. an LTL (or LTLf) formula on the fly. This series of work has been carried out with Moshe since 2014, and partially with Kristin as well. Up to now, we have several applications based on this framework, including LTL satisfiability checking [3, 4], LTLf satisfiability checking [1, 2], and LTLf synthesis [5]. I will present such applications in a high-level way and show the state-of-the-art results we have obtained. Finally, I will also discuss possible research directions that are worth exploring further, e.g., on-the-fly LTL model checking, LTL synthesis, etc.

10:00-10:30 Session 6 (DECFOML)
Location: Ullmann 104
10:00
Are Bundles Good Deals for FOML?

ABSTRACT. Bundled products are often offered as good deals to customers. When we bundle quantifiers and modalities together (as in ∃x◻, ◊∀x etc.) in first-order modal logic (FOML), we get new logical operators whose combinations produce interesting fragments of FOML without any restriction on the arity of predicates, the number of variables, or the modal scope. It is well known that finding decidable fragments of FOML is hard, so we may ask: do bundled fragments that exploit the distinct expressivity of FOML constitute good deals in balancing expressivity and complexity? There are a few positive earlier results on some particular fragments. In this paper, we try to fully map the terrain of bundled fragments of FOML in terms of (un)decidability, and in the cases without a definite answer yet, we show that they lack the finite model property. Moreover, whether the logics are interpreted over constant domains (across states/worlds) or increasing domains presents another layer of complexity. We also present the "loosely bundled fragment", which generalizes the bundles and yet retains decidability (over increasing-domain models). (Joint work with Anantha Padmanabha, R. Ramanujam, and Yanjing Wang)

10:30-11:00 Coffee Break
10:30-12:00 Session 7 (ASL)

Theories of Separation

Location: (virtual only)
10:30
Two Results on Separation Logic With Theory Reasoning
PRESENTER: Nicolas Peltier

ABSTRACT. Two results are presented concerning the entailment problem in Separation Logic with inductively defined predicate symbols and theory reasoning. First, we show that the entailment problem is undecidable for rules satisfying the progress, connectivity and establishment conditions, if theory reasoning is considered. The result holds for a wide class of theories, even with a very low expressive power. For instance it applies to the natural numbers with the successor function, or with the usual order. Second, we show that every entailment problem can be reduced to an entailment problem containing no equality (neither in the formulas nor in the recursive rules defining the semantics of the predicate symbols).

11:00
Labelled Tableaux for Linear Time Bunched Implication Logic
PRESENTER: Daniel Mery

ABSTRACT. In this paper, we define a temporal extension of Bunched Implication Logic (BI), called LTBI, that combines the LTL temporal connectives with BI connectives. After studying its semantics and illustrating its expressiveness, we design a labelled tableaux calculus for this new logic and then prove its soundness w.r.t. the Kripke-style semantics of LTBI. We finally discuss completeness issues of the calculus.

11:30
On the Expressiveness of a Logic of Separated Relations
PRESENTER: Radu Iosif

ABSTRACT. We compare the model-theoretic expressiveness of the symbolic heap fragment of Separation Logic (SL) with inductive definitions with (Monadic) Second Order Logic ((M)SO). While SL and MSO are incomparable on structures of unbounded treewidth, it turns out that SL can be embedded in SO and that MSO becomes a strict subset of SL, when the treewidth of the models is bounded by a constant and the relation symbols have arity one or two. We leave open the problems of equivalence between (M)SO and SL on bounded treewidth models over signatures containing relation symbols of arity three or more.

10:45-12:30 Session 9: Extensions of ASP (ASPOCP)

Extensions of ASP

Location: Ullmann 311
10:45
Constrained Default Logic Programming
PRESENTER: Shutao Zhang

ABSTRACT. This paper develops a new formalism, CDLP, by combining ASP and constrained default logic to facilitate modeling questions with incomplete information, such that both Reiter's defaults and constraint defaults can be represented directly. After that, a method to split CDLP programs is proposed by extending the notion of splitting sets to accommodate its property of constraint nonmonotonicity. Finally, we design a primary algorithm for solving CDLP programs.

11:05
On the generalization of learnt constraints in ASP solving for temporal domains
PRESENTER: Klaus Strauch

ABSTRACT. The representation of a dynamic problem in ASP usually boils down to using copies of variables and constraints, one for each time stamp, no matter whether it is directly encoded or via an action or temporal language. The multiplication of variables and constraints is commonly done during grounding and the solver is completely ignorant about the temporal relationship among the different instances. On the other hand, a key factor in the performance of today's ASP solvers is conflict-driven constraint learning. Our question is now whether a constraint learned for particular time steps can be generalized and reused at other time stamps, and ultimately whether this enhances the overall solver performance on dynamic problems. Knowing full well the domain of time, we study conditions under which learned dynamic constraints can be generalized and propose a simple translation of the original logic program such that, for the translated programs, all learned constraints can be generalized to other time points. Last but not least, we empirically evaluate the impact of adding the generalized constraints to the ASP solver.
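
For intuition, here is a minimal sketch of the shifting step, assuming (as the paper's translation is designed to guarantee) that learned constraints are invariant under time shifts; the representation of nogoods as sets of (atom, time) pairs is invented for illustration:

```python
def generalize_nogood(nogood, horizon):
    """Return all time-shifted copies of a learned dynamic nogood.

    nogood: frozenset of (atom, time) pairs learned at particular steps.
    Only shifts keeping every time point inside [0, horizon] are produced.
    """
    times = [t for _, t in nogood]
    lo, hi = min(times), max(times)
    return [frozenset((a, t + d) for a, t in nogood)
            for d in range(-lo, horizon - hi + 1) if d != 0]

# Example: a nogood over steps 2 and 3, reused over a horizon of 5 steps.
ng = frozenset({("occurs(move)", 2), ("holds(at(a))", 3)})
for copy in generalize_nogood(ng, 5):
    print(sorted(copy))
```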

11:25
Conflict Handling in Product Configuration using Answer Set Programming
PRESENTER: Konstantin Herud

ABSTRACT. Product configuration is one of the most important industrial applications of artificial intelligence. To enable customers to individualize complex products, usually logical configuration models are necessary. One challenge that exists here is the communication of product knowledge to the customer. This work deals with the situation of invalid customer requirements under given logical constraints. The goal is to explain why configurations are invalid and how they can be minimally changed to become valid again. This involves computing all minimal subsets of both constraints and requirements that make the configuration unsatisfiable. For this, we present a novel approach based on the declarative paradigm of Answer Set Programming. We empirically demonstrate its suitability for real-time configurators with thousands of features and constraints. The approach thus fills the gap of an easy-to-implement as well as high-performing conflict resolution component.

11:45
Hard Variants of Stable Marriage Problems: An Empirical Comparison of ASP with Other Computing Paradigms
11:00-12:30 Session 10A (BCORE)
Location: Ullmann 308
11:00
Tutorial: Belief Change, Ontology Repair and Evolution (Part 2)

ABSTRACT. Belief Revision deals with accommodating new information in a knowledge base, where potential inconsistencies may arise. Several solutions have been proposed, turning it into an active field of research since the '80s. Theoretical results were established in classical propositional logic and became the standard in the area, with rationality postulates, mathematical constructions and representation theorems. More recently, results have been adapted to different knowledge representation formalisms such as Horn Logic and Description Logics.

In this tutorial, I will introduce Belief Revision, starting from the seminal AGM paradigm and giving an overview of the area in the last 35 years. In the second part, I will focus on applying Belief Revision to Description Logics in general and the relation between Revision and Ontology Repair.

The tutorial will benefit both researchers who wish to learn about belief revision and those already active in the field who wish to understand the challenges of applying belief change to description logics and what has been done to address them.

11:00-12:00 Session 10B (DECFOML)
Location: Ullmann 104
11:00
Terminating sequent calculi for decidable fragments of FOML

ABSTRACT. In the last few years, some fragments of FOML have been shown to be decidable, e.g., the one-variable fragment, the monodic fragments, and some guarded fragments. Most proofs of decidability deal with the satisfiability problem and use model-theoretic techniques. This talk presents some preliminary results on terminating sequent calculi that allow us to give a proof-theoretic proof of decidability for the validity problem in some fragments of FOML. This is joint work with Sara Negri (Genova) and Matteo Tesi (SNS, Pisa).

11:00-12:30 Session 10C: DatalogMTL tutorial, part 2/2 (DatalogMTL)

Tutorial on DatalogMTL (https://datalogmtl.github.io/), part 2/2: reasoning systems (MeTeoR and Ontop), demos, and conclusions.

Location: Ullmann 300
11:00
DatalogMTL

ABSTRACT. The second part of the DatalogMTL tutorial.

11:00-12:30 Session 10D: Teaching formal methods (FOMEO)

5 talks, each roughly 10 minutes plus 5 minutes for discussion and questions.

At the end, we have an additional 15 minutes for discussion, for talks running a bit longer, or for getting to the buffet early.

We kindly ask presenters from the first two sessions to be in the Gather space during the final 15 minutes so that remote participants can ask questions and discuss with them.

Location: Ullmann 205
11:00
Tableaunoir: an online blackboard for teaching formal methods and making slides

ABSTRACT. Tableaunoir was already presented at FOMEO 21. Tableaunoir is an online collaborative blackboard tool with fridge magnets, available in many languages. The main raison d'être of Tableaunoir is the use of fridge magnets to make interactive animations. The tool provides magnets for teaching algorithms, automata theory and games. The new feature that will be presented at FOMEO 22 is making slides (presentations).

11:15
Teaching Concurrency with pseuCo Book

ABSTRACT. This presentation showcases pseuCo Book, an interactive textbook for concurrent programming, and its use in the "Concurrent Programming" lecture at Saarland University. In contrast to many other tools that have a narrow scope and lack context, or only work as add-ons to textbooks and other course materials, pseuCo Book provides a truly interactive textbook experience where interactive demonstrations and exercises are interwoven with more classical textual elements. PseuCo Book is a work in progress and currently has two chapters: one on notions of equality like trace equivalence, bisimilarity, and observation congruence, and one focussing on the fundamentals of practical concurrent programming with message-passing or shared-memory concurrency.

11:30
Teaching Formal Languages with Iltis - A Preview

ABSTRACT. The Iltis project provides an interactive, web-based system for teaching the foundations of formal methods. This abstract gives a preview on Iltis' support of exercises on formal languages.

11:45
Automata Tutor

ABSTRACT.
Name: Automata Tutor
Link: automatatutor.com
Area: Formal languages
Content: Exercises on DFA, NFA, RE, PDA and CFG (understanding, construction, conversions and further algorithms)
Usage: Register online - Become an instructor - Create your own exercises for your course

12:00
Visualization of structural operational semantics over programs of a simple imperative language
PRESENTER: Ján Perháč

ABSTRACT. Our research team "Theory of Programming" at the Faculty of Electrical Engineering and Informatics, Technical University of Košice, Slovakia, is interested in theoretical informatics. Our research and teaching are dedicated to formal methods and systems. Currently, we teach the following subjects: Formal Languages, Logic for Informaticians, Type Theory, and Semantics of Programming Languages. We use traditional (mathematical) methods of teaching on blackboards in class. The COVID-19 pandemic made our teaching complicated; we had to adapt and improvise, and we replaced the blackboard in class with a graphics tablet. But there was still a missing student-teacher interaction, as when, for example, a student comes after class for a consultation to check whether they understand a topic correctly, whether their calculations are correct, and so on. Therefore, we have decided to develop web tools that simulate calculations as they would be done on a blackboard. The first implemented method is structural operational semantics (SOS) over a simple imperative language with the following syntax:

• e ::= n | x | e + e | e - e | e * e
• b ::= true | false | e = e | e ≤ e | ¬ b | b ∧ b
• S ::= x := e | skip | S; S | if b then S else S | while b do S

Our tool is developed as a web application and is therefore quickly accessible, with a simple UI. Users type a program in our language and the tool provides a calculation with the following options:

• The whole sequence: displays the whole solution in one step.
• Step by step: displays the calculation by each configuration of the program's SOS; here it is also possible to interactively guess the next configuration.
• Single instruction: provides a simple interface for sliding between configurations.
• There are also options for uploading the program from a file, exporting results to a PNG file, showing "help" with the user manual, setting other parameters of the UI and input editor (such as colour, text size, etc.), and many others.
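
As a hedged illustration of the kind of calculation such a tool displays (an invented example, not taken from the tool itself): a small-step SOS trace for the program x := 1; y := x, writing s[x ↦ 1] for a state update, could be shown configuration by configuration as follows:

```latex
% Hypothetical small-step SOS trace for  x := 1; y := x  starting in state s:
\langle x := 1;\ y := x,\ s\rangle
  \Rightarrow \langle y := x,\ s[x \mapsto 1]\rangle
  \Rightarrow s[x \mapsto 1][y \mapsto 1]
```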

11:00-12:30 Session 10E (FoMLAS)

FoMLAS Session 2

Location: Ullmann 201
11:00
Tight Neural Network Verification via Semidefinite Relaxations and Linear Reformulations
PRESENTER: Jianglin Lan

ABSTRACT. We present a novel semidefinite programming (SDP) relaxation that enables tight and efficient verification of neural networks. The tightness is achieved by combining SDP relaxations with valid linear cuts, constructed by using the reformulation-linearisation technique (RLT). The computational efficiency results from a layerwise SDP formulation and an iterative algorithm for incrementally adding RLT-generated linear cuts to the verification formulation. The layer RLT-SDP relaxation here presented is shown to produce the tightest SDP relaxation for ReLU neural networks available in the literature. We report experimental results based on MNIST neural networks showing that the method outperforms the state-of-the-art methods while maintaining acceptable computational overheads. For networks of approximately 10k nodes (1k, respectively), the proposed method achieved an improvement in the ratio of certified robustness cases from 0% to 82% (from 35% to 70%, respectively).

11:30
A Cascade of Checkers for Run-time Certification of Local Robustness
PRESENTER: Ravi Mangal

ABSTRACT. Neural networks are known to be susceptible to adversarial examples. Different techniques have been proposed in the literature to address the problem, ranging from adversarial training with robustness guarantees to post-training and run-time certification of local robustness using either inexpensive but incomplete verification or sound, complete, but expensive constraint solving. We advocate for the use of a run-time cascade of over-approximate, under-approximate, and exact local robustness checkers. The exact check in the cascade ensures that no unnecessary alarms are raised, an important requirement for autonomous systems where resorting to fail-safe mechanisms is highly undesirable. Though exact checks are expensive, via two case studies, we demonstrate that the exact check in a cascade is rarely invoked in practice.
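
As a minimal sketch of such a cascade (the three-valued checker interface is an assumption for illustration, not the paper's API): an over-approximate checker can only certify robustness, an under-approximate checker (e.g., an attack) can only refute it, and the exact check runs only when both are inconclusive:

```python
def cascaded_check(x, eps, over_check, under_check, exact_check):
    """Run cheap incomplete checkers first; fall back to the exact one.

    Each checker returns True (robust), False (not robust) or None
    (inconclusive within its approximation).
    """
    if over_check(x, eps) is True:      # sound for "robust" only
        return "robust"
    if under_check(x, eps) is False:    # sound for "not robust" only
        return "not robust"
    return "robust" if exact_check(x, eps) else "not robust"
```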

12:00
RoMA: a Method for Neural Network Robustness Measurement and Assessment
PRESENTER: Natan Levy

ABSTRACT. Neural network models have become the leading solution for a large variety of tasks, such as classification, language processing, and others. However, their reliability is heavily plagued by adversarial inputs: inputs generated by adding tiny perturbations to correctly-classified inputs, and for which the neural network produces erroneous results. In this paper, we present a new method called Robustness Measurement and Assessment (RoMA), which measures the robustness of a neural network model against such adversarial inputs. Specifically, RoMA determines the probability that a random input perturbation might cause misclassification. The method allows us to provide formal guarantees regarding the expected frequency of errors that a trained model will encounter after deployment. The type of robustness assessment afforded by RoMA is inspired by state-of-the-art certification practices, and could constitute an important step towards integrating neural networks in safety-critical systems.
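
A bare-bones Monte Carlo version of this kind of probabilistic measurement is sketched below. This is a simplification: the actual RoMA method relies on more refined statistical machinery than plain sampling, and `model` is an assumed classifier callback:

```python
import numpy as np

def misclassification_probability(model, x, label, sigma=0.02,
                                  n_samples=10_000, seed=0):
    """Estimate the probability that a random perturbation of input x
    (i.i.d. Gaussian noise with std sigma) changes the predicted label."""
    rng = np.random.default_rng(seed)
    errors = sum(model(x + rng.normal(0.0, sigma, size=x.shape)) != label
                 for _ in range(n_samples))
    return errors / n_samples
```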

11:00-12:30 Session 10F (HoTT/UF)
Location: Ullmann 303
11:00
TBA
12:00
Elementary Simplicial Collapses in Cubical Agda
11:00-12:30 Session 10G (IFIP-WG1.6)
Location: Ullmann 306
11:00
Quantitative Weak Linearisation

ABSTRACT. Weak linearisation was defined years ago through a static characterisation of the intuitive notion of virtual redex, based on (legal) paths computed from the (syntactical) term tree. Weak-linear terms impose a linearity condition only on functions that are applied (consumed by reduction); functions that are not applied (and therefore persist in the term along any reduction) can be non-linear. This class of terms was shown to be strongly normalising, with typability decidable in polynomial time. We revisit this notion through non-idempotent intersection types (also called quantitative types). By using an effective characterisation of minimal typings, based on the notion of tightness, we are able to distinguish between “consumed” and “persistent” term constructors, which allows us to define an expansion relation between general lambda-terms and weak-linear lambda-terms, whilst preserving normal forms by reduction.

12:00
The International School on Rewriting

ABSTRACT. This slot contains an advertisement for ISR in Tbilisi, a proposal and vote for ISR 2024, and perhaps a general discussion on the future of ISR.

11:00-12:30 Session 10H: learning interpretable ML models (LMML)
Location: Ullmann 101
11:00
Building Optimal Decision Trees

ABSTRACT. Decision tree learning is a widely used approach in machine learning, favoured in applications that require concise and interpretable models. Heuristic methods are traditionally used to quickly produce models with reasonably high accuracy. A commonly criticised point, however, is that the resulting trees may not necessarily be the best representation of the data in terms of accuracy and size. In recent years, this motivated the development of optimal classification tree algorithms that globally optimise the decision tree, in contrast to heuristic methods that perform a sequence of locally optimal decisions.

In this talk I will explore the history of building decision trees, from greedy heuristic methods to modern optimal approaches.

In particular I will discuss a novel algorithm for learning optimal classification trees based on dynamic programming and search. Our algorithm supports constraints on the depth of the tree and number of nodes. The success of our approach is attributed to a series of specialised techniques that exploit properties unique to classification trees. Whereas algorithms for optimal classification trees have traditionally been plagued by high runtimes and limited scalability, we show in a detailed experimental study that our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances, providing several orders of magnitude improvements and notably contributing towards the practical realisation of optimal decision trees.

I will briefly consider what remains to explore in creating optimal decision trees.
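
As a toy illustration of the dynamic-programming view (binary features and labels, a depth budget only; the actual algorithm also constrains the number of nodes and relies on specialised bounds and caching to scale far beyond this sketch):

```python
from functools import lru_cache

def optimal_tree_error(X, y, max_depth):
    """Minimum number of misclassifications achievable by any decision
    tree of depth at most max_depth, over binary feature tuples X and
    binary labels y. For intuition only: exponential without the
    specialised techniques described in the talk."""
    n_features = len(X[0])

    @lru_cache(maxsize=None)
    def solve(idxs, depth):
        labels = [y[i] for i in idxs]
        leaf_error = len(labels) - max(labels.count(0), labels.count(1))
        if depth == 0 or leaf_error == 0:
            return leaf_error
        best = leaf_error
        for f in range(n_features):  # try every split feature
            left = tuple(i for i in idxs if X[i][f] == 0)
            right = tuple(i for i in idxs if X[i][f] == 1)
            if left and right:
                best = min(best,
                           solve(left, depth - 1) + solve(right, depth - 1))
        return best

    return solve(tuple(range(len(X))), max_depth)

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(optimal_tree_error(X, [0, 1, 1, 0], max_depth=2))  # XOR: 0 errors
```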

11:45
SAT-Based Induction of Explainable Decision Trees

ABSTRACT. Decision trees are one of the most interpretable machine learning models and due to this property have gained increased attention in the past years. In this context, decision trees of small size and depth are of particular interest and new methods have been developed that can induce decision trees that are both small and accurate.

In this talk, I will give an overview of SAT-based methods that have been proposed, starting with general encodings for inducing decision trees and then discussing options for scaling them to larger instances in a heuristic manner. I will give particular focus to our algorithm DT-SLIM, which reduces the size and depth of a given decision tree by using a SAT solver to repeatedly improve localized parts of the tree.

Joint work with Stefan Szeider.

11:00-12:30 Session 10I (LogTeach)
Location: Ullmann 301
11:00
Position Paper: Mathematical Logic through Python
PRESENTER: Noam Nisan

ABSTRACT. We briefly motivate and describe our new book, "Mathematical Logic through Python", which is forthcoming (2022) in Cambridge University Press.

12:00
Whom to teach logic?

ABSTRACT. The purpose of the workshop is to discuss whether and how we should teach logic to computer science students. However, the perspectives of teaching logic to "pure math" students are also not brilliant. In both cases, mandatory courses vanish from the curriculum, and non-mandatory courses attract not so many students. So I change the focus a little bit: to what audience should we teach logic?

My answer is that we should mix interested math and CS students, and teach those who need logic together.

Another question is when to teach logic. I will argue that different parts of logic must be taught in different parts of the curriculum.

11:00-12:30 Session 10J: Paper presentations (ModRef)
Location: Ullmann 310
11:00
A portfolio-based analysis method for competition results

ABSTRACT. Competitions such as the MiniZinc Challenges or the SAT competitions have been very useful sources for comparing the performance of different solving approaches and for advancing the state of the art in their fields. Traditional competition settings often focus on producing a ranking between solvers based on their average performance across a wide range of benchmark problems and instances. While this is a sensible way to assess the relative performance of solvers, such a ranking does not necessarily reflect the full potential of a solver, especially when we want to utilise a portfolio of solvers instead of a single one for solving a new problem. In this paper, I will describe a portfolio-based analysis method which can give complementary insights into the performance of participating solvers in a competition. The method is demonstrated on the results of the MiniZinc Challenges, and new insights gained from the portfolio viewpoint are presented.
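
One natural statistic in a portfolio-based analysis is the virtual best solver (VBS) and each solver's marginal contribution to it. The following is a sketch of those two measures under an assumed runtime-table format (None marking an unsolved instance); whether the paper uses exactly these measures is not stated here:

```python
def virtual_best(runtimes):
    """runtimes: {solver: {instance: seconds or None if unsolved}}.
    Returns the per-instance best time over all solvers."""
    instances = next(iter(runtimes.values()))
    return {i: min((t[i] for t in runtimes.values() if t[i] is not None),
                   default=None)
            for i in instances}

def marginal_contribution(runtimes, solver):
    """Instances on which dropping `solver` strictly worsens the VBS."""
    reduced = virtual_best({s: t for s, t in runtimes.items() if s != solver})
    full = virtual_best(runtimes)
    return [i for i, t in full.items()
            if t is not None and (reduced[i] is None or reduced[i] > t)]
```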

11:25
Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended abstract)
PRESENTER: Emilio Gamba

ABSTRACT. We build on a recently proposed method for step-wise explaining solutions of constraint satisfaction problems. An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used, and where the sequence explains all logical consequences of the problem. The explanation-generation algorithms presented in that work rely heavily on calls for Minimal Unsatisfiable Subsets (MUS) of a derived unsatisfiable formula, exploiting a one-to-one correspondence between so-called non-redundant explanations and MUSs. The generation of an explanation sequence for a logic grid puzzle ranges from a few minutes to a few hours, which is more than a human would take. Therefore, we build on these formal foundations and tackle the main points of improvement, namely how to efficiently generate explanations that are provably optimal (with respect to the given cost metric). To address this issue, we develop (1) an algorithm based on the hitting set duality for finding optimal constrained unsatisfiable subsets, where `constrainedness' reduces the multiple calls for (optimal) unsatisfiable subsets to a single call; (2) a method for re-using relevant information over multiple calls to these algorithms and; (3) methods exploiting domain-specific information to speed-up explanation sequence generation. We experimentally validate our algorithms on a large number of CSP instances. These experiments show that our algorithms outperform the MUS approach both in terms of quality of the explanation and in time to generate it. We show that this approach can be used to effectively find sequences of optimal explanation steps for a large set of constraint satisfaction problems.
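
The bare duality loop behind this kind of algorithm looks roughly as follows (a sketch only: `is_sat` is an assumed oracle, the hitting-set step is done exhaustively rather than with MaxSAT/MIP, and the paper's "constrainedness" and information re-use across calls are omitted; "optimal" is simplified to "smallest"):

```python
from itertools import combinations

def smallest_unsat_subset(constraints, is_sat):
    """Implicit-hitting-set search for a smallest unsatisfiable subset
    of `constraints` (assumed unsatisfiable as a whole)."""
    to_hit = []  # complements of maximal satisfiable subsets found so far
    while True:
        hs = min_hitting_set(constraints, to_hit)
        if not is_sat(hs):
            return hs  # unsat and minimum-size among all hitting sets
        mss = grow(hs, constraints, is_sat)
        to_hit.append([c for c in constraints if c not in mss])

def min_hitting_set(universe, sets):
    # Exhaustive minimum hitting set; real solvers use MaxSAT or MIP here.
    for k in range(len(universe) + 1):
        for cand in map(list, combinations(universe, k)):
            if all(any(c in s for c in cand) for s in sets):
                return cand

def grow(subset, constraints, is_sat):
    """Extend a satisfiable subset to a maximal satisfiable subset."""
    mss = list(subset)
    for c in constraints:
        if c not in mss and is_sat(mss + [c]):
            mss.append(c)
    return mss
```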

11:50
Automatic Generation of Dominance Breaking Nogoods for Constraint Optimization
PRESENTER: Allen Z. Zhong

ABSTRACT. Dominance relations describe the relations among solutions of Constraint Optimization Problems (COPs), where some solutions are known to be subordinate to others concerning satisfiability of constraints and/or optimality of the objective. Dominance breaking is an effective technique to speed up Branch-and-Bound search for optimal solutions by exploiting dominance relations to prune away search space containing only dominated solutions. In this paper, we review recent progress on dominance breaking, and introduce a new line of work on automatic dominance breaking, a framework to automate the identification and exploitation of dominance relations in COPs, thereby generating effective dominance breaking nogoods. The major idea is to generate (a) only nogoods as the building blocks to model arbitrary dominance breaking constraints and (b) only the nogoods that are efficiently and conveniently identifiable by constraint satisfaction. We also discuss possible directions of future work.

11:00-12:30 Session 10K (PCCR)
Location: Ullmann 203
11:00
Parameterized Approximations and CSPs

ABSTRACT. We survey parameterized approximation results for optimization versions of constraint satisfaction problems (Max-CSPs). This includes approximation schemes for such fundamental problems as Max-SAT that run in FPT time for various structural parameters of the instance incidence graph. We further give examples of the pivotal role of CSPs in proofs of parameterized inapproximability.

12:00
Kemeny Rank Aggregation: Width Measure and Diversity Notion

ABSTRACT. We study the Kemeny Rank Aggregation (KRA) problem, a well-studied problem lying in the intersection of order theory and social choice theory. Using a problem from order theory called Completion of an Ordering to model KRA, we give an FPT algorithm parameterized by the interval width of the unanimity order of the input votes. We also relate the interval width of the unanimity order to the maximum range parameter and improve on previous parameterized algorithms. Then, we combine the techniques developed for the FPT algorithm with the paradigm of diversity of solutions. This recent trend of research in artificial intelligence has focused on the development of notions of optimality that may be more appropriate in settings where subjectivity is essential. The idea is that instead of aiming at the development of algorithms that output a single optimal solution, the goal is to investigate algorithms that output a small set of sufficiently good solutions that are sufficiently diverse from one another. In this way, the user has the opportunity to choose the solution that is most appropriate to the context at hand. It also displays the richness of the solution space. In this setting, we show that the Kemeny Rank Aggregation problem is fixed-parameter tractable with respect to natural parameters providing natural formalizations of the notions of diversity and of a sufficiently good solution. Our main results work both when considering the traditional setting of aggregation over linearly ordered votes, and in the more general setting where votes are partially ordered. This work is based on joint works with Henning Fernau, Daniel Lokshtanov, Mateus de Oliveira Oliveira, and Petra Wolf [1, 2].

11:00-11:30 Session 10L (VardiFest)
Location: Taub 1
11:00
Formal Aspects of Strategic Reasoning in MAS

ABSTRACT. In formal system design, model checking is a well-established method for automatically checking the global correctness of systems. In this framework, to verify whether a system is correct with respect to a desired property, we describe its structure with a mathematical model, specify the property with a temporal logic formula, and formally check that the model satisfies the specification. This method was first conceived for closed systems, whose behavior is completely determined by their internal states and transitions.

In the late '90s, interest arose in analyzing the behavior of individual components (or sets of components) in systems with multiple entities. The interest began in the field of reactive systems, which are systems that interact continually with their environments and whose behaviour depends on this interaction. Subsequently, researchers looked for logics to reason about, and verify, the strategic behavior of agents in multi-agent systems.

The aim of this contribution is to report on the main developments made in the last 20 years in formal verification of reactive and multi-agent systems. In particular, we will report on CTL* module checking, ATL* model checking, and Strategy Logic.

11:10
Checking Legal Contracts - On a Not So Usual Application of Mechanized Logic
PRESENTER: Stefan Leue

ABSTRACT. The codification of legal artefacts in formal logic has a long tradition, dating back, among others, to seminal works by G. W. Leibniz. More recent works include the use of epistemic and deontic logics in the formalization of laws and contracts. While many of the existing approaches focus on issues of expressiveness, adequateness and non-automated reasoning with respect to the considered logics, our interest lies in a Computer Science and System Analysis perspective. In particular, we aim at developing tools that build on mechanized logics and perform automated consistency analysis for legal contract documents. Inconsistencies in legal contracts can be static, which means that they include contradictory textual information, or they can be dynamic, leading to (a potential) non-executability of the contract during its dynamic execution. We will present a method and tool called ContractCheck [1], which performs static and dynamic logical consistency analyses of Share Purchase Agreements (SPAs) based on the SMT solver Z3. We focus on its application to SPAs in the acquisition of corporations, since these are typically shielded from many of the implicit legal background facts implied by legal dogmatics. Starting from the definition of an ontology for SPAs, we provide structured text blocks in ContractCheck from which the contract can be composed. ContractCheck encodes the relevant legal facts from each of the blocks of an SPA in a logic encompassing decidable fragments of first-order logic (linear real arithmetic, uninterpreted functions). The logical model, together with the applicable logical consistency constraints, is handed over to the SMT solver, and the analysis results are then depicted by ContractCheck. Using an illustrative example we will demonstrate the capabilities of ContractCheck. In addition to the computation of inconsistencies, the model construction capabilities of the SMT solver synthesize dynamic contract executions consistent with the contractual constraints. The MaxSMT capabilities even permit computing preferable orders of claim executions. We view our work as an illustration of how far mechanized logic can take us in understanding and improving even the non-technical facets of the world that we live in. [1] Alan Khoja, Martin Kölbl, Stefan Leue, and Rüdiger Wilhelmi. Automated consistency analysis for legal contracts. In Proc. of the SPIN Symposium on Model Checking Software, SPIN 2022, LNCS. Springer Verlag, 2022. To appear.

11:20
Modelling with Reconfigurable Communication Interfaces

ABSTRACT. We survey our results on a formalism to model and reason about systems with reconfigurable communication interfaces. Processes in our formalism are independent state machines communicating by exchanging messages on broadcast and multicast channels. Messages can include additional data values allowing for further design flexibility. Broadcasts can be customized to target only recipients satisfying certain conditions on their states. Processes can connect to and disconnect from multicast channels and messages can go through only once a condition on the state of all recipients (those listening to the channel) is satisfied. Dependence on the local state of senders and receivers implies that interaction interfaces are reconfigured through execution. Reconfiguration occurs both at message level and channel level.

We show that such systems can be represented and reasoned about symbolically by including existential quantification in the definition of the transition relation. The quantification makes it possible to apply conditions imposed by senders in a way that depends on the local state of recipients. While this symbolic representation supports a global semantics and efficient analysis, it is not compositional. We match the symbolic representation with a compositional semantics through Channeled Transition Systems. When considering the partial order semantics of such reconfigurable systems, we identify that additional information about interleaving needs to be included in the partial order.

On the reasoning side, we consider an extension of LTL that makes it possible to reason about the contents of messages and the intentions of senders when setting conditions on receivers. We support modelling and analysis through model checking of a simple programming language tailored for the production of such processes.

11:00-12:30 Session 10M: Talks (1) (WPTE)
Chair:
Location: Ullmann 305
11:00
Program Equivalence in a Typed Probabilistic Call-by-Need Functional Language
PRESENTER: David Sabel

ABSTRACT. We introduce a call-by-need variant of PCF with a binary probabilistic fair choice operator. We define its operational semantics and contextual equivalence as program equivalence, where the expected convergence is only observed on numbers. We investigate another program equivalence that views two closed expressions as distribution-equivalent if evaluation generates the same distribution on the natural numbers. We show that contextual equivalence implies distribution-equivalence. Future work is to provide a full proof of the opposite direction.
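
A plausible formalization of the distribution-equivalence just described (the notation is ours, not necessarily the paper's):

```latex
% Distribution-equivalence of closed expressions (notation ours):
% evaluation yields the same convergence probability for every number.
e_1 \simeq_{\mathrm{dist}} e_2
  \iff
\forall n \in \mathbb{N} .\; \Pr\!\left[ e_1 \Downarrow n \right] = \Pr\!\left[ e_2 \Downarrow n \right]
```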

11:30
A Machine-Checked Formalisation of Erlang Modules
PRESENTER: Péter Bereczky

ABSTRACT. This paper contributes to our long-term goal of defining the Erlang programming language in the Coq proof assistant, ultimately allowing us to formally reason about Erlang programs and their transformations. In previous work, we have described a sequential subset of the core language of Erlang in different semantics definition styles, which is extended with the module system in the present formalization. In particular, we define modules and inter-module calls in natural and functional big-step semantics, and we prove their equivalence. Last but not least, we validate the formal definition with respect to the reference implementation. The formal theories are machine-checked in Coq.

12:00
Extending a Lemma Generation Approach for Rewriting Induction on Logically Constrained Term Rewriting Systems
PRESENTER: Kasper Hagens

ABSTRACT. When transforming a program into a more efficient version (for example translating a recursive program into an imperative program), it is important to guarantee that the input-output-behavior remains the same. One approach to assure this uses so-called Logically Constrained Term Rewriting Systems (LCTRSs). Two versions of a program are translated into LCTRSs and compared in a proof system (Rewriting Induction). Proving their equivalence in this system often requires the introduction of a lemma. In this paper we review a lemma generation approach by Fuhs, Kop and Nishida and propose two possible extensions.

11:00-12:30 Session 10N (WiL)
Location: Ullmann 307
11:00
Monoidal Width -- Extended Abstract
PRESENTER: Elena Di Lavore

ABSTRACT. We introduce monoidal width as a measure of the difficulty of decomposing morphisms in monoidal categories. By instantiating monoidal width and two variations in a suitable category of cospans of graphs, we capture existing notions, namely branch width, tree width and path width. By changing the category of graphs, we are also able to capture rank width. Through these examples, we propose that monoidal width: (i) is a promising concept that, while capturing known measures, can similarly be instantiated in other settings, avoiding the need for ad-hoc domain-specific definitions and (ii) comes with a general, formal algebraic notion of decomposition using the language of monoidal categories.

11:20
Hypertrace Logic

ABSTRACT. Security policies are often specified as hyperproperties. The logical specification of hyperproperties requires quantification over both time and traces. The interaction between trace and time quantifiers makes it challenging to understand what is being specified by a hyperlogic formula. For example, starting from the high-level two-state requirement that information from x should not flow to z until information in y does not flow to z, eight different specification variants were identified in a recent work. A two-sorted first-order logic, called Hypertrace logic, was introduced to provide mathematical definitions for the variants identified. In this talk, I introduce Hypertrace logic with some of its fragments. My goal is to present hyperproperties through the lens of a first-order formalism and discuss possible new research directions, for example, how to use techniques from modal logic to identify decidable fragments of Hypertrace logic.

11:40
A goal-oriented proof system for epistemic modal logic

ABSTRACT. Epistemic logic enables us to take a logical approach to information, particularly the type of information that is meaningful for an agent as a whole: their knowledge. It encompasses a family of well-developed and well-understood systems. However, its proof-theoretic presentation is usually axiomatic. This approach creates obstacles when we try to integrate it into rich theories of human reasoning and automated reasoning systems. This talk develops new proof systems for Epistemic Logic based on the goal-oriented systems of Gabbay & Olivetti (2000). In a goal-oriented proof system with an epistemic modality, a deductive process begins by asking whether an agent knows a particular objective in a particular state, depending on a particular database and the states accessible to that agent. The general approach of this system is based on databases and objectives. By integrating Epistemic Logic with goal-oriented proof, we can enable new human and automated reasoning applications.

12:00
A Mixed Linear and Graded Logic -- Extended Abstract
PRESENTER: Victoria Vollmer

ABSTRACT. Benton showed in his seminal paper on Linear/Non-Linear logic and models that the of-course modality of linear logic can be split into two modalities connecting intuitionistic logic with linear logic, forming a symmetric monoidal adjunction. In this paper, we give a similar result showing that graded modalities can be split into two modalities connecting graded logic with linear logic. We propose a sequent calculus, its proof theory and categorical model, and a natural deduction system in the form of a term assignment system, which is shown to be isomorphic to the sequent calculus.

11:00-12:30 Session 10P (XLoKR)

XLoKR Session 2

Location: Ullmann 202
11:00
Explaining Description Logic Entailments in Practice with Evee and Evonne
PRESENTER: Stefan Borgwardt

ABSTRACT. When working with description logic ontologies, understanding entailments derived by a description logic reasoner is not always straightforward. So far, the standard ontology editor Protégé offers two services to help: (black-box) justifications for OWL 2 DL ontologies, and (glass-box) proofs for lightweight OWL EL ontologies, where the latter exploits the proof facilities of the reasoner Elk. Since justifications are often insufficient for explaining inferences, there is little tool support for explaining inferences in more expressive DLs. To solve these issues, we present the two sisters Evee and Evonne, which provide proof-based explanations for more expressive DLs. Evee consists of a Java library for generating proofs (Evee-libs) and a plugin for Protégé to show these proofs (Evee-protege). Evonne is a more advanced proof visualisation tool that uses Evee-libs.

11:20
Finding Good Proofs for Answers to Conjunctive Queries Mediated by Lightweight Ontologies (Extended Abstract)
PRESENTER: Stefan Borgwardt

ABSTRACT. In ontology-mediated query answering, access to incomplete data sources is mediated by a conceptual layer constituted by an ontology. To correctly compute answers to queries, it is necessary to perform complex reasoning over the constraints expressed by the ontology. In the literature, there exists a multitude of techniques for incorporating the ontological knowledge into queries. However, few of these approaches were designed with comprehensibility of the query answers in mind. In this article, we try to bridge these two qualities by adapting a proof framework, originally applied to axiom entailment, to conjunctive query answering. We investigate the data and combined complexity of determining the existence of a proof below a given quality threshold, which can be measured in different ways. By distinguishing various parameters such as the shape of a query, we obtain an overview of the complexity of this problem for the lightweight ontology languages DL-Lite_R and EL, and also take a brief look at temporal query answering.

This is an extended abstract of a paper accepted at the 35th International Workshop on Description Logics.

11:40
An API for DL Abduction Solvers

ABSTRACT. We propose a unified API for integrating different DL abduction solvers into applications, similarly to what the OWL API does for deductive OWL/DL reasoners.

12:00
Modular Provenance in Multi-Context Systems
PRESENTER: Matthias Knorr

ABSTRACT. A rapidly increasing amount of data, information and knowledge is becoming available on the Web, often written in different formats and languages, adhering to standardizations driven by the World Wide Web Consortium initiative. Taking advantage of all this heterogeneous knowledge requires its integration for more sophisticated reasoning services and applications. To fully leverage the potential of such systems, their inferences should be accompanied by justifications which allow a user to understand a proposed decision/recommendation, in particular for critical systems (healthcare, law, finances, etc.). However, determining such justifications has commonly only been considered for a single formalism, such as relational databases, description logic ontologies, or declarative rule languages. In this paper, we give an overview of the first approach for providing provenance for heterogeneous knowledge bases, building on the general framework of multi-context systems as an abstract but very expressive formalism to represent knowledge bases written in different formalisms and the flow of information between them.

11:35-12:05 Session 11 (VardiFest)
Location: Taub 1
11:35
On the Extraordinary Effectiveness of Logic in Strategic Reasoning

ABSTRACT. I will talk about Strategy Logic: a powerful language for strategic reasoning that Moshe Vardi introduced a decade ago and that quickly became widely applied in Formal Methods and AI. I will do so with a special focus on its expressiveness in Multi-Agent Systems and Game Theory, as well as on solving Rational Synthesis problems.

11:45
Automated Synthesis of Mechanisms

ABSTRACT. Mechanism Design aims to design a game so that a desirable outcome is reached regardless of agents' self-interests. In this paper, we show how this problem can be rephrased as a synthesis problem, where mechanisms are automatically synthesized from a partial or complete specification in a high-level logical language. We show that Quantitative Strategy Logic is a perfect candidate for specifying mechanisms, as it can express complex strategic and quantitative properties. We solve automated mechanism design in two cases: when the number of actions is bounded, and when agents play in turn.

11:55
Synthesis of plans for teams of manufacturing robots

ABSTRACT. Logical, or formal, specifications have proved extremely useful for automated synthesis of plans, strategies, and other objects such as norms of behaviour in software systems. In addition to purely software systems, synthesis of plans conforming to logical specifications has been shown to work for cyber-physical systems, in particular for manufacturing systems. Among the many synthesis problems Moshe Vardi has worked on is synthesis for manufacturing. The problem in manufacturing synthesis is as follows: given a production recipe, that is, a description of which operations need to be performed in order to manufacture different versions of a product, and a description of a manufacturing facility, synthesise a controller specific to this manufacturing facility, where each operation is assigned to a specific set of manufacturing robots [3]. In [2, 1], synthesis for manufacturing has been investigated where both the production recipes and the manufacturing facilities are modelled as transducers (Mealy machines). It was shown that this version of the synthesis-for-manufacturing problem is decidable in 2EXPTIME when the set of manufacturing resources is fixed, and is undecidable if the problem requires finding how many copies of each manufacturing resource are needed to manufacture the product. In this talk, I would like to outline possible extensions of the problem that keep it decidable, for example using first-order specifications of the production recipes as in [4] and specifying additional constraints on how the production facilities should be used.

12:10-12:40 Session 12A (DECFOML)
Location: Ullmann 104
12:10
The Quantified Reflection Calculus as a Modal Logic

ABSTRACT. The Quantified Reflection Calculus with N modalities, or QRC_N, is a quantified and strictly positive modal logic inspired by the provability reading of the modal symbols. The non-modal language consists of a verum constant and relational symbols as atomic formulas, conjunctions, and universal quantifiers. The available modal connectives are labeled diamonds, one for each natural number n < N. QRC_N statements are assertions of the form A \vdash B, with A and B in this strictly positive language. We describe the axiomatic system for QRC_N and see that it is sound with respect to constant domain Kripke models. Then we focus on the QRC_1 fragment, which is complete with respect to finite and constant domain Kripke models, and thus decidable. The completeness and decidability of the general case with N modalities is still an open problem. (Joint work with Joost J. Joosten)
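
A grammar for the strictly positive language described above (our rendering of the abstract's description, not the authors' official notation):

```latex
% Strictly positive formulas of QRC_N (rendering ours), with verum,
% relational atoms, conjunction, universal quantification, and one
% labeled diamond per n < N; statements have the form A \vdash B.
A, B ::= \top \mid R(x_1, \ldots, x_k) \mid A \land B
         \mid \forall x \, A \mid \Diamond_n A \qquad (n < N)
```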

12:10-12:40 Session 12B (VardiFest)
Location: Taub 1
12:10
That's All I know: On the Effectiveness of Logic in Game Theory

ABSTRACT. We give a logical epistemic characterization of iterated admissibility (i.e., iterated deletion of weakly dominated strategies), based on the intuition that a strategy is admissible if it is one used by an agent who is rational, knows that other agents satisfy the appropriate rationality requirements, and that is "all the agent knows", in that she considers possible all strategies compatible with these assumptions.
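
As a concrete reference point, here is a small sketch (ours, not the talk's formalism) of iterated deletion of weakly dominated strategies in a finite two-player game; the payoff-dictionary conventions are our assumptions, and for weak dominance the surviving set can in general depend on deletion order.

```python
# Iterated deletion of weakly dominated strategies (illustrative sketch).
# p1 maps (row, col) to player 1's payoff; p2 maps (col, row) to player 2's.

def weakly_dominates(payoff, better, worse, opponents):
    diffs = [payoff[(better, o)] - payoff[(worse, o)] for o in opponents]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

def iterated_admissibility(p1, p2, rows, cols):
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for s in rows[:]:  # delete weakly dominated row strategies
            if any(weakly_dominates(p1, t, s, cols) for t in rows if t != s):
                rows.remove(s)
                changed = True
        for c in cols[:]:  # delete weakly dominated column strategies
            if any(weakly_dominates(p2, d, c, rows) for d in cols if d != c):
                cols.remove(c)
                changed = True
    return rows, cols

p1 = {("U", "L"): 1, ("U", "R"): 1, ("D", "L"): 0, ("D", "R"): 1}
p2 = {("L", "U"): 1, ("L", "D"): 0, ("R", "U"): 0, ("R", "D"): 0}
print(iterated_admissibility(p1, p2, ["U", "D"], ["L", "R"]))  # (['U'], ['L'])
```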

12:20
Between Determinism and Nondeterminism

ABSTRACT. In automata-theoretic verification, fair simulation is a sufficient condition for trace inclusion. The two notions coincide if the specification automaton is history-deterministic (also called "good-for-games"). History-determinism is nondeterminism that can be resolved by remembering the past, without need for guessing the future. History-determinism is a natural automaton- and game-theoretic concept that can replace determinism for many verification and synthesis questions.

12:30
A Short Talk Proposal for the VardiFest "On the Not So Unusual Effectiveness of Logic"

ABSTRACT. This is a talk proposal for the VardiFest where I want to recount my one and a half years of learning directly from Moshe and having worked with him on a paper. This talk will be more of a rumination on that wonderful experience (1995-1996) and a homage to the wondrous computer scientist who inspires me even today.

12:30-14:00Lunch Break

Lunches will be held in Taub hall and in The Grand Water Research Institute.

12:30-12:40 Session 13: Spotlight talks (LMML)
Location: Ullmann 101
12:30
MCP: Capturing Big Data by Satisfiability
PRESENTER: Miki Hermann

ABSTRACT. Experimental data is often given as bit vectors, with vectors corresponding to observations and coordinates to attributes, a bit being true if the corresponding attribute was observed. Observations are usually grouped, e.g. into positive and negative samples. Among the essential tasks on such data are compression, the construction of classifiers for assigning new data, and information extraction.

Our system, MCP, approaches these tasks by propositional logic. For each group of observations, MCP constructs a (usually small) conjunctive formula that is true for the observations of the group, and false for the others. Depending on the settings, the formula consists of Horn, dual-Horn, bijunctive or general clauses. To reduce its size, only relevant subsets of the attributes are considered. The formula is a (lossy) representation of the original data and generalizes the observations, as it is usually satisfied by more bit vectors than just the observations. It thus may serve as a classifier for new data. Moreover, (dual-)Horn clauses, when read as if-then rules, make dependencies between attributes explicit. They can be regarded as an explanation for classification decisions.
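
A toy sketch (ours, far simpler than MCP) of the core idea: for each negative observation, greedily build a clause that the negative falsifies while every positive satisfies; the conjunction of these clauses then separates the groups. The function names are hypothetical.

```python
# Greedy construction of separating clauses over 0/1 bit vectors
# (illustrative only; restricting each clause to at most one positive
# literal would yield Horn clauses, as in the (dual-)Horn settings above).

def separating_clause(neg, positives):
    """A clause (list of (index, value) literals) false on `neg`
    and true on every vector in `positives`, or None if impossible."""
    candidates = [(i, 1 - neg[i]) for i in range(len(neg))]  # falsified by neg
    clause, uncovered = [], list(positives)
    while uncovered:
        # greedy set cover: pick the literal satisfied by most uncovered positives
        i, v = max(candidates,
                   key=lambda lit: sum(p[lit[0]] == lit[1] for p in uncovered))
        if not any(p[i] == v for p in uncovered):
            return None  # no literal helps: no separating clause exists
        clause.append((i, v))
        uncovered = [p for p in uncovered if p[i] != v]
    return clause

def learn_formula(positives, negatives):
    return [separating_clause(neg, positives) for neg in negatives]

pos = [(1, 1, 0), (1, 0, 1)]
neg = [(0, 1, 1)]
print(learn_formula(pos, neg))  # [[(0, 1)]]: the single-literal clause x0
```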

14:00-15:00 Session 14A (ASL)

Keynote Talk 2

Location: (virtual only)
14:00
Functional correctness specifications for concurrent data structures: Logical Atomicity in Iris

ABSTRACT. Concurrent separation logic (CSL) has demonstrated that separation logic is exceptionally well-suited for reasoning about concurrent programs. However, CSL on its own is not able to express and exploit the desired notion of functional correctness for concurrent data structures: linearizability. While CSL is regularly employed to *prove* linearizability, linearizability is an extra-logical notion: when working inside the separation logic, there is no way to make *use* of the fact that some data structure is linearizable. Therefore, the TaDA paper in 2014 proposed a new style of specifying concurrent data structures: *logically atomic Hoare triples*. These triples provide a simple way to turn a specification of a sequential data structure (say, a stack) into a concurrent variant of the same specification, stipulating that each operation must "behave atomically". This enables clients of logically atomic triples to make use of the "invariant rule", a key proof rule in CSL to reason about an atomic step of execution. In this talk, I will explain how logically atomic triples are used in Iris to specify and verify concurrent data structures as well as clients of these data structures. I will also show how logically atomic triples are defined inside the Iris program logic by composing lower-level logical primitives.

14:00-15:30 Session 14B: Counting/Probabilistic ASP & Synthesis (ASPOCP)

Counting/Probabilistic ASP & Synthesis

Location: Ullmann 311
14:00
A Semantics For Probabilistic Answer Set Programs With Incomplete Stochastic Knowledge
PRESENTER: David Tuckey

ABSTRACT. Some semantics for probabilistic answer set programs (PASPs) assign probabilities to sets of answer sets and implicitly assume these answer sets to be equiprobable. While this is a common choice in probability theory, it leads to unnatural behaviours with PASPs. We argue that the user should have a level of control over what assumption is used to obtain a probability distribution when the stochastic knowledge is incomplete. To this end, we introduce the Incomplete Knowledge Semantics (IKS) for probabilistic answer set programming. We take inspiration from the field of decision making under ignorance. Given a cost function, expressed as a user-defined ordering over answer sets through weak constraints, we use the notion of an Ordered Weighted Averaging (OWA) operator to distribute the probability over a set of answer sets according to the user's level of optimism. The more optimistic (or pessimistic) a user is, the more (or less) probability is assigned to the more (or less) optimal answer sets. We present an implementation and showcase the behaviour of this semantics on simple examples. We also highlight the impact that different OWA operators have on weight learning, showing that the equiprobability assumption isn't always the best option.
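
A toy illustration (ours, not the IKS definition) of how an OWA-style weight vector can skew probability mass over cost-ordered answer sets; the linear weight scheme and the parameter `alpha` are our assumptions, chosen only to make the optimism/pessimism dial concrete.

```python
# Distribute probability over cost-ordered answer sets with a simple
# OWA-style linear weight scheme (illustrative; not the IKS operator).

def owa_weights(k, alpha):
    """alpha in [0, 1]: 0.5 gives the uniform (equiprobable) distribution;
    higher alpha shifts mass toward lower-cost (more optimal) answer sets."""
    raw = [alpha * (k - 1 - i) + (1 - alpha) * i for i in range(k)]
    total = sum(raw)
    return [r / total for r in raw] if total else [1.0 / k] * k

# Answer sets with costs from weak constraints, best (lowest cost) first.
answer_sets = sorted([("A", 2), ("B", 5), ("C", 9)], key=lambda a: a[1])
for (name, cost), w in zip(answer_sets, owa_weights(3, alpha=0.8)):
    print(name, cost, round(w, 3))  # A gets 0.533, B 0.333, C 0.133
```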

14:20
plingo: A system for probabilistic reasoning in clingo based on lpmln
PRESENTER: Nicolas Ruehling

ABSTRACT. We present plingo, an extension of the ASP system clingo with various probabilistic reasoning modes. Plingo is centered upon LPMLN, a probabilistic extension of ASP based upon a weight scheme from Markov Logic. This choice is motivated by the fact that the core probabilistic reasoning modes can be mapped onto optimization problems and that LPMLN may serve as a middle-ground formalism connecting to other probabilistic approaches. As a result, plingo offers three alternative front-ends, one for LPMLN, PLOG, and PROBLOG. The corresponding input languages and reasoning modes are implemented by means of clingo's multi-shot and theory solving capabilities. Although plingo's core amounts to a re-implementation of LPMLN in terms of modern ASP technology, it integrates various probabilistic reasoning modes with the full modeling language and reasoning spectrum of clingo.

14:40
Automatic Synthesis of Boolean Networks from Biological Knowledge and Data

ABSTRACT. Boolean Networks (BNs) are a simple formalism used to study complex biological systems when the prediction of exact reaction times is not of interest. They play a key role in understanding the dynamics of the studied systems and in predicting their disruption in the case of complex human diseases. BNs are generally built from experimental data and knowledge from the literature, either manually or with the aid of programs. The automatic synthesis of BNs is still a challenge for which several approaches have been proposed. In this paper, we propose ASKeD-BN, a new approach based on Answer-Set Programming to synthesise BNs constrained in their structure and dynamics. By applying our method to several well-known biological systems, we provide empirical evidence that our approach can construct BNs in line with the provided constraints. We compare our approach with three existing methods (REVEAL, Best-Fit and caspo-TS) and show that our approach synthesises a small number of BNs which cover a good proportion of the dynamical constraints, and that the variance of this coverage is low. ------- This work was published at the conference OLA2021. https://www.springerprofessional.de/en/automatic-synthesis-of-boolean-networks-from-biological-knowledg/19574476
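
For readers unfamiliar with the formalism, here is a minimal Boolean network under synchronous updates (our toy example, unrelated to the benchmarks in the paper):

```python
# A three-variable Boolean network with synchronous updates: every variable
# is rewritten simultaneously as a Boolean function of the current state.

def step(state, functions):
    return {var: f(state) for var, f in functions.items()}

functions = {
    "x": lambda s: s["y"],                 # x copies y
    "y": lambda s: s["x"] and not s["z"],  # y needs x active and z inactive
    "z": lambda s: not s["z"],             # z oscillates
}

state = {"x": True, "y": False, "z": False}
for _ in range(4):  # trace the trajectory from the initial state
    state = step(state, functions)
    print(state)
```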

15:00
ASP in Industry, here and there

ABSTRACT. Answer Set Programming (ASP) has become a popular paradigm for declarative problem solving and is about to find its way into industry. This is due to its expressive yet easy knowledge representation language powered by highly performant (Boolean) solving technology. As with many other such paradigms before, the transition from academia to industry calls for more versatility. Hence, many real-world applications are not tackled by pure ASP but rather hybrid ASP. The corresponding ASP systems are usually augmented by foreign language constructs from which additional inferences can be drawn. Examples include linear equations or temporal formulas. For the design of "sound" systems, however, it is indispensable to provide semantic underpinnings right from the start. To this end, we will discuss the vital role of ASP's logical foundations, the logic of Here-and-There and its non-monotonic extension, Equilibrium Logic, in designing hybrid ASP systems and highlight some of the resulting industrial applications.

14:00-15:30 Session 14C (CHANGE)

This is a KR-2022 tutorial on Sunday July 31st: two 1.5h sessions in afternoon with a break: 14:00-15:30 and 16:00-17:30. See details at  https://www.cs.ryerson.ca/~mes/HTSC-Tutorial-KR2022/

Location: Ullmann 200
14:00
Tutorial: Hybrid Temporal Situation Calculus for Planning with Continuous Processes: Semantics

ABSTRACT. This tutorial focuses on the representation and semantics of change. The basic language is the situation calculus. This tutorial covers both the well-known logical foundations and the recent developments in the situation calculus. Specifically, the initial part of the tutorial includes logical characterizations of qualitative change within the situation calculus, and the middle part presents a recent approach that extends the previous version of the situation calculus with new representations for reasoning about quantitative change over time. It turns out that this new version of temporal situation calculus can serve as a basis for the development of a new lifted heuristic planner that can solve planning problems in mixed discrete-continuous domains. It is "lifted" in the sense that it works with action schemata, not with instantiated actions. These hybrid domains include not only the usual discrete actions and qualitative properties, but also continuous processes that can be initiated or terminated by actions. The final part of the tutorial includes an overview of the experimental results collected from running a tentative implementation of the new planner on selected benchmarks designed by the planning community for hybrid domains. Since this tutorial focuses on logical representations and semantics, there will be no discussion of the algorithmic or implementation details related to planning. No previous background in the situation calculus or planning is required. However, it is expected that attendees have a basic understanding of knowledge representation and working knowledge of classical first-order logic.

See details at  https://www.cs.ryerson.ca/~mes/HTSC-Tutorial-KR2022/

14:00-14:30 Session 14D (DECFOML)
Location: Ullmann 104
14:00
Forward Guarded Fragment: A tamed higher-arity extension of ALC

ABSTRACT. In this talk I will present a novel logic called the forward guarded fragment (FGF) that combines ideas of forward and guarded quantification. The logic extends the Description Logic ALC with higher-arity relations in a tamed way: both the knowledge base satisfiability problem and the conjunctive query entailment problem are ExpTime-complete, thus having the same complexity as plain ALC. We provide a few intuitions behind these results and discuss ongoing work towards the model theory and extensions of FGF.

14:00-15:00 Session 14E: Teaching proofs (FOMEO)

4 talks, each roughly 10 minutes plus 5 minutes for discussion and questions.

We kindly ask presenters of this session to be in the Gather space after 15:00 so that remote participants can ask questions and discuss with them.

Location: Ullmann 205
14:00
LogInd, a tool for supporting students in mastering structural induction
PRESENTER: Josje Lodder

ABSTRACT. We present LogInd, a tool for teaching structural induction. LogInd supports students in providing inductive proofs of properties of propositional formulae. Students can enter their proofs stepwise and receive feedback on each step. At any moment they can ask for a hint or next step.

14:15
Prooffold: a prototype for displaying structured proofs

ABSTRACT. Prooffold is a tool for displaying structured proofs, in the same spirit as Leslie Lamport's proposal. Prooffold displays a proof in a top-down manner: from the general idea of the proof to the technical details. The graphical user interface offers panels that the user can open/close to show/hide details. Proofs in Prooffold are written in a dialect of Markdown that offers structural constructions.

14:30
Formal Methods Online: Sequent Calculus Verifier (SeCaV)

ABSTRACT. We present the Sequent Calculus Verifier (SeCaV) and discuss its advantages and disadvantages as a tool for teaching logic, automated reasoning and formal methods. SeCaV is a formalization in the proof assistant Isabelle/HOL of classical first-order logic with constants and functions. The syntax, the semantics and the inductive definition of the sequent calculus proof system along with the soundness and completeness proofs are verified in Isabelle/HOL. SeCaV is accompanied by the SeCaV Unshortener, which is a browser application for developing sequent calculus proofs that are automatically translated into the corresponding Isabelle-embedded proofs. A compact format for the one-sided sequent calculus is used. The system provides feedback on proof rule applications. Online help and examples are available. Hundreds of computer science students have used SeCaV for course exercises and exams. Reference: https://secav.compute.dtu.dk/

14:45
SIVA: Simulation and visualization of the DL tableau algorithm

ABSTRACT. SIVA is a web application for simulating the tableau algorithm (TA) for the DL ALC. A user builds a completion tree (to prove consistency of the input) step by step, supervised by SIVA. The main goal of SIVA is to teach the DL tableau expansion rules and to visualize the completion tree. The visualization with the tool is simpler and more readable than on a blackboard. One of the main advantages is that, besides the completion tree, the tableau itself is also visualized.

14:00-15:30 Session 14F (FoMLAS)

FoMLAS Session 3

Chair:
Location: Ullmann 201
14:00
VNN-COMP 2022
PRESENTER: Mark Müller

ABSTRACT. Over the last years, the formal verification of safety properties of neural networks has received ever-increasing attention from the research community leading to a vast range of applicable tools.

To help practitioners decide which verifier to apply, we hosted the ‘Verification of Neural Networks Competition’ (VNN-COMP’22), facilitating the comparison of current state-of-the-art systems on a range of diverse benchmarks using (cost) equivalent hardware. In this talk, we will present an analysis of the results and have successful participants present the core ideas underlying their methods. Furthermore, we will gather feedback on this year's processes and discuss ideas for future iterations of the competition.

This talk will be delivered jointly by Christopher Brix and Mark Müller.

14:00-15:30 Session 14G (HoTT/UF)
Location: Ullmann 303
14:00
Towards computing cohomology of dependent abelian groups
14:30
A Library of Monoidal Categories for Display and Univalence
PRESENTER: Kobe Wullaert
15:00
The internal AB axioms
14:00-15:30 Session 14H: computation of explanations for black-box ML models (LMML)
Location: Ullmann 101
14:00
A Rate-Distortion Framework for Explaining Model Decisions and Application to CartoonX

ABSTRACT. I will present the Rate-Distortion Explanation (RDE) framework, a mathematically well-founded method for explaining black-box model decisions. The method extracts the relevant part of the signal that led the model to make its decision. As an application, I will present CartoonX, an RDE method based on wavelet representations for explaining image classification. Natural images can be well approximated by piecewise smooth signals — also called cartoon images — and tend to be sparse in the wavelet domain. Motivated by this, CartoonX requires its explanations to be sparse in the wavelet domain, thus extracting the relevant piecewise smooth part of an image. In practice, CartoonX not only provides highly interpretable explanations but also turns out to be particularly apt at explaining misclassifications.

14:45
The Exciting Theory of Formal Explanations

ABSTRACT. Explaining the decisions made by machine learning models is a fundamental challenge given their involvement in almost all areas of our lives, especially those where the stakes are high, such as healthcare, judicial systems, or financial institutions. However, explanations can be misleading or confusing when they don't have a clear interpretation (an explanation without a clear interpretation might need itself to be explained!). The area of Formal eXplainable AI (XAI) studies explanations under a formal theoretical lens, where explanations have precise definitions and thus induce concrete computational problems. In this talk, I will summarize theoretical results about the computational complexity of formal explanations, as well as directions for obtaining them in practice through formal methods.

14:00-15:30 Session 14I (LogTeach)
Location: Ullmann 301
14:00
Undergraduate Logic Teaching in Computing: Why, What, How?

ABSTRACT. Logic lies at the heart of computing, but its presence is somewhat dimming. This is particularly striking at the undergraduate level, where far too little logic training is done. This paper examines this regrettable state of affairs, considering what is actually taught, what should be taught, and how logic should be taught to computing undergraduates.

15:00
Introducing Logic by Stealth to Computer Science Students
PRESENTER: Liam O'Reilly

ABSTRACT. Logic is an inevitable part of any undergraduate computer science degree programme. However, today's computer science student typically finds this to be at best a necessary evil with which they struggle to engage. Twenty years ago, we started to address this issue seriously in our university, and we have instituted a number of innovations throughout the years which have had a positive effect on engagement and, thus, attainment.

14:00-14:40 Session 14J: Invited talk -- dr. Nguyen Dang (ModRef)
Location: Ullmann 310
14:00
A Constraint-Based Tool for Generating Benchmark Instances

ABSTRACT. Benchmarking is fundamental for assessing the relative performance of alternative solving approaches. For an informative benchmark we often need a sufficient quantity of instances with different levels of difficulty and the ability to explore subsets of the instance space to detect performance discrimination among solvers. In this talk, I will present AutoIG, an automated tool for generating benchmark instances for constraint solvers. AutoIG supports generating two types of instances: graded instances (i.e., solvable at a certain difficulty level by a solver) and discriminating instances (i.e., favouring one solver over another). The usefulness of the tool for benchmarking is demonstrated via an application to five problems taken from the MiniZinc Challenges. Our experiments show that the large number of instances found by AutoIG can provide more detailed insights into the performance of the solvers than just a ranking. Cases where a solver is weak or even faulty can be detected, providing valuable information to solver developers. Moreover, discriminating instances can reveal parts of the instance space where a generally weak solver actually performs well relative to others, and therefore could be useful as part of an algorithm portfolio.

14:00-15:30 Session 14K (PC)
Location: Ullmann 309
14:00
Towards higher order proof complexity
PRESENTER: Anupam Das

ABSTRACT. A nonuniform version of elementary computation is naturally given by 'higher-order' Boolean logic, an extension of propositional logic by abstraction and application operations at all finite types. The corresponding Frege-style proof system is nothing more than Church's simple type theory (STT), restricted to Boolean ground type. Our starting point is the observation that this system in fact corresponds, in the usual sense of proof complexity, to the arithmetic theory IDelta0 + exp, a well-known theory of elementary computation.

In this work-in-progress, we explore the gap between traditional bounded arithmetic theories in proof complexity (typically below exponential time) and weak theories of arithmetic in proof theory (typically above elementary computation). The main idea is to formulate higher-order versions of Buss' classical theories that climb up the elementary hierarchy in step with analogous restrictions on cuts in STT. The goal is to establish modular characterisations for the stages of the elementary hierarchy, and also for intermediate alternating time hierarchies, by way of (higher-type) quantifier complexity.

14:30
Proof complexity of CSP

ABSTRACT. CSP (constraint satisfaction problem) is the class of problems of deciding whether there exists a homomorphism from an instance relational structure to a target one. The CSP dichotomy is a profound result recently proved by Zhuk and Bulatov. It says that for each fixed target structure, CSP is either NP-complete or p-time. Zhuk's algorithm solves CSP in polynomial time for constraint languages having a weak near-unanimity polymorphism.

For negative instances of p-time CSPs, it is reasonable to explore their proof complexity. We show that the soundness of Zhuk's algorithm can be proved in bounded arithmetic, namely in the theory V^1 augmented by three special universal algebra axioms.

15:00
Proof complexity of natural formulas via communication arguments
PRESENTER: Dmitry Itsykson

ABSTRACT. A canonical communication problem $\mathrm{Search}_{\phi}$ is defined for every unsatisfiable CNF $\phi$: an assignment to the variables of $\phi$ is distributed among the communicating parties, they are to find a clause of $\phi$ falsified by this assignment. Lower bounds on the randomized $k$-party communication complexity of $\mathrm{Search}_{\phi}$ in the number-on-forehead (NOF) model imply tree-size lower bounds, rank lower bounds, and size-space tradeoffs for the formula $\phi$ in the semantic proof system $\mathrm{T}^{cc}(k,c)$ that operates with proof lines that can be computed by a $k$-party randomized communication protocol using at most $c$ bits of communication [Göös, Pitassi, 2014]. All known lower bounds on $\mathrm{Search}_{\phi}$ (e.g. [Beame, Pitassi, Segerlind, 2007]; [Göös, Pitassi, 2014]; [Impagliazzo, Pitassi, Urquhart, 1994]) are realized on ad-hoc formulas $\phi$ (i.e. they were introduced specifically for these lower bounds). We introduce a new communication complexity approach that allows establishing proof complexity lower bounds for natural formulas.

First, we demonstrate our approach for two-party communication and apply it to the proof system $\mathrm{Res}(\oplus)$ that operates with disjunctions of linear equalities over $\mathbb{F}_2$ [Itsykson, Sokolov, 2014]. Let a formula $\mathrm{PM}_G$ encode that a graph $G$ has a perfect matching. If $G$ has an odd number of vertices, then $\mathrm{PM}_{G}$ has a tree-like $\mathrm{Res}(\oplus)$-refutation of polynomial size [Itsykson, Sokolov, 2014]. It was unknown whether this is the case for graphs with an even number of vertices. Using our approach we resolve this question and show a lower bound $2^{\Omega(n)}$ on the size of tree-like $\mathrm{Res}(\oplus)$-refutations of $\mathrm{PM}_{K_{n+2,n}}$.

Then we apply our approach to $k$-party communication complexity in the NOF model and obtain an $\Omega\left(\frac{1}{k} 2^{n/2k - 3k/2}\right)$ lower bound on the randomized $k$-party communication complexity of $\mathrm{Search}_{\mathrm{BPHP}^{M}_{2^n}}$ w.r.t. a natural partition of the variables, where $\mathrm{BPHP}^{M}_{2^n}$ is the bit pigeonhole principle and $M=2^n+2^{n(1-1/k)}$. In particular, our result implies that the bit pigeonhole principle requires exponential tree-like $\mathrm{Th}(k)$ proofs, where $\mathrm{Th}(k)$ is the semantic proof system operating with polynomial inequalities of degree at most $k$ and $k = O(\log^{1-\epsilon} n)$ for some $\epsilon > 0$. We also show that $\mathrm{BPHP}^{2^n+1}_{2^n}$ superpolynomially separates tree-like $\mathrm{Th}(\log^{1-\epsilon} m)$ from tree-like $\mathrm{Th}(\log m)$, where $m$ is the number of variables in the refuted formula.

The talk is based on the paper published in Proceedings of CCC-2021.

14:00-15:30 Session 14L (PCCR)
Location: Ullmann 203
14:00
Resolving Inconsistency of Linear Equations

ABSTRACT. Checking whether a system of linear equations is consistent is a basic computational problem with ubiquitous applications. When dealing with inconsistent systems, one may seek an assignment that minimizes the number of unsatisfied equations. This problem is NP-hard and UGC-hard to approximate within any constant, even for two-variable equations over the two-element field. We study this problem from the point of view of parameterized complexity, with the parameter being the number of unsatisfied equations. The equations are defined over Euclidean domains -- an abstract algebraic structure generalizing finite and infinite fields, including the rationals, the ring of integers, and many other structures. We find that the problem is W[1]-hard when three or more variables are allowed in an equation. The remaining case with two-variable equations is more interesting, since it generalizes many eminent graph separation problems like Bipartization, Multiway Cut and Multicut parameterized by the solution size. We provide a fixed-parameter tractable algorithm for this case. On the technical side, we introduce the notion of important balanced subgraphs and an FPT algorithm for their enumeration. The method is used as a more efficient alternative to the sampling of important separators [Marx and Razgon, SICOMP'14]. Further, we use recent results on parameterized MinCSP [Kim et al., SODA'21] to solve a generalization of Multicut with disjunctive cut requests.

The talk is based on joint (unpublished) work with Konrad K. Dabrowski, Peter Jonsson, Sebastian Ordyniak, and Magnus Wahlström.

14:30
Towards Tractable QSAT for Structural Parameters Below Treewidth

ABSTRACT. Quantified Boolean Formulas (QBFs) are propositional formulas with universal and existential quantification over truth values. The satisfiability problem of QBFs (QSAT) is PSPACE-complete and lends itself to natural encodings for a range of problems in AI and formal verification. QSAT is well known to be FPT parameterized by the number of quantifier alternations and treewidth of the input formula, but PSPACE-hard parameterized by treewidth alone. This matches an intuition that decomposition techniques for SAT can typically be adapted when quantifiers in the prefix are ordered the right way, but break down once the ordering is permuted. In hopes of obtaining new insights into the compositional structure of QSAT, we consider more restrictive parameters than treewidth, and present preliminary results.

15:00
Learning Small Decision Trees

ABSTRACT. Decision trees (DTs) have become an invaluable tool for providing interpretable models of data in various areas of computer science. Probably the most fundamental computational task in the context of DTs is to learn DTs from data. Here one usually aims at finding small trees, since those are easier to interpret and require fewer decisions. Although learning a smallest DT is known to be NP-hard, the problem's behavior under natural restrictions on the data is widely open. To improve our understanding, we provide the first parameterized complexity analysis of the problem.

Our starting point are hardness results which show that the problem is not fixed-parameter tractable when parameterized by solution size alone (i.e., the size or depth of the obtained DT). We then identify natural additional and necessary restrictions (modelled by parameters) to achieve fixed-parameter tractability. Our results provide a comprehensive complexity map for the considered parameters, exhibiting the significance of each parameter. The fixed-parameter tractability result is based on a new algorithmic technique that is of independent interest. In particular, we show that learning DTs is fixed-parameter tractable parameterized by size (or depth) and an additional parameter that can be shown to behave well in practice. We complement our algorithmic results with lower bounds that allow us to arrive at a comprehensive complexity map w.r.t. all considered parameterizations.

14:00-14:50 Session 14N (VardiFest)
Location: Taub 1
14:00
The Joy of Automata

ABSTRACT. I was introduced to the elegance of automata-theoretic constructions through Prof. Vardi's seminal work on automata over infinite words and infinite trees and their role in decision procedures for temporal logics. In this talk, I will describe ongoing work on developing a theory of regular languages over series-parallel graphs motivated by distributed data stream applications. These graphs, besides sequential composition, allow both ordered ranked parallelism and unordered unranked parallelism. The latter feature means that in the corresponding automata, the transition function needs to account for an arbitrary number of predecessors by counting each type of state only up to a specified constant, thus leading to a notion of counting complexity that is distinct from the classical notion of state complexity. The resulting theory turns out to be robust: we show that both deterministic and non-deterministic automata have the same expressive power, which can be equivalently characterized by Monadic Second-Order logic as well as the graded mu-calculus, and questions such as membership, emptiness, and equivalence are all decidable.

14:00-15:30 Session 14P: Talks (2) (WPTE)
Location: Ullmann 305
14:00
On Reducing Non-Occurrence of Specified Runtime Errors to All-Path Reachability Problems of Constrained Rewriting
PRESENTER: Naoki Nishida

ABSTRACT. A concurrent program with semaphore-based exclusive control can be transformed into a logically constrained term rewrite system that is computationally equivalent to the program. In this paper, we first propose a framework to reduce the non-occurrence of a specified runtime error in the program to an all-path reachability problem of the transformed logically constrained term rewrite system. Here, an all-path reachability problem is a pair of state sets and is demonically valid if every finite execution path starting with a state in the first set and ending with a terminating state includes a state in the second set. Then, as a case study, we show how to apply the framework to the race-freeness of semaphore-based exclusive control in the program. Finally, we show a simplified variant of a proof system for all-path reachability problems.
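
A possible formalization of the demonic validity just described (notation ours, not the paper's):

```latex
% All-path reachability problem (S_1, S_2); demonic validity as described:
% every finite execution from S_1 ending in a terminating state visits S_2.
(S_1, S_2) \text{ is demonically valid} \iff
  \forall\, s_0 \to s_1 \to \cdots \to s_k \text{ with } s_0 \in S_1
  \text{ and } s_k \text{ terminating}:\ \exists\, i \le k .\ s_i \in S_2
```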

14:30
On Transforming Rewriting-Induction Proofs for Logical-Connective-Free Sequents into Cyclic Proofs
PRESENTER: Shujun Zhang

ABSTRACT. A GSC-terminating and orthogonal inductive definition set (IDS, for short) of first-order predicate logic can be transformed into a many-sorted term rewrite system (TRS, for short) such that a quantifier-free sequent is valid w.r.t. the IDS if and only if a term equation representing the sequent is an inductive theorem of the TRS. Under certain assumptions, it has been shown that a quantifier- and cut-free cyclic proof of a sequent can be transformed into a rewriting-induction (RI, for short) proof of the corresponding term equation. The RI proof can be transformed back into the cyclic proof, but it is not known how to transform arbitrary RI proofs into cyclic proofs. In this paper, we show that for a quantifier- and logical-connective-free sequent, if there exists an RI proof of the corresponding term equation, then there exists a cyclic proof of some sequent w.r.t. some IDS such that the cyclic proof ensures the validity of the initial sequent. To this end, we show a transformation of the RI proof into such a cyclic proof, a sequent, and an IDS.

15:00
On Transforming Imperative Programs into Logically Constrained Term Rewrite Systems via Injective Functions from Configurations to Terms
PRESENTER: Naoki Nishida

ABSTRACT. To transform an imperative program into a logically constrained term rewrite system (LCTRS, for short), previous work converts a statement list to rewrite rules in a stepwise manner, and proves correctness along such a conversion with respect to the big-step semantics of the program. On the other hand, the small-step semantics of a programming language comprises inference rules that define transition steps of configurations. Partial instances of such inference rules are almost the same as rewrite rules in the transformed LCTRS. In this paper, we aim at establishing a framework for plain definitions and correctness proofs of transformations from programs into LCTRSs. To this end, for the transformation in previous work, we show an injective function from configurations to terms, and reformulate the transformation by means of the injective function. The injective function maps a transition step to a reduction step, resulting in a plain correctness proof.

14:00-15:30 Session 14Q (WiL)
Location: Ullmann 307
14:00
A normalized edit distance on finite and infinite words

ABSTRACT. Robustness in formal verification refers to the question: given a system S satisfying a specification Phi, how likely is it that S will still satisfy Phi, even in the presence of errors disrupting its normal behavior? The formal definition typically assumes some error model and some distance function measuring the distance between words (describing computations) and languages. To define robustness for reactive systems, which are modeled by infinite words, a notion of distance between infinite words is required. A common distance between finite words w1 and w2 is the edit (Levenshtein) distance, which gives the minimal number of edit operations (substitute, delete or insert) required to transform w1 into w2. But how can it be extended to account for infinite words? Can we find a definition that is both intuitive and satisfies the requirements of a metric? Can it be computed, at least for ultimately periodic words? Efficiently? And what about computing the distance between languages of infinite words (say, represented by Büchi automata)? In this talk we will see that all the questions above can be answered positively. The distance function enabling this is termed ω-NED, and it is a natural extension of NED, the normalized edit distance between finite words. In the quest to answer these questions we stumbled upon the fact that while it was known that NED does not satisfy the triangle inequality for arbitrary weights, the question whether it satisfies the triangle inequality when weights are uniform was open. We closed this gap by proving that it does, and built on this to prove the above results. The talk is based on joint works with Joshua Grogin, Oded Margalit and Gera Weiss.
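
As background, here is the classic Levenshtein DP on finite words (our sketch). Note that the `naive_normalized` variant below is only a stand-in: NED as studied in the talk normalizes by the length of the edit path rather than by word length, so the precise definition differs.

```python
# Classic edit (Levenshtein) distance between finite words, via dynamic
# programming over prefixes (illustrative sketch, uniform weights).

def levenshtein(w1, w2):
    m, n = len(w1), len(w2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of w1[:i]
    for j in range(n + 1):
        d[0][j] = j  # insert all of w2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (w1[i - 1] != w2[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[m][n]

def naive_normalized(w1, w2):
    # Stand-in normalization only; NED normalizes over edit paths instead.
    return levenshtein(w1, w2) / max(len(w1), len(w2), 1)

print(levenshtein("kitten", "sitting"))        # 3
print(round(naive_normalized("kitten", "sitting"), 3))
```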

15:00
Automated Logic-Based Reasoning for Analyzing Prime Video Code

ABSTRACT. Developers working on Prime Video are constantly innovating, introducing new features, and improving the performance of the Prime Video application. Each modification of the Prime Video application code is required to pass a testing and code review process before it is delivered to customer devices. As the application evolves rapidly, manual reasoning about the code cannot be expected to keep up with the software complexity curve. This setting presents an opportunity for the application of automated logic-based approaches. In this abstract, we present techniques and tools we have used and developed in order to automatically reason about the Prime Video application code. We discuss a code review bot, which runs multiple static code analysis tools, and introduce a novel static analyzer for TypeScript, which enables reasoning about deep properties in a trustworthy manner, while providing transparent, actionable feedback to the developers.

14:00-15:30 Session 14R (XLoKR)

XLoKR Session 3

Location: Ullmann 202
14:00
On interactive explanations as reasoning

ABSTRACT. Recent work shows issues of consistency with explanations: methods generate local explanations that seem reasonable instance-wise but are inconsistent across instances. This suggests not only that instance-wise explanations can be unreliable, but mainly that, when interacting with a system via multiple inputs, a user may actually lose confidence in the system. To better analyse this issue, in this work we treat explanations as objects that can be subject to reasoning, and present a formal model of the interactive scenario between user and system via sequences of inputs and outputs. We argue that explanations can be thought of in terms of entailment, which, we argue, should be thought of as non-monotonic. This makes it possible: (1) to resolve some of the considered inconsistencies in explanations; and (2) to consider properties from the non-monotonic reasoning literature and discuss their desirability, gaining more insight into the interactive explanation scenario.

14:20
Should counterfactual explanations always be data instances?
PRESENTER: Francesca Toni

ABSTRACT. Counterfactual explanations (CEs) are an increasingly popular way of explaining machine learning classifiers. Predominantly, they amount to data instances pointing to potential changes to the inputs that would lead to alternative outputs. In this position paper we question the widespread assumption that CEs should always be data instances, and argue instead that in some cases they may be better understood in terms of special types of relations between input features and classification variables. We illustrate how a special type of these relations, amounting to critical influences, can characterise and guide the search for data instances deemed suitable as CEs. These relations also provide compact indications of which input features - rather than their specific values in data instances - have counterfactual value.

14:40
Dialogue-Based Explanations of Reasoning in Rule-based Systems
PRESENTER: Joe Collenette

ABSTRACT. The recent focus on explainable artificial intelligence has been driven by a perception that complex statistical models are opaque to users. Rule-based systems, in contrast, have often been presented as self-explanatory: all the system needs to do is provide a log of its reasoning process, and its operations are clear. We believe that such logs are often difficult for users to understand, in part because of their size and complexity. We propose dialogue as an explanatory mechanism for rule-based AI systems, allowing users and systems to co-create an explanation that focuses on the user’s particular interests or concerns. Our hypothesis is that when a system makes a deduction that was in some way unexpected by the user, locating the source of the disagreement or misunderstanding is best achieved through a collaborative dialogue process that allows the participants to gradually isolate the cause. We have implemented a system with this mechanism and performed a user evaluation showing that in many cases a dialogue is preferred to a reasoning log presented as a tree. These results provide further support for the hypothesis that dialogue could provide a good explanation mechanism for a rule-based AI system.

15:00
Clustering-Based Approaches for Symbolic Knowledge Extraction
PRESENTER: Roberta Calegari

ABSTRACT. Opaque models from the machine learning world are ever more exploited in the most diverse application areas. These models, acting as black boxes (BB) from the human perspective, cannot be entirely trusted in critical applications, unless there exists a method to extract symbolic, human-readable knowledge from them.

In this paper we analyse a recurrent design adopted by symbolic knowledge extractors for BB regressors---that is, the creation of rules associated with hypercubic input space regions. We argue that this kind of partitioning may lead to suboptimal solutions when the data set at hand is high-dimensional or does not satisfy symmetric constraints. We then propose a (deep) clustering-based approach to be performed before symbolic knowledge extraction to achieve better performance with data sets of any kind.
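
To illustrate the shape of such a pipeline, here is a minimal Python sketch (invented stand-in data and black box; not the authors' method): cluster inputs together with outputs first, then read one interval rule off each cluster.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    bb = lambda X: np.sin(X[:, 0]) + X[:, 1] ** 2   # stand-in black-box regressor

    X = rng.uniform(-2, 2, size=(500, 2))
    y = bb(X)
    # cluster over inputs and outputs jointly, then extract one rule per cluster
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
        np.column_stack([X, y]))

    for c in range(4):
        pts, out = X[labels == c], y[labels == c]
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        print(f"IF x in [{lo.round(2)}, {hi.round(2)}] THEN y ~= {out.mean():.2f}")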

14:30-15:30 Session 15A (DECFOML)
Location: Ullmann 104
14:30
From Modal Logic to Fluted logic

ABSTRACT. The system of modal logic K is included—under the standard translation—in three major fragments of first-order logic for which satisfiability is decidable: the two-variable fragment, the guarded fragment, and the fluted fragment. But while the first two of these have been the focus of considerable attention in recent decades, the fluted fragment has, by contrast—and notwithstanding the promise it holds for equipping modal logic with additional quantification—suffered from relative neglect. In this talk, I shall present some recent results on the fluted fragment, focusing especially on its extension with counting quantifiers and transitive relations.

14:30-15:30 Session 15B (TLLA-LINEARITY)
Location: Ullmann 302
14:30
Recursive Session Logical Relations

ABSTRACT. Program equivalence is the fulcrum for reasoning about and proving properties of programs. For noninterference, for example, program equivalence up to the secrecy level of an observer is shown. Logical relations are a powerful enabler for such proofs. They were only recently adopted for session types — but exclusively for terminating languages. In this talk, I develop a logical relation for an intuitionistic linear logic session type language with general recursive types and instantiate it to prove progress-sensitive noninterference. Concurrency and non-termination pose several challenges, and I explain how the logical relation addresses them. A distinguishing feature of the development is the choice of an observational step index, which ensures compositionality and proofs of transitivity and soundness.

14:50-15:30 Session 17 (VardiFest)
Location: Taub 1
14:50
Automata-Based Quantitative Reasoning

ABSTRACT. Quantitative reasoning is often treated as synonymous with numerical methods: the techniques that come to mind for problems involving quantitative properties are linear programming, optimization, and similar methods. In this talk, I will introduce a novel framework, called comparators, for quantitative reasoning that is purely based on automata methods as opposed to numerical methods, and discuss their pros and cons.

15:00
LTLf Synthesis Under Environment Specifications

ABSTRACT. In synthesis, environment specifications are constraints on the environments that rule out certain environment behaviors. To solve synthesis of LTLf goals under environment assumptions, we could reduce the problem to LTL synthesis. Unfortunately, while synthesis in LTLf and in LTL have the same worst-case complexity (both are 2EXPTIME-complete), the algorithms available for LTL synthesis perform much worse in practice than those for LTLf synthesis. Recently, it has been shown that when the environment specifications are in the form of fairness or stability conditions, or GR(1) formulas, we can avoid such a detour to LTL and keep the simplicity of LTLf synthesis. Furthermore, even when the environment specifications are specified in full LTL we can partially avoid this detour.

In this talk, I review how to effectively tackle the synthesis problem for reachability and safety tasks under reachability and safety environment specifications by using synthesis techniques based on finite-state automata, thus exploiting the intrinsic simplicity of LTLf.
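
As a taste of the automata-based techniques involved, here is a minimal Python sketch (an illustration on a made-up game graph, not the speaker's tools) of the backward attractor computation used to solve the reachability game obtained after compiling an LTLf goal into a DFA.

    # Nodes where the system moves choose one successor (existential); at the
    # remaining nodes the environment moves (universal). All names are invented.
    edges = {"s0": ["e1", "e2"], "e1": ["goal", "s1"], "e2": ["s0"],
             "s1": ["goal"], "goal": []}
    sys_nodes = {"s0", "s1"}

    def attractor(target):
        win = set(target)
        changed = True
        while changed:
            changed = False
            for v, succs in edges.items():
                if v in win or not succs:
                    continue
                good = (any(s in win for s in succs) if v in sys_nodes
                        else all(s in win for s in succs))
                if good:
                    win.add(v)
                    changed = True
        return win

    print(attractor({"goal"}))  # here every node ends up winning for the system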

15:10
Boolean Synthesis via Decision Diagrams

ABSTRACT. Boolean synthesis is the process of synthesizing a multiple-output Boolean function from a declarative specification describing a relation between its inputs and outputs. Motivated by applications in diverse areas such as digital-circuit design, QBF certification and reactive synthesis, a number of different approaches have been developed to tackle this problem. This talk intends to give a brief overview of the ongoing development of Boolean-synthesis algorithms based on decision diagrams. This approach to Boolean synthesis is intrinsically connected to its application as a component of reactive synthesis, since decision diagrams are common in a number of successful reactive-synthesis algorithms. We will contrast the original algorithm based on Binary Decision Diagrams (BDDs) with a newly-introduced version making use of a symbolic CNF representation using Zero-Suppressed Decision Diagrams (ZDDs), and show how this alternative representation can mitigate some of the scalability limitations of BDDs. In the course of the talk we will touch on some of the algorithmic insights that went into the design of such techniques.
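
To fix intuitions, here is a tiny explicit-state Python sketch of the Boolean synthesis problem itself (an illustration only; the talk's algorithms work symbolically on decision diagrams rather than on truth tables): given a relational specification R(x, y), produce a function that picks a witness output for every input that has one.

    from itertools import product

    def synthesize(R, n_in):
        # enumerate the truth table and pick one witness output per input
        table = {}
        for x in product([0, 1], repeat=n_in):
            ys = [y for y in (0, 1) if R(x, y)]
            if ys:
                table[x] = ys[0]
        return table

    R = lambda x, y: (x[0] ^ x[1]) == y   # spec: output must equal x0 XOR x1
    print(synthesize(R, 2))  # {(0,0): 0, (0,1): 1, (1,0): 1, (1,1): 0}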

15:20
Strategy synthesis for Global Window PCTL
PRESENTER: Shibashis Guha

ABSTRACT. Given a Markov decision process (MDP) $M$ and a formula $\Phi$, the strategy synthesis problem asks if there exists a strategy $\sigma$ s.t. the resulting Markov chain $M[\sigma]$ satisfies $\Phi$. This problem is known to be undecidable for the probabilistic temporal logic PCTL. We study a class of formulae that can be seen as a fragment of PCTL where a local, bounded horizon property is enforced all along an execution. Moreover, we allow for linear expressions in the probabilistic inequalities. This logic is at the frontier of decidability, depending on the type of strategies considered. In particular, strategy synthesis is decidable when strategies are deterministic while the general problem is undecidable.
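
For a concrete feel for the properties involved, here is a toy Python sketch (an illustration on a made-up MDP, not from the paper): fix a memoryless deterministic strategy and check a bounded-horizon "window" reachability condition at every state.

    mdp = {  # state -> action -> list of (probability, successor)
        "s": {"a": [(0.5, "g"), (0.5, "s")], "b": [(1.0, "t")]},
        "t": {"a": [(1.0, "s")]},
        "g": {"a": [(1.0, "g")]},
    }
    sigma = {"s": "a", "t": "a", "g": "a"}  # a memoryless deterministic strategy
    goal, h, p = "g", 3, 0.7

    def bounded_reach(state, steps):
        # probability of reaching `goal` within `steps` steps under sigma
        if state == goal:
            return 1.0
        if steps == 0:
            return 0.0
        return sum(pr * bounded_reach(nxt, steps - 1)
                   for pr, nxt in mdp[state][sigma[state]])

    # the local, bounded-horizon property must hold at every state
    print(all(bounded_reach(q, h) >= p for q in mdp))  # True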

15:00-16:30 Session 18 (ASL)

Heaps and Concurrency

Location: (virtual only)
15:00
Proving Logical Atomicity using Lock Invariants
PRESENTER: Shengyi Wang

ABSTRACT. Logical atomicity has been widely accepted as a specification format for data structures in concurrent separation logic. While both lock-free and lock-based data structures have been verified against logically atomic specifications, most of the latter start with atomic specifications for the locks as well. In this paper, we compare this approach with one based on older lock-invariant-based specifications for locks. We show that we can still prove logically atomic specifications for data structures with fine-grained locking using these older specs, but the proofs are significantly more complicated than those that use atomic lock specifications. Our proof technique is implemented in the Verified Software Toolchain, which relies on older lock specifications for its soundness proof, and applied to C implementations of lock-based concurrent data structures.

15:30
Deciding Separation Logic with Heap Lists

ABSTRACT. The heap list is a data structure used by many implementations of memory allocators to organize the heap part of the program’s memory. Operations on heap lists employ pointer arithmetic; therefore, reasoning about the correctness of heap list implementations requires dealing simultaneously with pointer arithmetic inside the inductive definition specifying the heap list. In this work, we investigate decision problems for SLAH, a separation logic fragment that allows pointer arithmetic and an inductive definition for heap lists, thus enabling specification of properties for programs in memory allocators. Pointer arithmetic inside the inductive definition for heap lists is challenging for automated reasoning. We tackle this challenge and achieve decision procedures for both satisfiability and entailment of SLAH formulas. The crux of our decision procedure for satisfiability is to compute summaries of the inductive definition for heap lists. We show that although a summary is naturally expressed as an existentially quantified non-linear arithmetic formula, it can actually be transformed into an equivalent linear arithmetic formula. The decision procedure for entailment, on the other hand, has to match and split the spatial atoms according to the arithmetic relations between address variables. We report on the implementation of these decision procedures and their good performance in solving problems arising from the verification of building-block programs that manipulate the heap lists used in memory allocators.
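
The flavor of the reduction can be illustrated with a hypothetical, drastically simplified summary in Z3's Python API (the actual SLAH summaries are more intricate; the predicate and bound below are invented for illustration):

    from z3 import Ints, Or, Solver

    x, y = Ints("x y")
    # invented linear summary of a heap-list segment from x to y in which
    # every block occupies at least 2 cells (header + payload)
    hls_summary = Or(y == x, y >= x + 2)

    s = Solver()
    s.add(hls_summary, y == x + 1)  # a one-cell "segment" should be impossible
    print(s.check())                # unsat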

16:00
Polishing a Rough Diamond: An Enhanced Separation Logic for Heap Space under Garbage Collection
PRESENTER: Alexandre Moine

ABSTRACT. We present a program logic for reasoning about heap space in a λ-calculus equipped with garbage collection and mutable state. Compared with prior work by Madiot and Pottier, we target a higher-level language. This requires us to introduce an assertion to reason about the roots held by the evaluation context.

15:30-16:00Coffee Break
16:00-17:30 Session 19A: Modelling and Applications (ASPOCP)

Modelling and Applications

Location: Ullmann 311
16:00
Computing H-Partitions in ASP and Datalog
PRESENTER: Nicolas Lecomte

ABSTRACT. An H-partition of a finite undirected simple graph G is a labeling of G's vertices such that the constraints expressed by the model graph H are satisfied. For every model graph H, it can be decided in non-deterministic polynomial time whether a given input graph G admits an H-partition. Moreover, it has been shown by Dantas et al. that for most model graphs, this decision problem is in deterministic polynomial time. In this paper, we show that these polynomial-time algorithms for finding H-partitions can be expressed in Datalog with stratified negation. Moreover, using the answer set solver Clingo, we have conducted experiments to compare straightforward guess-and-check programs with Datalog programs. Our experiments indicate that in Clingo, guess-and-check programs run faster than their equivalent Datalog programs.
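
A minimal guess-and-check sketch in the clingo Python API (an illustration with a made-up two-label model graph H that forbids edges across the two parts; not the paper's encodings) looks as follows:

    import clingo

    program = """
    vertex(1..4). edge(1,2). edge(3,4).
    1 { label(V,a) ; label(V,b) } 1 :- vertex(V).   % guess one label per vertex
    :- edge(U,V), label(U,a), label(V,b).           % check: no edges across parts
    :- edge(U,V), label(U,b), label(V,a).
    has(a) :- label(_,a).                           % check: both parts nonempty
    has(b) :- label(_,b).
    :- not has(a).
    :- not has(b).
    """

    ctl = clingo.Control(["0"])        # enumerate all answer sets
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    ctl.solve(on_model=print)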

16:20
Translating Definitions into the Language of Logic Programming: A Case Study

ABSTRACT. In the process of creating a declarative program, the programmer transforms a problem specification expressed in a natural language into an executable specification. We study the case when the given specification is expressed by a mathematically precise definition, and the goal is to write a program for an answer set solver.

16:40
A normative model of explanation for binary classification legal AI and its implementation on causal explanations of Answer Set Programming

ABSTRACT. In this paper, I provide a normative model of explanation for the output of AI algorithms used in legal practice. I focus on binary classification algorithms due to their extensive use in the field. In the last part of the paper, I examine the model’s compatibility with causal explanations provided by Answer Set Programming (ASP) causal models.

The motivation for proposing this model is the necessity of providing explanations for the output of legal AI. Among the many arguments supporting that necessity, the proposed model addresses the argument that legal AI's output should be objectionable. That can be achieved only if the explanation of the output has a form that makes it amenable to evaluation by legal practitioners. Hence, I first provide a normative model for the explanations used by legal practitioners in their practice (CMS_LP), and then I provide the normative model for the explanations of legal AI's outputs (EXP_BC), which I base on CMS_LP. CMS_LP is in turn based on the Classical Model of Science (CMS), the normative model of explanation that every “proper” science should follow according to philosophers throughout history.

Following the introduction of EXP_BC, I propose three degrees of explainability for binary classification explanations, according to their fidelity to EXP_BC. I further argue that machine learning cannot satisfy even the lowest degree of explainability, while rule-based AI - like ASP-based AI - can satisfy the highest degree. Concluding, I propose an in-progress ASP methodology that can use EXP_BC to provide causal explanations. In the proposed ASP methodology, I use causal graphs as models of causal inference as well as a metaphysically neutral interventionist account of causation.

CMS_LP and EXP_BC are normative models that are based on the derivation relation of subsumptive-deductive inference among norms and propositions. On the other hand, the proposed ASP methodology is based on causal - and hence non-subsumptive-deductive - relations among norms and propositions that supervene on the subsumptive-deductive ones. Consequently, the motivation behind proposing this specific ASP methodology is to establish a precedent for a unification of different types of explanations (e.g., deductive and causal explanations) as well as to bridge the gaps among computational modelling, actual practice, and the philosophical underpinnings of a domain of expertise (law in this case).

17:00
Assumable Answer Set Programming

ABSTRACT. To model assumption-based intelligent agents, which make assumptions and use them to construct their belief sets, this paper proposes a logic programming language, AASP (Assumable Answer Set Programming), extending ASP (Answer Set Programming). Rational principles of assumption-based reasoning are given to define the semantics of AASP programs. The relation of AASP to some existing default formalisms implies that AASP can be used to model and solve some interesting problems with incomplete information in the framework of assumption-based reasoning.

16:00-17:30 Session 19B (CHANGE)

This is a KR-2022 tutorial on Sunday July 31st: two 1.5h sessions in the afternoon with a break: 14:00-15:30 and 16:00-17:30

Location: Ullmann 200
16:00
Tutorial: Hybrid Temporal Situation Calculus for Planning with Continuous Processes: Semantics

ABSTRACT. This is the 2nd part of the KR-2022 tutorial on July 31st: the 1st part is from 14:00-15:30. See details at  https://www.cs.ryerson.ca/~mes/HTSC-Tutorial-KR2022/

16:00-16:30 Session 19C (DECFOML)
Location: Ullmann 104
16:00
Decidable Fragments of Term Modal Logic

ABSTRACT. Term Modal logic (TML) is closely related to First Order Modal Logic (FOML). TML allows modal operators to be indexed by terms which is suitable to model settings where the agent set is unbounded. A typical formula in TML is of the form \forall x \exists y~\Box_x P(x,y). Similar to FOML, TML is also highly undecidable. For instance just having propositions as atoms leads to undecidability. We will discuss the close correspondence between TML and FOML and give translations that help us to identify decidable fragments of TML. As a contrast between the two logics, we show that the two variable fragment (without constants, equality) of TML is decidable, whereas the two variable fragment of FOML is known to be undecidable. (Joint work with R. Ramanujam)

16:00-17:00 Session 19D (FoMLAS)

FoMLAS Session 4

Location: Ullmann 201
16:00
Counter-Example Guided Neural Network Compression Refinement (CEG4N)

ABSTRACT. Neural networks are essential components of learning-based software systems. However, their high compute, memory, and power requirements make using them in low resources domains challenging. For this reason, neural networks are often compressed before deployment. Existing compression techniques tend to degrade the network accuracy. We propose Counter-Example Guided Neural Network Compression Refinement (CEG4N). This technique combines search-based quantization and equivalence verification: the former minimizes the computational requirements, while the latter guarantees that the network’s output does not change after compression. We evaluate CEG4N on a diverse set of benchmarks that include large and small networks. Our technique successfully compressed 80% of the networks in our evaluation while producing models with up to 72% better accuracy than state-of-the-art techniques.
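
The overall loop can be sketched as follows (a toy Python illustration with a linear "network", uniform quantization, and a random-sampling stand-in for the equivalence verifier; the refinement step is simplified to increasing precision, and none of this is the CEG4N implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 8))        # weights of a toy linear "network"

    def net(x, w):
        return w @ x

    def quantize(w, bits):             # uniform quantization to 2**bits levels
        scale = np.abs(w).max() / (2 ** (bits - 1))
        return np.round(w / scale) * scale

    def find_counterexample(w, wq, eps, tries=10000):
        # stand-in "verifier": random search for an input where the nets disagree
        for _ in range(tries):
            x = rng.uniform(-1, 1, size=8)
            if np.max(np.abs(net(x, w) - net(x, wq))) > eps:
                return x
        return None                    # no disagreement found ("equivalent")

    bits, eps = 2, 0.5
    while find_counterexample(W, quantize(W, bits), eps) is not None:
        bits += 1                      # counterexample found: refine compression
    print("bits needed:", bits)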

16:30
Goal-Aware RSS for Complex Scenarios via Program Logic

ABSTRACT. I'd like to present the content of the attached manuscript, currently in press for IEEE Transactions on Intelligent Vehicles.

The work establishes a formal logic foundation for RSS, a recent methodology for "theorem proving" the safety of automated driving systems. RSS was proposed by researchers at Intel/Mobileye and is attracting a lot of interest from industry and academia alike, but its formal aspects have been underdeveloped (although RSS is really about theorem proving). In the paper, we introduce a logic dFHL---it can be seen as an adaptation of Platzer's differential dynamic logic---and a workflow for deriving formally verified safety rules. This extension of RSS allows one to prove the safety of much more complicated scenarios than RSS could previously handle. The workflow is experimentally evaluated, too.

The work treats automated vehicles as black boxes, so that it accommodates ML components that cannot be logically modeled. I believe the work will be of interest to the audience 1) because of the methodology, in which black-box models are "verified" by imposing formally guaranteed rules/contracts, and 2) because automated driving is a hot application topic.

16:00-17:30 Session 19E (HoTT/UF)
Location: Ullmann 303
16:00
Internal languages of diagrams of ∞-toposes

ABSTRACT. Homotopy type theory (HoTT) is a handy language for reasoning about an ∞-topos. However, sometimes we have more than one ∞-topos, related to each other by some functors, in which case plain HoTT is not sufficiently rich because the actions of the functors are not internalized. In this talk, we consider a certain class of diagrams of ∞-toposes for which plain HoTT remains a sufficiently rich internal language. We show that a special form of inverse diagram of ∞-toposes is reconstructed internally to its oplax limit via lex modalities. Then plain HoTT, as an internal language of the oplax limit, can be used for reasoning about the original diagram.

17:00
An Experiment with 2LTT Features in Agda
16:00-17:30 Session 19F (IFIP-WG1.6)

Business Meeting (IFIP-WG 1.6)

Location: Ullmann 306
16:00-17:30 Session 19G: verification of black-box ML models (LMML)
Location: Ullmann 101
16:00
Verification of Binarized Neural Networks: Challenges and Opportunities

ABSTRACT. Analyzing the behavior of neural networks is one of the most pressing challenges in deep learning. Binarized Neural Networks are an important class of networks that allow equivalent representation in Boolean logic and can be analyzed formally with logic-based reasoning tools like SAT solvers. Such tools can be used to answer existential and probabilistic queries about the network, perform explanation generation, etc. However, the main bottleneck for all methods is their ability to reason about large BNNs efficiently. In this work, we analyze architectural design choices of BNNs and discuss how they affect the performance of logic-based reasoners. We discuss changes to the BNN architecture and the training procedure to get a simpler network for SAT solvers without sacrificing accuracy on the primary task. We will also discuss new results on Binarized Neural Networks and their training procedures.
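
To see why BNNs admit a Boolean encoding, consider a toy neuron sketched with the PySAT library (an invented three-input example, not from the talk): with ±1 inputs and +1 weights, the neuron fires iff at least two inputs are +1, a cardinality constraint that SAT solvers handle directly.

    from pysat.card import CardEnc, EncType
    from pysat.solvers import Solver

    # variables 1..3 mean "input i is +1"; assert the firing condition as CNF
    fires = CardEnc.atleast(lits=[1, 2, 3], bound=2, encoding=EncType.seqcounter)

    with Solver(bootstrap_with=fires.clauses) as s:
        # Can the neuron fire while input 1 is -1? (yes: inputs 2 and 3 are +1)
        print(s.solve(assumptions=[-1]))       # True
        # Can it fire while inputs 1 and 2 are both -1? (no)
        print(s.solve(assumptions=[-1, -2]))   # False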

16:45
Verification of Realistic Neural Networks

ABSTRACT. After the discovery of adversarial examples, provable robustness guarantees for neural networks have received increasing attention from the research community. While these efforts have been successful for relatively small or heavily regularized models with limited standard accuracy, more precise models have proven much harder to analyze. Recently two paradigms have emerged to tackle this challenge: while Branch-and-Bound-based approaches break down a hard verification problem into increasingly many easier ones until they can be solved, Multi-Neuron constraints allow the verification of individual harder problems. In this talk, we discuss these two paradigms and how they can be combined to obtain a novel and highly effective verification framework.

Bio: Mark N. Müller is a Ph.D. student in the Secure, Reliable, and Intelligent Systems Lab at ETH Zurich, advised by Prof. Martin Vechev. His research focuses on different approaches to obtaining accurate models with formal robustness guarantees.

16:00-17:30 Session 19H (LogTeach)
Location: Ullmann 301
16:00
Logic for CS Undergraduates: A Sketch

ABSTRACT. At the undergraduate level, there are elementary approaches to logic that are neither philosophically demanding nor inapplicable in a computational context. I illustrate this with a sketch of how logic can be applied to computation and appreciated in hopefully practical ways. The approach depends on the utility of distinguishing formal and mathematical entities from the empirical ones perceived in nature and society.

16:30
Logic for Students of Modern Artificial Intelligence

ABSTRACT. “Modern Artificial Intelligence” is a branch of Computer Science which triggered a hype in CS research and education and which is associated with keywords such as big data and deep learning. It is worth noting that “traditional Artificial Intelligence” is - with the exception of neural networks - conceptually rooted in Logic: Expert Systems, Logic Programming, SAT solving, etc. are topics which could not be taught without Logic as a fundamental discipline. One has to admit, however, that the foundational discipline for modern AI is Statistics rather than Logic. We will discuss three challenges for modern AI which give evidence that Logic remains an important topic for students in this area.

1. Notion of computability. Currently, there is no theory available that would allow one to reason about the scope and limits of the computational power of modern AI. Theorems like those of Church and Turing are fundamental for the understanding of deterministic computation, including the important area of computational complexity. Apparently, the statistical foundation is not suitable for providing such a theory for modern AI, and for any progress in this direction, Logic should be crucial.

2. Hybrid AI. Despite all the success of modern AI, as in image recognition, translation, medical applications, and other areas, a substantial extension of the field of applications will require combining it with classical AI, i.e., the possibility of resorting to logical reasoning.

3. Causality. The problem of causality—or rather, the lack of tools for the analysis of causality—is widely discussed in modern AI. Although even Logic still struggles with a compelling theory of causality, it is evident that any future solution will involve logical components, even if only for cross-checking.

Thus, a CS curriculum for AI students which neglects Logic risks disconnecting these students from developments essential for “next generation AI”.

16:00-17:30 Session 19I (PC)
Location: Ullmann 309
16:00
Strong proof systems based on algebraic circuits

ABSTRACT. Grochow and Pitassi (2018) introduced IPS, the strongest algebraic proof system based on algebraic circuits. This is, however, not the only algebraic proof system of this sort: circuits can be restricted and can be represented differently. Additionally, similar proof systems based on inequalities can be considered.

In this talk I survey such systems and relations between them and announce recent developments.

17:00
On combinatorial principles coming from semi-algebraic proof systems
16:00-17:00 Session 19J (PCCR)
Location: Ullmann 203
16:00
The Tournament Fixing Problem

ABSTRACT. A knockout tournament is a standard format of competition, ubiquitous in sports, elections, and decision-making. In this context, one central question, captured by the Tournament Fixing Problem, is whether the tournament can be conducted in a way that makes our favorite player win. In this talk, we will discuss the state-of-the-art, with emphasis on parameterized analysis, of the Tournament Fixing problem and its variants, as well as take a closer look at a few recent advances.
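
For intuition, the problem can be stated as a brute-force recursion (a Python illustration on an invented four-player instance; the talk is about parameterized algorithms, not this exponential search): a player can win a bracket on a set of players iff the set splits into two halves such that the player wins one half and beats some possible winner of the other.

    from functools import lru_cache
    from itertools import combinations

    players = range(4)
    # beats[(u, v)] == True means u beats v; players 0,1,2 form a cycle, all beat 3
    beats = {(0, 1): True, (1, 2): True, (2, 0): True,
             (0, 3): True, (1, 3): True, (2, 3): True}

    def wins(u, v):
        return beats.get((u, v), not beats.get((v, u), False))

    @lru_cache(maxsize=None)
    def can_win(v, bracket):          # can v win a knockout bracket on `bracket`?
        if len(bracket) == 1:
            return True
        half = len(bracket) // 2
        rest = set(bracket) - {v}
        for side in combinations(rest, half - 1):     # choose v's half of the draw
            a = frozenset(side) | {v}
            b = frozenset(rest - set(side))
            if can_win(v, a) and any(can_win(u, b) and wins(v, u) for u in b):
                return True
        return False

    # player 2 can be fixed to win; player 3 loses to everyone and cannot
    print(can_win(2, frozenset(players)), can_win(3, frozenset(players)))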

16:00-17:30 Session 19K (TLLA-LINEARITY)
Location: Ullmann 302
16:00
Copromotion and Taylor Approximation (Work in Progress)
PRESENTER: Jean-Simon Lemay

ABSTRACT. Quantitative models of Linear Logic, where non-linear proofs are interpreted by power series, hold a special place in the semantics of linear logic. They are at the heart of Differential Linear Logic and its application to probabilistic programming and new proof methods in lambda-calculus. In this abstract, we explore a refinement of DiLL with a copromotion rule, modelled by categories where all maps are quantitative.

16:30
Denotational semantics driven simplicial homology?

ABSTRACT. Linear logic’s coherent spaces are a simple kind of abstract simplicial complex (asc, for short). These combinatorially encode a topological space that is usually studied via (simplicial) homology. We can easily associate an asc with any formula of LL under some semantics, in such a way that proofs become the faces of the asc and this asc satisfactorily represents the “space of the (interpretations of the) proofs” of the formula. Starting from this remark, we discuss the natural question of defining a homology for those ascs that is invariant under type isomorphism.

17:00
Bottom-Up Sequentialization of Unit-Free MALL Proof Nets
PRESENTER: Rémi Di Guardia

ABSTRACT. We propose a new proof of sequentialization for the proof nets of unit-free multiplicative-additive linear logic of Hughes & Van Glabbeek. This is done by adapting a method from unit-free multiplicative linear logic, showing the robustness of this approach.

16:00-16:40 Session 19L (VardiFest)
Location: Taub 1
16:00
Logic and Languages for Representation Learning

ABSTRACT. Moshe has made key contributions in the areas of logic and formal methods, and more recently he has been one of the main voices publicly discussing the social impact of modern computer technology and AI. I propose to talk about two themes related to these. The first is recent work that highlights the role of formal languages and logic in representation learning. The second is my experience teaching an optional, upper-division course that I developed, called ``Social and technological change'', related to Moshe's concerns.

16:10
Natural Autoencoding
PRESENTER: Assaf Marron

ABSTRACT. The proposed theory of survival of the fitted [1] explains the evolution of species in the biosphere based on the sustainability of multi-scale interaction networks: interaction patterns are what sustains the network and its constituents. Winning struggles and having reproductive advantages, as associated with Darwinian and Neo-Darwinian survival of the fittest and Natural Selection, is secondary. We define a species as the collective of organisms that share a set of fundamental, core, sustaining interactions, which we term the species interaction code. Here (see also [2]), we show that interaction codes are formed, and subsequently evolve, in response to genetic, environmental and other changes, in a process that in some ways relates to the well-studied concepts of autoencoding and dimensionality reduction in machine learning. In a unique autoencoding process, which we term natural autoencoding, every biological process or interaction in the entire biosphere takes part in multiple functions: encoding existing organisms and interactions; decoding such codes into networks of sustained ecosystems; and reacting to innovation and change by adjusting species interaction codes. DNA is only one element in this mechanism. One result of this process is that species interaction codes both map and construct the species environment.

Natural autoencoding differs from typical artificial autoencoding in the following ways.

1. Sustainment-guided. Typical artificial autoencoders iteratively optimize a loss function: each input is encoded, its representation is then decoded, and the differences between inputs and corresponding reconstructed outputs are computed. In natural autoencoding, evolving species interaction codes stabilize around de-facto sustained interaction patterns. This is retrospective; there is no sustainability assessment of a present state. What works works.

2. Multi-species and multi-scale. Natural autoencoding concurrently generates many species interaction codes, at multiple scales/levels, including biochemical, cellular, and organism.

3. Recursive and reflective. The biosphere drives its own evolution: the input to the "natural autoencoder" is the entire biosphere, incorporating all available outputs of previous steps, and the autoencoder itself.

Our team is now designing computer-based autoencoders to model natural autoencoding.

Principles associated with Natural Selection and survival of the fittest have significantly affected human political, social, economical, and educational norms. Survival-of-the-fitted by natural autoencoding explains why a habitable environment requires diverse networked interactions. This perspective can help drive much needed changes in human behavior, both environmental and social, including the kind of social responsibility advocated by Moshe Vardi [3].

Acknowledgement: We thank D. Harel, S. Szekely and G. Frankel for valuable contributions.

16:20
From Logic to Neurosymbolic AI

ABSTRACT. I first came across Moshe Vardi's work in the 1990s, during my PhD studies at Imperial College, London. One of his papers in particular, on why modal logic is robustly decidable, clearly called my attention, as he neatly explained not only the paper's main topic but also handed the reader an understanding of model checking and validity \cite{Vardi96}. This paper inspired our work on Connectionist Modal Logics years later \cite{DAVILAGARCEZ200734} and our approach to Neurosymbolic AI in general \cite{3rdwave}. Several years later, we invited him to be a keynote speaker at the Brazilian National Conference on Computer Science. Since then, we have become friends and met several times. Our recent collaboration aims at better understanding and explaining machine learning \cite{Tavares2020}, showing that neural learning and logic can lead to richer neurosymbolic AI models, and at integrating machine learning and reasoning toward more robust AI \cite{Lamb2020,PratesALLV19}. In this talk, I will highlight Vardi's contribution to this endeavour and the inspiration that I have drawn from his exquisite yet humongous contribution to computer science, logic, and artificial intelligence.

16:30
An Epistemic Logic for modelling Cooperative Agents

ABSTRACT. We intend to present our line of work based on epistemic logic, ongoing for some years, in which we cope with group dynamics in multi-agent settings. We have proposed, and extended over time, the epistemic logic \textit{L-DINF}, which allows one to formalize such dynamics, with special attention to explainability, modularity, and the ability to model cognitive aspects related to agents' operation within the application at hand.

16:00-17:30 Session 19M: Talks (3) (WPTE)
Location: Ullmann 305
16:00
Towards Corecursion Without Corecursion in Coq
PRESENTER: Vlad Rusu

ABSTRACT. Coinduction is an important concept in functional programming. To formally prove properties of corecursive functions one can try to define them in a proof assistant such as Coq. But there are limitations on the functions that can be defined. In particular, corecursive calls must occur directly under a call to a constructor, without any calls to other recursive functions in between. In this paper we show how a partially ordered set endowed with a notion of approximation can be organized as a Complete Partial Order. This makes it possible to define corecursive functions without using Coq's corecursion, as the unique solution of a fixpoint equation, thereby escaping Coq's built-in limitations.

16:30
Towards a Refactoring Tool for Dependently-Typed Programs (work in progress)

ABSTRACT. While there is considerable work on refactoring functional programs, so far this has not extended to dependently-typed programs. In this paper, we begin to explore this space by looking at a range of transformations related to indexed data and functions.

16:45
Refactoring Steps for P4 (work in progress)
PRESENTER: Máté Tejfel

ABSTRACT. P4 is a domain-specific programming language developed mainly to describe the data plane layer of packet-processing algorithms of different network devices (e.g., switches, routers). The language has special domain-specific constructs (e.g., match/action tables) that cannot be found in other languages, and as such there is no existing methodology yet for refactoring these constructs. The paper introduces the definition of some P4-specific refactoring steps. The proposed steps are implemented using P4Query, an analyzer framework dedicated to P4.

16:00-17:00 Session 19N (WiL)
Location: Ullmann 307
16:00
Fixed-Template Promise Model Checking Problems
PRESENTER: Silvia Butti

ABSTRACT. The fixed-template constraint satisfaction problem (CSP) can be seen as the problem of deciding whether a given primitive positive first-order sentence is true in a fixed structure (also called model). We study a class of problems that generalizes the CSP simultaneously in two directions: we fix a set L of quantifiers and Boolean connectives, and we specify two versions of each constraint, one strong and one weak. Given a sentence which only uses symbols from L, the task is to distinguish whether the sentence is true in the strong sense, or it is false even in the weak sense.

We classify the computational complexity of these problems for the existential positive equality-free fragment of first-order logic, and we prove some upper and lower bounds for the positive equality-free fragment. The partial results are sufficient, e.g., for all extensions of the latter fragment.

16:20
Two Dimensional Bounded Model Checking for Unbounded Client-Server Systems
PRESENTER: Tephilla Prince

ABSTRACT. Bounded model checking (BMC) is an efficient formal verification technique which allows desired properties of a software system to be checked on bounded runs of an abstract model of the system. The properties are frequently described in some temporal logic and the system is modeled as a state transition system. In this paper we propose a novel counting logic, L_C, to describe the temporal properties of client-server systems with an unbounded number of clients. We also propose a two-dimensional bounded model checking (2D-BMC) strategy that uses two distinct parameters, one for execution steps and another for the number of tokens in the net representing a client-server system; these two evolve separately, which distinguishes it from standard BMC techniques in the Petri net formalism. This 2D-BMC strategy is implemented in a tool called DCModelChecker which leverages the 2D-BMC technique with a state-of-the-art satisfiability modulo theories (SMT) solver, Z3. The system is given as a Petri net and properties specified using L_C are encoded into formulas that are checked by the solver. Our tool can also work on industrial benchmarks from the Model Checking Contest (MCC). We report on these experiments to illustrate the applicability of the 2D-BMC strategy.
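
The essence of BMC encodings of this kind can be sketched with Z3's Python API (a toy one-counter abstraction of a client-server system, invented for illustration; DCModelChecker's encodings of L_C are of course richer):

    from z3 import And, Int, Or, Solver

    K, CAP = 8, 3
    n = [Int(f"n_{i}") for i in range(K + 1)]  # connected clients at step i
    s = Solver()
    s.add(n[0] == 0)
    for i in range(K):  # each step, one client joins, or one leaves (if any)
        s.add(Or(n[i + 1] == n[i] + 1, And(n[i] > 0, n[i + 1] == n[i] - 1)))
    s.add(Or([n[i] > CAP for i in range(K + 1)]))  # violation at some step <= K
    print(s.check())  # sat: four joins in a row already exceed capacity 3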

16:40
A note on Turing 1936

ABSTRACT. Turing's historical 1936 article is the result of a special endeavor focused on the factuality of a general process for algorithmic computation. The Turing machine, as a universal feasibility test for computing procedures, is applied up to closely examining what are considered to be the limits of computation itself. In this regard, he claims to have defined a number which is not computable, arguing that there can be no machine computing the diagonal on the enumeration of the computable sequences. This article closely examines the original 1936 argument, displaying how it cannot be considered a demonstration, and that there is indeed no evidence of such a defined number that is not computable.

16:00-18:00 Session 19P (XLoKR)

XLoKR Session 4

Location: Ullmann 202
16:00
Invited Talk: Explanation Generation in Applications of Answer Set Programming

ABSTRACT. TBA

17:00
Explaining Soft-Goal Conflicts through Constraint Relaxations
PRESENTER: Rebecca Eifler

ABSTRACT. Recent work suggests to explain trade-offs between soft goals in terms of their conflicts, i.e., minimal unsolvable soft-goal subsets. But this does not explain the conflicts themselves: Why can a given set of soft goals not be jointly achieved? Here we approach that question in terms of the underlying constraints on plans in the task at hand, namely resource availability and time windows. In this context, a natural form of explanation for a soft-goal conflict is a minimal constraint relaxation under which the conflict disappears (“if the deadline was 1 hour later, it would work”). We explore algorithms for computing such explanations. A baseline is to simply loop over all relaxed tasks and compute the conflicts for each separately. We improve over this by two algorithms that leverage information -- conflicts, reachable states -- across relaxed tasks. We show that these algorithms can exponentially outperform the baseline in theory, and we run experiments confirming that advantage in practice.
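
The baseline algorithm is easy to picture with a toy model (a Python illustration in which soft goals simply compete for one resource budget; the paper's planning tasks are far richer): enumerate minimal unsolvable subsets, then search for the smallest relaxation that dissolves each.

    from itertools import combinations

    cost = {"g1": 3, "g2": 4, "g3": 5}   # invented soft-goal resource costs

    def solvable(goals, budget):
        return sum(cost[g] for g in goals) <= budget

    def conflicts(budget):
        # all unsolvable subsets, then keep only the minimal ones
        bad = [set(c) for r in range(1, len(cost) + 1)
               for c in combinations(cost, r) if not solvable(c, budget)]
        return [s for s in bad if not any(t < s for t in bad)]

    base = 7
    for conflict in conflicts(base):
        extra = next(d for d in range(20) if solvable(conflict, base + d))
        print(f"{sorted(conflict)} unsolvable; fixed by +{extra} resource units")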

17:20
Stepwise Explanations of Unsatisfiable Constraint Programs (Extended abstract)
PRESENTER: Ignace Bleukx

ABSTRACT. Unsatisfiable constraint satisfaction models (CSPs) are a common sight when formulating a constraint model. For the modeller, only a handful of tools are available to explain why a model is UNSAT, making it hard to understand the cause of the unsatisfiability and to debug the model. Existing tools rely on extracting a subset of constraints which is itself unsatisfiable (a MUS). Although some obvious bugs in the model can be detected easily using such techniques, in other cases the extracted MUS involves too many constraints to understand their dynamics, and the cause of unsatisfiability may not be made clear. For explaining solutions of Boolean satisfaction problems, recent years have seen the emergence of stepwise explanations. Such explanation sequences provide justifications for Boolean facts in small steps a user can understand. In this work, we aim to generate a sequence which derives the unsatisfiability of a constraint program. This requires several orthogonal extensions of previous work. In this paper, we extend the notion of explainable facts to integer domains; we introduce the concept of simple and non-simple constraints; and lastly, we propose an algorithm which can be used to derive the unsatisfiability of a constraint program.

16:45-17:05 Session 20 (VardiFest)
Location: Taub 1
16:45
On the Effectiveness of Logic in Algorithmic Graph Theory

ABSTRACT. Logic and graph theory form the basis of a fascinating interplay between combinatorial structures and their algorithmic properties. Some of the most important examples of this are relations between logic, hypergraphs and constraint satisfaction problems, graph width parameters, and connections with monadic second-order logic.

The interaction between logic and graphs has had an important impact on applications in computer science, from database query optimization to temporal reasoning and verification, as well as on identifying tractable cases of the relevant constraint satisfaction problems.

In this brief talk, we will explore a number of themes in which logic and algorithmic graph theory have influenced each other.

16:55
Computability and Complexity over Finite Unordered Structures; e.g., Graphs (1979-1982)

ABSTRACT. I will discuss the highlights of two papers written with the late Ashok Chandra between 1979 and 1982. The subject is computability and complexity on finite unordered structures (e.g., graphs). Despite the original papers being couched in the framework of relational databases, the ideas and results constitute a rather elegant generalization of Turing computability and classical complexity theory. They have also triggered a large amount of research in finite model theory and descriptive complexity, including seminal work of Moshe Vardi.

Among other things, I will describe a tantalizing problem we posed in the second paper, which is still unsolved after over 40 years, concerning a fundamental issue around PTIME.

17:00-17:30 Session 21A (ASL)

Report on SL-COMP

Chair:
Location: (virtual only)
17:00
Report on SL-COMP
17:10-18:00 Session 22 (VardiFest)

Invited Talk by Orna Kupferman

Location: Taub 1
17:10
On how the past illuminates the future

ABSTRACT. Nondeterminism allows us to guess the future. The talk surveys the interesting phenomenon of how remembering the past can help us make successful guesses (especially if we are automata, but not only).

17:30-18:30 Session 24A: CAUSAL and EELP (ASPOCP)

CAUSAL and EELP

Location: Ullmann 311
17:30
Correct Causal Inference in Probabilistic Logic Programming
17:50
A Causal Perspective on AI Deception
18:10
Epistemic Logic Programs: a Novel Perspective and some Extensions
17:30-19:00 Session 24C (LogTeach)
Location: Ullmann 301
17:30
The Herbrand Manifesto: Thinking Inside the Box
PRESENTER: Vinay Chaudhri

ABSTRACT. The traditional semantics for relational logic (sometimes called Tarskian semantics) is based on the notion of interpretations of constants in terms of objects external to the logic. Herbrand semantics is an alternative that is based on truth assignments for ground sentences without reference to external objects. Herbrand semantics is simpler and more intuitive than Tarskian semantics; and, consequently, it is easier to teach and learn. Moreover, it is stronger than Tarskian semantics. For example, while it is not possible to finitely axiomatize integer arithmetic with Tarskian semantics, this can be done easily with Herbrand semantics. The downside is a loss of some common logical properties, such as compactness and inferential completeness. However, there is no loss of inferential power - anything that can be deduced according to Tarskian semantics can also be deduced according to Herbrand semantics. Based on these results, we argue that there is value in using Herbrand semantics for relational logic in place of Tarskian semantics. It alleviates many of the current problems with relational logic and ultimately may foster a wider use of relational logic in human reasoning and computer applications.
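
A tiny example of the idea (an invented two-constant language, sketched in Python): under Herbrand semantics an interpretation is nothing but a truth assignment to the ground atoms, so models can be enumerated directly.

    from itertools import product

    # language: constants {a, b}, one unary predicate p; the ground atoms are:
    atoms = ["p(a)", "p(b)"]

    def holds(i):  # the sentence  p(a) & (p(a) -> p(b))
        return i["p(a)"] and (not i["p(a)"] or i["p(b)"])

    models = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))
              if holds(dict(zip(atoms, vals)))]
    print(models)  # the unique Herbrand model makes both atoms true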

18:30
Teaching logic to CS undergraduates
PRESENTER: Thomas Zeume

ABSTRACT. In this position paper we (1) provide context on how logic is (typically) anchored in CS curricula of German universities, (2) describe an introductory level logic course for CS students taught at TU Dortmund and Ruhr University Bochum designed to appeal to a broad spectrum of CS students, (3) report on experiences in establishing this course as a compulsory module in the CS Bachelor curriculum at Ruhr University Bochum. Additionally we (4) sketch a vision for leveraging modern technology in logic instruction.