09:00  PRESENTER: Laurent Michel ABSTRACT. Multivalued decision diagrams (MDDs) have recently emerged as an effective technology for solving combinatorial optimization problems. In this tutorial, we focus on the use of decision diagrams in the context of constraint programming. The tutorial consists of two parts: 1) an overview of MDD-based constraint programming, and 2) a description and demonstration of the Haddock system to specify and compile MDD propagators within a constraint programming solver. MDD-based constraint propagation was first introduced by Andersen et al. in 2007. They introduced the concept of relaxed decision diagrams as a mechanism to enhance the propagation of information between constraints: instead of using a domain store, they proposed using an ‘MDD store’ to represent and communicate more structural relationships between variables. As a consequence, the search tree size can be dramatically reduced. Since then, many papers have further studied the use of decision diagrams in constraint programming, as well as in related areas such as mathematical programming. The first part of this tutorial will describe the core concepts of MDD-based constraint propagation and provide examples on global constraints such as AllDifferent, Among, and Disjunctive. In the second part, we will describe the Haddock system. It provides a specification language and implementation architecture for automatic decision diagram compilation. Haddock provides the rules for refining (splitting) and filtering (propagating) MDD abstractions. In addition, the language allows the user to specify heuristics to guide the compilation process. Haddock allows the user to declare multiple MDDs within a CP model, each associated with a suitable set of constraints, and automatically integrates the MDD propagators into the constraint propagation fixpoint computation.
We will give an overview of the Haddock system, provide examples on constraints such as Among, Gcc, and (weighted) Sum, and describe example applications. Participants are invited to download the Haddock code to work on hands-on examples during the tutorial.
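The relaxed-MDD idea at the heart of this tutorial can be conveyed with a toy sketch (illustrative code only, not part of Haddock): a width-bounded MDD for the Among constraint, which requires that between lb and ub of the variables take a value in a set S. Each node is a state holding the interval of possible counts so far; when a layer exceeds the width limit, states are merged, relaxing the diagram while keeping it sound. All names and the merge heuristic below are assumptions made for the example.

```python
# Toy width-bounded relaxed MDD for Among(x_1..x_n, S, lb, ub):
# lb <= |{i : x_i in S}| <= ub. A node's state is the interval [lo, hi]
# of counts reachable at that layer.

def build_relaxed_mdd(n_vars, domains, S, lb, ub, max_width):
    layers = [{(0, 0)}]                      # root state: count interval [0, 0]
    for i in range(n_vars):
        nxt = set()
        for (lo, hi) in layers[-1]:
            for v in domains[i]:
                d = 1 if v in S else 0
                lo2, hi2 = lo + d, hi + d
                if lo2 > ub:                 # filter: count already too high
                    continue
                remaining = n_vars - i - 1
                if hi2 + remaining < lb:     # filter: lb no longer reachable
                    continue
                nxt.add((lo2, hi2))
        # relaxation: merge states (here: the two smallest) until width fits
        while len(nxt) > max_width:
            a, b = sorted(nxt)[:2]
            nxt.discard(a)
            nxt.discard(b)
            nxt.add((min(a[0], b[0]), max(a[1], b[1])))
        layers.append(nxt)
    return layers
```

Merging states widens intervals, so the diagram may admit infeasible paths, but never loses a feasible one; this is exactly the trade-off a refinement heuristic controls.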

09:00  On Quantitative Testing of Samplers PRESENTER: Mate Soos ABSTRACT. The problem of uniform sampling is, given a formula F, to sample solutions of F uniformly at random from the solution space of F. Uniform sampling is a fundamental problem with widespread applications, including configuration testing, bug synthesis, function synthesis, and many more. State-of-the-art approaches for uniform sampling exhibit a trade-off between scalability and theoretical guarantees. Many state-of-the-art uniform samplers do not provide any theoretical guarantees on the distribution of the samples they generate, but empirically they have shown promising results. In such cases, the main challenge is to test whether the distribution according to which samples are generated is indeed uniform or not. Recently, Chakraborty and Meel (2019) designed the first scalable sampling tester, Barbarik, based on a grey-box sampling technique, for testing whether the distribution according to which a given sampler is sampling is close to or far from the uniform distribution. They were able to show that many off-the-shelf samplers are far from uniform. The availability of Barbarik spurred test-driven development of samplers. More recently, Golia, Soos, Chakraborty and Meel (2021) designed a uniform-like sampler, CMSGen, which was shown to be accepted by Barbarik on all instances. However, CMSGen does not come with any theoretical analysis of its sampling quality. The case of CMSGen highlights the need for a tester that provides a quantitative measure of the quality of the underlying sampler instead of merely a qualitative answer of Accept or Reject. Towards this goal, we design a computational-hardness-based tester, ScalBarbarik, that provides a more nuanced analysis of the quality of a sampler. ScalBarbarik allows a more expressive measurement of the quality of the underlying samplers. We empirically show that the state-of-the-art sampler CMSGen is not accepted as a uniform-like sampler by ScalBarbarik.
Furthermore, we show that ScalBarbarik can be used to design a sampler that achieves a balance between scalability and uniformity.
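The testing question above can be made concrete with a naive black-box check (purely illustrative: Barbarik and ScalBarbarik use far more sample-efficient grey-box techniques over CNF solution spaces, and the acceptance threshold below is an arbitrary assumption):

```python
import random
from collections import Counter

# Naive black-box uniformity check: compare observed counts over an explicitly
# enumerated solution space against uniform expectations via a chi-square
# statistic. The generous per-degree-of-freedom threshold is an arbitrary
# illustrative choice, not a calibrated test.

def chi_square_stat(samples, solution_space):
    counts = Counter(samples)
    expected = len(samples) / len(solution_space)
    return sum((counts.get(s, 0) - expected) ** 2 / expected
               for s in solution_space)

def looks_uniform(sampler, solution_space, n=20000, threshold_per_dof=10.0):
    samples = [sampler() for _ in range(n)]
    dof = len(solution_space) - 1
    return chi_square_stat(samples, solution_space) < threshold_per_dof * dof

space = [0, 1, 2, 3]
uniform = lambda: random.choice(space)                              # fair sampler
biased = lambda: 0 if random.random() < 0.7 else random.choice(space)  # skewed
```

Such a check needs the full solution space and many samples, which is exactly why scalable testers like Barbarik avoid this brute-force approach.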
09:30  On the Enumeration of Frequent High Utility Itemsets: A Symbolic AI Approach PRESENTER: Said Jabbour ABSTRACT. Mining interesting patterns from data is a core part of the data mining world. High utility mining, an active research topic in data mining, aims to discover valuable itemsets with high profit (e.g., cost, risk). However, the interestingness measure of an itemset must reflect not only the importance of items in terms of profit, but also their occurrence in the data, in order to support more crucial decisions. Several proposals have been introduced to deal with the problem of computing high utility itemsets that meet a minimum support threshold. However, in these existing proposals, all transactions in which the itemset appears are taken into account, including those in which the itemset has a low profit; no additional information about the overall utility of the itemset is considered. This paper addresses this issue by introducing a SAT-based model to efficiently find the set of all frequent high utility itemsets, using a minimum utility threshold applied to each transaction in which the itemset appears. More specifically, we reduce the problem of mining frequent high utility itemsets to that of enumerating the models of a formula in propositional logic, and then use state-of-the-art SAT solvers to solve it. To make our approach more efficient, we provide a decomposition technique that is particularly suitable for deriving smaller, independent subproblems that are easy to solve. Finally, an extensive experimental evaluation on various popular real-world datasets shows that our method is fast and scales well compared to state-of-the-art algorithms.
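The mining task itself, independent of the SAT encoding, can be stated as a brute-force reference (an illustrative sketch; the data representation and names are assumptions): keep an itemset if it occurs in at least `minsup` transactions and its utility reaches `minutil` in every transaction containing it, which is the per-transaction threshold distinguishing this task from classical high utility mining.

```python
from itertools import combinations

# Brute-force reference for the mining task the paper encodes in SAT.
# A transaction maps each item to its utility (profit) in that transaction.

def frequent_high_utility_itemsets(transactions, minsup, minutil):
    items = sorted({i for t in transactions for i in t})
    result = {}
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            covers = [t for t in transactions if all(i in t for i in cand)]
            if len(covers) < minsup:          # support threshold
                continue
            # per-transaction utility threshold: must hold in EVERY cover
            if all(sum(t[i] for i in cand) >= minutil for t in covers):
                result[cand] = len(covers)
    return result
```

The exponential candidate enumeration here is precisely what the propositional encoding and the decomposition technique are designed to avoid.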
10:00  From Crossing-Free Resolution to MaxSAT Resolution PRESENTER: Mohamed Sami Cherif ABSTRACT. Adapting a resolution proof for SAT into a MaxSAT resolution proof without considerably increasing the size of the proof is an open problem. Read-once resolution, where each clause is used at most once in the proof, represents the only fragment of resolution for which an adaptation using exclusively MaxSAT resolution is known, and it is trivial. Proofs containing non-read-once clauses are difficult to adapt because the MaxSAT resolution rule replaces the premises by the conclusions. This paper contributes to this open problem by defining, for the first time since the introduction of MaxSAT resolution, a new fragment of resolution whose proofs can be adapted to MaxSAT resolution proofs without substantially increasing their size. In this fragment, called crossing-free resolution, non-read-once clauses are used independently to infer new information, making it possible to bring along each non-read-once clause while unfolding the proof, until a substitute is required.
09:00  Decision problems for linear logic with least and greatest fixed points PRESENTER: Abhishek De ABSTRACT. Linear logic is an important logic for modelling resources and decomposing computational interpretations of proofs. Decision problems for fragments of linear logic exhibiting `infinitary' behaviour (such as exponentials) are notoriously complicated. In this work, we address the decision problems for variations of linear logic with fixed points (muMALL), in particular recent systems based on `circular' and `non-wellfounded' reasoning. We show that muMALL is undecidable. More explicitly, we show that the general non-wellfounded system is Pi^0_1-hard via a reduction from the non-halting problem of Minsky machines, and thus is strictly stronger than its circular counterpart (which is in Sigma^0_1). Moreover, we show that the restriction of these systems to theorems with only least fixed points is already Sigma^0_1-complete, via a reduction to the reachability problem of alternating vector addition systems with states. This implies that both the circular system and the finitary system (with explicit (co)induction) are Sigma^0_1-complete.
09:30  PRESENTER: Gilles Dowek ABSTRACT. We prove a linearity theorem for an extension of linear logic with addition and multiplication by a scalar: the proofs of some propositions in this logic are linear in the algebraic sense. This work is part of a wider research program that aims at defining a logic whose proof language is a quantum programming language. 
10:00  A Graphical Proof Theory of Logical Time PRESENTER: Matteo Acclavio ABSTRACT. Logical time is a partial order over events in distributed systems, constraining which events precede others. Special interest has been given to series-parallel orders since they correspond to formulas constructed via the two operations for "series" and "parallel" composition. For this reason, series-parallel orders have received attention from proof theory, leading to pomset logic, the logic BV, and their extensions. However, logical time does not always form a series-parallel order; indeed, ubiquitous structures in distributed systems are beyond current proof theoretic methods. In this paper, we explore how this restriction can be lifted. We design new logics that work directly on graphs instead of formulas, we develop their proof theory, and we show that our logics are conservative extensions of the logic BV.
50th anniversary of the birth of Prolog: Some reflections on Prolog's Evolution, Status, and Future [LINK TO SLIDES]
Abstract: This year we celebrate Prolog's 50th anniversary, and the continued relevance of Prolog and logic programming as a basis for both higher-level programming and symbolic, explainable AI. This talk will provide an overview of Prolog's evolution, status, and future. It will start with a quick tour of the major milestones in the advancement of the language and its implementations, from the original Marseille and Edinburgh versions, to the many current ones, with more appearing continuously. This will be followed by some reflections on issues such as what still makes Prolog different and relevant as a language and tool, the best approaches for teaching Prolog, some landmark applications, relation to other logic programming languages, or the use of Prolog and Prolog-related tools for other programming languages. The talk will also attempt to dispel along the way some common misconceptions about the language, while discussing some past weaknesses and others that may still need addressing. It will conclude with some reflections on challenges and opportunities for the future.
Part of the contents of this talk appear in the recent TPLP paper "50 years of Prolog and Beyond", by Philipp Körner, Michael Leuschel, João Barbosa, Vítor Santos Costa, Verónica Dahl, Manuel V. Hermenegildo, Jose F. Morales, Jan Wielemaker, Daniel Diaz, Salvador Abreu, and Giovanni Ciatto, written for Prolog's 50th anniversary and TPLP's 20th anniversary. See prologyear.logicprogramming.org for pointers to this paper and other initiatives related to the Year of Prolog. The Year of Prolog is organized by the Association for Logic Programming and the Prolog Heritage foundation.
09:00  Rediscovering Argumentation Principles Utilizing Collective Attacks PRESENTER: Matthias König ABSTRACT. Argumentation Frameworks (AFs) are a key formalism in AI research. Their semantics have been investigated in terms of principles, which define characteristic properties in order to deliver guidance for analysing established semantics and developing new ones. Because of the simple structure of AFs, many desired properties hold almost trivially, at the same time hiding interesting concepts behind syntactic notions. We extend the principle-based approach to Argumentation Frameworks with Collective Attacks (SETAFs) and provide a comprehensive overview of common principles for their semantics. Our analysis shows that investigating principles based on decomposing the given SETAF (e.g., directionality or SCC-recursiveness) poses additional challenges in comparison to usual AFs. We introduce the notion of the reduct as well as the modularization principle for SETAFs, which will prove beneficial for this kind of investigation. We then demonstrate how our findings can be utilized for incremental computation of extensions and give a novel parameterized tractability result for verifying preferred extensions.
09:30  A Credal Least Undefined Stable Semantics for Probabilistic Logic Programs and Probabilistic Argumentation PRESENTER: Fabio Cozman ABSTRACT. We present an approach to probabilistic logic programming and probabilistic argumentation that combines elements of the L-stable semantics and the credal semantics. We derive the complexity of inferences, propose an extended version of argumentation graphs with a semantics that maps to the L-stable semantics, and introduce a definition for the probability of an argument.
10:00  Computing Stable Argumentative Conclusions under the Weakest-Link Principle in the ASPIC+ Framework PRESENTER: Tuomo Lehtonen ABSTRACT. The study of computationally hard reasoning tasks in different argumentation formalisms is a topical research direction in KR. Here we focus on ASPIC+ as a central structured argumentation formalism. It has recently been shown that rephrasing argumentation semantics in terms of subsets of defeasible elements allows for gaining new insights for reasoning about acceptance in established fragments of ASPIC+. In this paper we provide a non-trivial generalization of these recent results, capturing preferences as a major ingredient of ASPIC+. In particular, considering preferences under the weakest-link principle, we show that the stable semantics can be phrased in terms of subsets of defeasible elements. We employ the rephrasing for establishing both complexity results and practical algorithms for reasoning about acceptance in this variant of ASPIC+. Justified by completeness for the second level of the polynomial hierarchy, we develop an iterative approach based on answer set solving for reasoning about acceptance under the so-called elitist lifting in ASPIC+ frameworks. We also show that an implementation of the approach scales well in practice.
09:00  On Syntactic Forgetting with Strong Persistence ABSTRACT. It is generally agreed upon that so-called strong persistence (SP) best captures the essence of forgetting in logic programming. While classes of operators, such as FR and FSP, that satisfy immediate relaxations of (SP), and, in the case of FSP, even (SP) whenever possible, have been characterized semantically, many practical questions have yet to be addressed. This technical paper aims to answer one of them: how can atoms be forgotten from a program without having to calculate its exponential number of models? To this end, we introduce two concrete representatives of FR and FSP that forget sets of atoms by syntactic manipulation of a program's rules. This may in many cases prevent exponential blowups and produce a forgetting result that is close to the original program.
09:30  Kernel Contraction and the Order of Relevance ABSTRACT. The postulate of relevance provides a suitable and general notion of minimal change for belief contraction. Relevance is tightly connected to smooth kernel contractions when an agent's epistemic state is represented as a logically closed set of formulae. This connection, however, breaks down when an agent's epistemic state is represented as a set of formulae not necessarily logically closed. We investigate the cause behind this rupture, and we reconnect relevance with smooth kernel contractions by constraining the behaviour of their choice mechanisms and epistemic preference relations. Our first representation theorem connects smooth kernel contractions with a novel class of epistemic preference relations. For our second representation theorem, we introduce the principle of symmetry of removal that relates relevance to epistemic choices. For the last theorem, we devise a novel class of smooth kernel contractions, satisfying relevance, which are based on epistemic preference relations that capture the principle of symmetry of removal. 
10:00  PRESENTER: Joe Singleton ABSTRACT. Consider the following belief change/merging scenario. A group of information sources give a sequence of reports about the state of the world at various instances (e.g. different points in time). The true states at these instances are unknown to us. The sources have varying levels of expertise, also unknown to us, and may be knowledgeable on some topics but not others. This may cause sources to report false statements in areas they lack expertise. What should we believe on the basis of these reports? We provide a framework in which to explore this problem, based on an extension of propositional logic with expertise formulas. This extended language allows us to express beliefs about the state of the world at each instance, as well as beliefs about the expertise of each source. We propose several postulates, provide a couple of families of concrete operators, and analyse these operators with respect to the postulates. 
"Linear Logic and Proof Systems": 6 papers (12 min presentation + 2-3 min Q&A)
09:00  Linear-Algebraic Models of Linear Logic as Categories of Modules over Sigma-Semirings PRESENTER: Takeshi Tsukada ABSTRACT. A number of models of linear logic are based on or closely related to linear algebra, in the sense that morphisms are ``matrices'' over appropriate coefficient sets. Examples include models based on coherence spaces, finiteness spaces and probabilistic coherence spaces, as well as the relational and weighted relational models. This paper introduces a unified framework based on module theory, making the linear-algebraic aspect of the above models more explicit. Specifically, we consider modules over \emph{\( \Sigma \)-semirings} \( R \), which are ring-like structures with partially defined countable sums, and show that morphisms in the above models are actually \( R \)-linear maps in the standard algebraic sense for an appropriate \( R \). An advantage of our algebraic treatment is that the category of \( R \)-modules is locally presentable, from which it easily follows that this category becomes a model of intuitionistic linear logic with the cofree exponential. We then discuss constructions of classical models and show that the above-mentioned models are examples of our constructions.
09:15  Bouncing threads for circular and non-wellfounded proofs PRESENTER: David Baelde ABSTRACT. Given that (co)inductive types are naturally modelled as fixed points, it is unsurprising that fixed-point logics are of interest in the study of programming languages, via the Curry-Howard (or proofs-as-programs) correspondence. This motivates investigations of the structural proof theory of fixed-point logics and of their cut-elimination procedures. Among the various approaches to proofs in fixed-point logics, circular, or cyclic, proofs are of interest in this regard but suffer from a number of limitations, most notably a quite restricted use of cuts. Indeed, the validity condition which ensures soundness of non-wellfounded derivations and productivity of their cut-elimination prevents some computationally relevant patterns of cuts. As a result, traditional circular proofs cannot serve as a basis for a theory of (co)recursive programming, for lack of compositionality: there are not enough circular proofs and they compose badly. The present paper addresses some of these limitations by developing the circular and non-wellfounded proof theory of multiplicative additive linear logic with fixed points (muMALL) beyond the scope of the seminal works of Santocanale and Fortier and of Baelde et al. We make the following contributions: (1) We define bouncing validity: a new, generalized validity criterion for muMALL, which takes axioms and cuts into account. (2) We show soundness and cut-elimination theorems for bouncing-valid non-wellfounded proofs. As a result, bouncing validity proves the same sequents (or judgments) as before, but we have many more valid proofs at our disposal. (3) We illustrate the relevance of bouncing validity on a number of examples motivated by the Curry-Howard correspondence. (4) Finally, we study the decidability of the criterion in the circular case: we prove it is undecidable in general.
(5) However, we identify a hierarchy of decidable subcriteria, such that any bouncing-valid circular proof is validated by some criterion of the hierarchy.
09:30  On the strength of Sherali-Adams and Nullstellensatz as propositional proof systems PRESENTER: Ilario Bonacina ABSTRACT. We characterize the strength of the algebraic proof systems Sherali-Adams (SA) and Nullstellensatz (NS) in terms of Frege-style proof systems. Unlike bounded-depth Frege, SA has polynomial-size proofs of the pigeonhole principle (PHP). A natural question is whether adding PHP to bounded-depth Frege is enough to simulate SA. We show that SA with unary integer coefficients lies strictly between tree-like depth-1 Frege + PHP and tree-like Resolution. We introduce a weighted version of PHP (wPHP) and we show that SA with integer coefficients lies strictly between tree-like depth-1 Frege + wPHP and Resolution. Analogous results are shown for NS using the bijective (i.e., onto and functional) pigeonhole principle and a weighted version of it.
09:45  ABSTRACT. The language of Algebraic Geometry combines two complementary and dependent levels of discourse: on the geometric side, schemes define spaces of the same cohesive nature as manifolds; on the vectorial side, every scheme X comes equipped with a symmetric monoidal category of quasi-coherent modules, which may be seen as generalised vector bundles on the scheme X. In this paper, we use the functor-of-points approach to Algebraic Geometry developed by Grothendieck in the 1970s to establish that every covariant presheaf X on the category of commutative rings, and in particular every scheme X, comes equipped ``above it'' with a symmetric monoidal closed category Mod_X of presheaves of modules. This category Mod_X moreover defines a model of intuitionistic linear logic, whose exponential modality is obtained by gluing together, in an appropriate way, the Sweedler dual construction on ring algebras. The purpose of this work is to establish on firm mathematical foundations the idea that linear logic should be understood as a logic of generalised vector bundles, in the same way as dependent type theory is understood today as a logic of spaces up to homotopy.
10:00  PRESENTER: Francesco Dagnino ABSTRACT. In quantitative reasoning one compares objects by distances instead of equivalence relations, so that one can measure how similar they are, rather than just saying whether they are equivalent or not. In this paper we aim to provide a solid logical ground for quantitative reasoning with distances, using the categorical language of Lawvere's hyperdoctrines. The key idea is to see distances as equality predicates in Linear Logic. We use graded modalities to write a resource-sensitive substitution rule for equality, which allows us to give it a quantitative meaning in terms of distances. We introduce a deductive calculus for (Graded) Linear Logic with quantitative equality, and the notion of Lipschitz doctrine to define a sound and complete categorical semantics for it. We also describe a universal construction of Lipschitz doctrines, which generates examples based, for instance, on metric spaces and quantitative realisability.
10:15  Exponentials as Substitutions and the Cost of Cut Elimination in Linear Logic ABSTRACT. This paper introduces the exponential substitution calculus (ESC), a new presentation of cut elimination for IMELL, based on proof terms and building on the idea that exponentials can be seen as explicit substitutions. The idea in itself is not new, but here it is pushed to a new level, inspired by Accattoli and Kesner's linear substitution calculus (LSC). One of the key properties of the LSC is that it naturally models the subterm property of abstract machines, which is the key ingredient for the study of reasonable time cost models for the lambda-calculus. The new ESC is then used to design a cut elimination strategy with the subterm property, providing the first polynomial cost model for cut elimination with unconstrained exponentials. For the ESC, we also prove untyped confluence and typed strong normalization, showing that it is an alternative to proof nets for an advanced study of cut elimination.
25 years of SAT
10:00  Efficient Datalog Rewriting for Query Answering in TGD Ontologies PRESENTER: Zhe Wang ABSTRACT. Tuple-generating dependencies (TGDs, or existential rules) are an expressive constraint language for ontology-mediated query answering, and thus query answering under them is of high complexity. Existing systems based on first-order rewriting methods can lead to queries too large for a DBMS to handle. It has been shown that Datalog rewriting can result in more compact queries, yet previously proposed Datalog rewriting methods are mostly inefficient for implementation. In this paper, we fill the gap by proposing an efficient Datalog rewriting approach for answering conjunctive queries over TGDs, and we identify and combine existing fragments of TGDs for which our rewriting method terminates. We implemented a prototype system, Drewer, and experiments show that it is able to handle a wide range of benchmarks in the literature. Moreover, Drewer shows superior performance over state-of-the-art systems on both the compactness of rewriting and the efficiency of query answering.
10:15  PRESENTER: Mohan Sridharan ABSTRACT. The architecture described in this paper encodes a theory of intentions based on the key principles of non-procrastination, persistence, and automatically limiting reasoning to relevant knowledge and observations. The architecture reasons with transition diagrams of any given domain at two different resolutions, with the fine-resolution description defined as a refinement of, and hence tightly coupled to, a coarse-resolution description. For any given goal, non-monotonic logical reasoning with the coarse-resolution description computes an activity, i.e., a plan, comprising a sequence of abstract actions to be executed to achieve the goal. Each abstract action is implemented as a sequence of concrete actions by automatically zooming to and reasoning with the part of the fine-resolution transition diagram relevant to the current coarse-resolution transition and the goal. Each concrete action in this sequence is executed using probabilistic models of the uncertainty in sensing and actuation, and the corresponding fine-resolution outcomes are used to infer coarse-resolution observations that are added to the coarse-resolution history. The architecture's capabilities are evaluated in the context of a simulated robot assisting humans in an office domain, on a physical robot (Baxter) manipulating tabletop objects, and on a wheeled robot (Turtlebot) moving objects to particular places or people. The experimental results indicate improvements in reliability and computational efficiency compared with an architecture that does not include the theory of intentions, and an architecture that does not include zooming for fine-resolution reasoning.
11:00  PRESENTER: Rebecca Gentzel ABSTRACT. Haddock, introduced in [11], is a declarative language and architecture for the specification and implementation of multivalued decision diagrams. It relies on a labeled transition system to specify and compose individual constraints into a propagator with filtering capabilities that automatically delivers the expected level of filtering. Yet, the operational potency of the filtering algorithms strongly correlates with the heuristics used for carrying out refinements of the diagrams. This paper considers how to empower Haddock users with the ability to unobtrusively specify various such heuristics and derive the computational benefits of exerting fine-grained control over the refinement process.
11:30  CNF Encodings of Binary Constraint Trees PRESENTER: Ruiwei Wang ABSTRACT. Ordered Multivalued Decision Diagrams (MDDs) have been shown to be useful for representing finite domain functions/relations. For example, various constraints can be modelled with MDD constraints. Recently, a new representation called Binary Constraint Tree (BCT), a (special) tree-structured binary Constraint Satisfaction Problem, has been proposed to encode MDDs and shown to outperform existing MDD constraint propagators in Constraint Programming solvers. BCT is a compact representation, and it can be exponentially smaller than an MDD for some constraints. Here, we also show that BCT is compact for representing nondeterministic finite state automaton (NFA) constraints. In this paper, we investigate how to encode BCT into CNF form, making it suitable for SAT solvers. We present and investigate five BCT CNF encodings. We compare the propagation strength of the BCT CNF encodings and experimentally evaluate them on a range of existing benchmarks. We also compare with seven existing CNF encodings of MDD constraints. Experimental results show that the CNF encodings of BCT constraints can outperform those of MDD constraints on various benchmarks.
12:00  PRESENTER: Nicolas Schmidt ABSTRACT. In order to reduce the size of compiled forms in knowledge compilation, we propose a new approach based on splitting the main representation into a nucleus representation and satellite representations. The nucleus representation is the projection of the original representation onto the ``main'' variables, and the satellite representations define the other variables in terms of the nucleus. We propose a language and a method, aimed at OBDD/OMDD representations, to compile into this split form. Our experimental study shows major size reductions on configuration- and diagnosis-oriented benchmarks.
11:00  Learning Constraint Programming Models from Data using Generate-and-Aggregate PRESENTER: Samuel Kolb ABSTRACT. Constraint programming (CP) is used widely for solving real-world problems. However, designing these models requires substantial expertise. In this paper, we tackle this problem by synthesising models automatically from past solutions. We introduce COUNT-CP, which uses simple grammars and a generate-and-aggregate approach to learn expressive first-order constraints typically used in CP, as well as their parameters, from data. The learned constraints generalise across instances of different sizes and can be used to solve unseen instances, e.g., learning constraints from a 4x4 Sudoku to solve a 9x9 Sudoku, or learning nurse staffing requirements across hospitals. COUNT-CP is implemented using the CPMpy constraint programming and modelling environment to produce constraints with nested mathematical expressions. The method is empirically evaluated on a set of suitable benchmark problems and is shown to learn accurate and compact models quickly.
11:30  Constraint Acquisition Based on Solution Counting PRESENTER: Christopher Coulombe ABSTRACT. We propose CABSC, a system that performs Constraint Acquisition Based on Solution Counting. In order to learn a Constraint Satisfaction Problem (CSP), the user provides positive examples and a Meta-CSP, i.e., a model of a combinatorial problem whose solution is a CSP. This Meta-CSP allows listing the potential constraints that can be part of the CSP the user wants to learn. It also allows stating the parameters of the constraints, such as the coefficients of a linear equation, and imposing constraints over these parameters. CABSC reads the Meta-CSP, written in an augmented version of the MiniZinc language, and returns the CSP that accepts the fewest solutions among the CSPs accepting all positive examples. This is done using a branch-and-bound search where the bounding mechanism makes use of a model counter. Experiments show that CABSC successfully learns constraints and their parameters from positive examples.
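The bias underlying CABSC, preferring the candidate CSP with the fewest solutions among those accepting all positive examples, can be sketched in a few lines (a toy brute-force version with invented names; CABSC itself uses branch-and-bound with a model counter and reads Meta-CSPs in extended MiniZinc):

```python
from itertools import product

# Toy constraint acquisition: candidates are lists of constraint predicates
# over complete assignments; pick the candidate accepting all positive
# examples that admits the fewest solutions (counted here by brute force).

def count_solutions(domains, constraints):
    return sum(1 for assign in product(*domains)
               if all(c(assign) for c in constraints))

def acquire(domains, candidates, positives):
    best, best_count = None, None
    for cs in candidates:
        if not all(all(c(p) for c in cs) for p in positives):
            continue                      # rejects a positive example
        n = count_solutions(domains, cs)
        if best_count is None or n < best_count:
            best, best_count = cs, n
    return best, best_count
```

Minimising the solution count is what makes the learned CSP as specific as the positive examples allow, rather than the trivially accepting empty constraint set.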
12:00  PRESENTER: Jovial Cheukam-Ngouonou ABSTRACT. To automate the discovery of conjectures on combinatorial objects, we introduce the concept of a map of sharp bounds on characteristics of combinatorial objects, which provides a set of interrelated sharp bounds for these combinatorial objects. We then describe the Bound Seeker, a CP-based system that gradually acquires maps of conjectures. The system was tested on the search for conjectures on bounds on characteristics of digraphs: it constructed sixteen maps involving 431 conjectures on sharp lower and upper bounds on eight digraph characteristics. 
11:00  A stratified approach to Löb induction PRESENTER: Daniel Gratzer ABSTRACT. Guarded type theory extends type theory with a handful of modalities and constants to encode productive recursion. While these theories have seen widespread use, the metatheory of guarded type theories, particularly guarded dependent type theories, remains underdeveloped. We show that integrating Löb induction is the key obstruction to unifying guarded recursion and dependence in a well-behaved type theory, and prove a no-go theorem sharply bounding such type theories. Based on these results, we introduce GTT: a stratified guarded type theory. GTT is properly two type theories: static GTT and dynamic GTT. The former contains only propositional rules governing Löb induction but enjoys decidable type-checking, while the latter extends the former with definitional equalities. Accordingly, dynamic GTT does not have decidable type-checking. We prove, however, a novel guarded canonicity theorem for dynamic GTT, showing that programs in dynamic GTT can be run. These two type theories work in concert, with users writing programs in static GTT and running them in dynamic GTT. 
11:30  Encoding type universes without using matching modulo associativity and commutativity ABSTRACT. The encoding of proof systems and type theories in logical frameworks is key to allowing the translation of proofs from one system to the other. The λΠ-calculus modulo rewriting is a powerful logical framework in which various systems have already been encoded, including type systems with an infinite hierarchy of type universes equipped with a unary successor operator and a binary max operator: Matita, Coq, Agda and Lean. However, to decide the word problem in this max-successor algebra, all the encodings proposed so far use rewriting with matching modulo associativity and commutativity (AC), which is of high complexity and difficult to integrate into the usual algorithms for β-reduction and type-checking. In this paper, we show that we do not need matching modulo AC: we enforce terms to be in a special canonical form with respect to associativity and commutativity, and use rewriting rules that take advantage of this canonical form. This work has been implemented in the proof assistant Lambdapi. 
12:00  Adequate and computational encodings in the logical framework Dedukti ABSTRACT. Dedukti is a very expressive logical framework which, unlike most frameworks such as the Edinburgh Logical Framework (ELF), allows for the representation of computation alongside deduction. However, unlike ELF encodings, the Dedukti encodings proposed until now do not feature an adequacy theorem, i.e., a bijection between terms in the encoded system and in its encoding. Moreover, many of them also lack a conservativity theorem, which compromises the ability of Dedukti to check proofs written in such encodings. We propose a different approach to Dedukti encodings which not only allows for simpler conservativity proofs but also restores the adequacy of encodings. More precisely, we propose in this work adequate (and thus conservative) encodings for Functional Pure Type Systems. However, in contrast with ELF encodings, ours is computational, that is, it represents computation directly as computation. Therefore, our work is the first to present and prove correct an approach allowing for encodings that are both adequate and computational in Dedukti. 
11:00  ABSTRACT. Logic programming is a flexible programming paradigm due to its use of predicates without a fixed data flow. To extend logic languages with the compact notation of functional programming, various proposals map evaluable functions into predicates in order to stay within the logic programming framework. Since amalgamated functional logic languages offer flexible as well as efficient evaluation strategies, we propose the opposite approach in this paper. By mapping logic programs into functional logic programs with a transformation based on inferring functional dependencies, we develop a fully automatic transformation which keeps the flexibility of logic programming but can improve computations by reducing infinite search spaces to finite ones. 
11:22  PRESENTER: Stefano Sferrazza ABSTRACT. Modern applications combine information from a great variety of sources. Oftentimes, some of these sources, like machine-learning systems, are not strictly binary but associated with some degree of (lack of) confidence in the observation. We propose MV-Datalog and MV-Datalog± as extensions of Datalog and Datalog±, respectively, to the fuzzy semantics of infinite-valued Łukasiewicz logic Ł, as languages for effectively reasoning in scenarios where such uncertain observations occur. We show that the semantics of MV-Datalog exhibits similar model-theoretic properties to those of Datalog. In particular, we show that (fuzzy) entailment can be defined in terms of an analogue of minimal models, and we give a characterisation, and a proof of the uniqueness, of such minimal models. On the basis of this characterisation, we propose a similar many-valued semantics for rules with existential quantification in the head, extending Datalog±. 
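The Łukasiewicz connectives underlying this fuzzy semantics are simple to state; the following pure-Python sketch (an editorial illustration of the standard definitions, not the MV-Datalog implementation) shows how a rule body's confidences combine:

```python
def luk_and(a, b):
    """Łukasiewicz strong conjunction (t-norm): max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def luk_implies(a, b):
    """Łukasiewicz implication (residuum): min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

# Body atoms believed to degrees 0.9 and 0.8 support the head to degree 0.7,
# so confidence degrades gracefully as uncertain facts are chained.
assert abs(luk_and(0.9, 0.8) - 0.7) < 1e-9
# The implication is fully true whenever the conclusion is at least as
# strongly held as the premise.
assert luk_implies(0.3, 0.5) == 1.0
```

On values restricted to {0, 1} these connectives coincide with classical conjunction and implication, which is one reason the semantics can mirror Datalog's model-theoretic properties.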
11:44  Jumping Evaluation of Nested Regular Path Queries PRESENTER: Joachim Niehren ABSTRACT. Propositional dynamic logic is a fundamental language that provides nested regular path queries for data graphs, as needed for querying graph databases and RDF triple stores. We propose a new algorithm for evaluating nested regular path queries. Not only does it evaluate path queries on data graphs from a set of start nodes in combined linear time, but this complexity bound also depends only on the size of the query's top-down needed subgraph, a notion that we introduce formally. For many queries relevant in practice, the top-down needed subgraph is way smaller than the whole data graph. Our algorithm is based on a new compilation schema from nested regular path queries to monadic Datalog queries that we introduce. We prove that the top-down evaluation of the Datalog program visits only the top-down needed subgraph for the path query. Thereby, the combined linear time complexity depending on the size of the top-down needed subgraph is implied by a general complexity result for top-down Datalog evaluation (Tekle and Liu 2010). As an application, we show that our algorithm permits us to reformulate in simple terms a variant of a very efficient automata-based algorithm proposed by Maneth and Nguyen that evaluates navigational path queries in data trees based on indexes and jumping. Moreover, our variant overcomes some limitations of Maneth and Nguyen's: it is not bound to trees and applies to graphs; it is not limited to forward navigational XPath but can treat any nested regular path query; and it can be implemented efficiently without any specialized or dedicated techniques, by simply using any efficient Datalog evaluator. 
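The intuition behind a "needed subgraph" can be seen on the simplest query shape, `a*` from a set of start nodes. The pure-Python sketch below (an editorial toy, not the paper's Datalog compilation) answers the query while touching only the `a`-reachable part of the graph:

```python
def eval_star(edges, label, starts):
    """Answer the path query label* from the given start nodes, visiting
    only the label-reachable portion of the data graph."""
    seen, todo = set(starts), list(starts)
    while todo:
        u = todo.pop()
        for (src, lab, dst) in edges:  # linear scan; an index would "jump" here
            if src == u and lab == label and dst not in seen:
                seen.add(dst)
                todo.append(dst)
    return seen

edges = [("u", "a", "v"), ("v", "a", "w"), ("x", "b", "y")]
# Only u, v, w are explored; the (x, b, y) component is never needed.
assert eval_star(edges, "a", {"u"}) == {"u", "v", "w"}
```

For nested queries the needed subgraph is defined per subquery and top-down, but the payoff is the same: the cost scales with what the query actually requires, not with the whole graph.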
12:06  FOLD-RM: A Scalable and Efficient Inductive Learning Algorithm for Multi-Category Classification of Mixed Data PRESENTER: Gopal Gupta ABSTRACT. FOLD-RM is an automated inductive learning algorithm for learning default rules for mixed (numerical and categorical) data. It generates an (explainable) answer set programming (ASP) rule set for multi-category classification tasks while maintaining efficiency and scalability. The FOLD-RM algorithm is competitive in performance with the widely used XGBoost algorithm; however, unlike XGBoost, FOLD-RM produces an explainable model. FOLD-RM outperforms XGBoost on some datasets, particularly large ones. FOLD-RM also provides human-friendly explanations for predictions, as well as natural-language translation of the default rules learned. 
"Graphs, Behavioural Equivalences and Learning": 6 papers (12 min presentation + 2-3 min Q&A)
11:00  Treelike decompositions for transductions of sparse graphs PRESENTER: Sandra Kiefer ABSTRACT. We give new decomposition theorems for classes of graphs that can be transduced in first-order logic from classes of sparse graphs, more precisely, from classes of bounded expansion and from nowhere dense classes. In both cases, the decomposition takes the form of a single colored rooted tree of bounded depth where, in addition, there can be links between nodes that are not related in the tree. The constraint is that the structure formed by the tree and the links has to be sparse. Using the decomposition theorem for transductions of nowhere dense classes, we show that they admit low shrub-depth covers of size O(n^ε), where n is the vertex count and ε>0 is any fixed real. This solves an open problem posed by Gajarský et al. (ACM TOCL '20) and also by Briański et al. (SIDMA '21). 
11:15  Stable graphs of bounded twin-width PRESENTER: Jakub Gajarský ABSTRACT. We prove that every class of graphs C that is monadically stable and has bounded twin-width can be transduced from some class with bounded sparse twin-width. This generalizes analogous results for classes of bounded linear clique-width [Nešetřil et al., SODA 2020] and of bounded clique-width [Nešetřil et al., SODA 2021]. It also implies that monadically stable classes of bounded twin-width are linearly χ-bounded. 
11:30  Model Checking on Interpretations of Classes of Bounded Local Clique-Width PRESENTER: Nikolas Mählmann ABSTRACT. An interpretation is an operation that maps an input graph to an output graph by redefining its edge relation using a first-order formula. This rich framework includes operations such as taking the complement or a fixed power of a graph as (very) special cases. We prove that there is an FPT algorithm for the first-order model checking problem on classes of graphs which are interpretable in classes of graphs with bounded local clique-width. Notably, this includes interpretations of planar graphs, and of classes of bounded genus in general. To obtain this result we develop a new tool which works in the very general setting of NIP classes and which we believe can be an important ingredient in obtaining similar results in the future. 
11:45  PRESENTER: Johannes Marti ABSTRACT. Algorithms for solving computational problems related to the modal mu-calculus generally do not take the formulas themselves as input, but operate on some kind of representation of formulas. This representation is usually based on a graph structure that one may associate with a mu-calculus formula. Recent work by Kupke, Marti and Venema showed that the operation of renaming bound variables may incur an exponential blow-up of the size of such a graph representation. Their example revealed the undesirable situation that standard constructions, on which algorithms for model checking and satisfiability depend, are sensitive to the specific choice of bound variables used in a formula. Our work discusses how the notion of alphabetic equivalence interacts with the construction of graph representations of mu-calculus formulas, and with the induced size measures of formulas. We introduce the condition of alpha-invariance on such constructions, requiring that alphabetically equivalent formulas are given the same (or isomorphic) graph representations. Our main results are the following. First we show that if two mu-calculus formulas are alpha-equivalent, then their respective Fischer-Ladner closures have the same cardinality, up to alpha-equivalence. We then continue with the definition of an alpha-invariant construction which represents an arbitrary mu-calculus formula by a graph that has exactly the size of the quotient of the closure of the formula, up to alpha-equivalence. This definition, which is itself based on a renaming of variables, solves the above-mentioned problem discovered by Kupke et al. 
12:00  Milner's Proof System for Regular Expressions Modulo Bisimilarity is Complete ABSTRACT. Milner (1984) defined a process semantics for regular expressions. He formulated a sound proof system for bisimilarity of process interpretations of regular expressions, and asked whether this system is complete. We report conceptually on a proof that establishes that Milner's system is complete, by motivating and describing all of its main steps. We substantially refine the completeness proof by Grabmayer and Fokkink (2020) for the restriction of Milner's system to `1-free' regular expressions. As a crucial complication we recognize that process graphs with empty-step transitions that satisfy the layered loop-existence/elimination property LLEE are not closed under bisimulation collapse (unlike process graphs with LLEE that only have proper-step transitions). We circumnavigate this obstacle by defining an LLEE-preserving `crystallization procedure' for such process graphs. By that we obtain `near-collapsed' process graphs with LLEE whose strongly connected components are either collapsed or of `twin-crystal' shape. Such near-collapsed process graphs guarantee provable solutions for bisimulation collapses of process interpretations of regular expressions. 
12:15  Computable PAC Learning of Continuous Features PRESENTER: Cameron Freer ABSTRACT. We introduce definitions of computable PAC learning for binary classification over computable metric spaces. We provide sufficient conditions on a hypothesis class to ensure that an empirical risk minimizer (ERM) is computable, and bound the strong Weihrauch degree of an ERM under more general conditions. We also give a presentation of a hypothesis class that does not admit any proper computable PAC learner with a computable sample function, despite the underlying class being PAC learnable. 
11:00  25 years of SAT: Modern SAT Techniques / Remembering Hans van Maaren ABSTRACT. The talk will have two parts: 1. Modern SAT solvers combine rewriting techniques with conflict-driven clause learning (CDCL) search to maximize performance. Rewriting techniques improve the encoding of problems, and they can further simplify formulas after CDCL search has found useful clauses. The most effective rewriting technique is bounded variable elimination combined with subsumption. Together they reduce the size of industrial problems substantially. Other powerful rewriting techniques are shrinking clauses (vivification) and eliminating redundant clauses. Each rewriting technique can enable simplifications by other rewriting techniques, so the order in which they are applied impacts their effectiveness. Novel value-selection heuristics are also key to modern SAT solving. Assigning decision variables to the last implied value facilitates rapid restarts, improving the quality of learning while staying in a similar part of the search space. Additionally, the search is guided toward a solution using assignments obtained by local search techniques. These heuristic improvements have boosted modern SAT solver performance on satisfiable formulas. 2. Remembering Hans van Maaren 
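The subsumption step mentioned above has a one-line core: a clause C subsumes a clause D when every literal of C occurs in D, so D is redundant and can be deleted. A minimal pure-Python sketch (editorial illustration, not a production preprocessor, which would use watched-literal indices rather than pairwise scans):

```python
def subsumes(c, d):
    """Clause c subsumes clause d if every literal of c occurs in d.
    Literals are signed integers in DIMACS style (e.g., -2 means NOT x2)."""
    return set(c) <= set(d)

def remove_subsumed(clauses):
    """Drop every clause subsumed by some other clause (quadratic scan)."""
    kept = []
    for i, d in enumerate(clauses):
        if not any(i != j and subsumes(c, d) for j, c in enumerate(clauses)):
            kept.append(d)
    return kept

# The unit clause (x1) subsumes (x1 OR x2), so the longer clause is removed.
clauses = [[1], [1, 2], [-2, 3]]
assert remove_subsumed(clauses) == [[1], [-2, 3]]
```

Bounded variable elimination then resolves a variable away when the resulting resolvents (after subsumption) are no larger than the clauses removed, which is why the two techniques are applied together.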
11:30  On the performance of deep generative models of realistic SAT instances PRESENTER: Jesús Giráldez-Cru ABSTRACT. Generating realistic random SAT instances (random SAT formulas with computational characteristics similar to those of application SAT benchmarks) is a challenging problem, important for understanding the success of modern SAT solvers on this kind of problem. Traditional approaches are based on probabilistic models, where a probability distribution characterizes the occurrences of variables in clauses in order to mimic a certain feature exhibited in most application formulas (e.g., community structure), but they may be unable to reproduce others. Alternatively, deep generative models have been recently proposed to address this problem. The goal of these models is to learn the whole structure of the formula without focusing on any predefined feature, in order to reproduce all its computational characteristics at once. In this work, we propose two new deep generative models of realistic SAT instances, and carry out an exhaustive experimental evaluation of these and other existing models in order to analyze their performance. Our results show that models based on graph convolutional networks, possibly enhanced with edge features, return the best results in terms of structural properties and SAT solver performance. 
12:00  PRESENTER: Mikolas Janota ABSTRACT. This paper applies machine learning (ML) to solve quantified SMT problems more efficiently. The motivating idea is that the solver should learn from already solved formulas in order to solve new ones. This is especially relevant for classes of similar formulas. We focus on enumerative instantiation, a well-established approach to solving quantified formulas anchored in Herbrand's theorem. The task is to select the right ground terms to be instantiated. In ML parlance, this means learning to rank ground terms. We devise a series of features of the considered terms and train on them using boosted decision trees. In particular, we integrate the LightGBM library into the SMT solver cvc5. The experimental results demonstrate that ML-guided instantiation enables us to solve more formulas and to reduce the number of quantifier instantiations. 
12:00  Automating Reasoning with Standpoint Logic via Nested Sequents PRESENTER: Lucía Gómez Álvarez ABSTRACT. Standpoint logic is a recently proposed formalism in the context of knowledge integration, which advocates a multi-perspective approach permitting reasoning with a selection of diverse and possibly conflicting standpoints rather than forcing their unification. In this paper, we introduce nested sequent calculi for propositional standpoint logics (proof systems that manipulate trees whose nodes are multisets of formulae) and show how to automate standpoint reasoning by means of non-deterministic proof-search algorithms. To obtain worst-case complexity-optimal proof-search, we introduce a novel technique in the context of nested sequents, referred to as "coloring," which consists of taking a standpoint formula as input, guessing a certain coloring of its subformulae, and then running proof-search in a nested sequent calculus on the colored input. Our technique lets us decide the validity of standpoint formulae in coNP, since proof-search only produces a "partial" proof relative to each permitted coloring of the input. We show how all partial proofs can be fused together to construct a complete proof when the input is valid, and how certain partial proofs can be transformed into a countermodel when the input is invalid. These "certificates" (i.e., proofs and countermodels) serve as explanations of the (in)validity of the input. 
12:00  Private and public affairs in strategic reasoning PRESENTER: Aniello Murano ABSTRACT. Do agents know each other's strategies? In multi-process software construction, each process has access to the processes already constructed; but in typical human-robot interactions, a human may not announce its strategy to the robot (indeed, the human may not even know their own strategy). This question has often been overlooked when modeling and reasoning about multi-agent systems. Recently, the impact of this distinction on epistemic reasoning was studied. In this work, we study how it impacts strategic reasoning. To do so we consider Strategy Logic (SL), a well-established and highly expressive logic for strategic reasoning. Its usual semantics, which we call “white-box semantics”, models systems in which agents “broadcast” their strategies. By adding imperfect information to the evaluation games for the usual semantics, we obtain a new semantics called “black-box semantics”, in which agents keep their strategies private. We consider the model-checking problem and show that the black-box semantics has much lower complexity than the white-box semantics for an important fragment of Strategy Logic. 
Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).
14:00-15:00: Tutorial
15:00-15:30: Benchmarking
14:00  Formal Explainable AI ABSTRACT. Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges being faced by the area of AI these days. Although the majority of approaches to XAI are of a heuristic nature, recent work proposed the use of abductive reasoning to compute provably correct explanations for machine learning (ML) predictions. The proposed rigorous approach was shown to be useful not only for computing trustable explanations but also for reasoning about explanations computed heuristically. It was also applied to uncover a close relationship between XAI and the verification of ML models. This talk will overview the advances of the formal approach to XAI, as well as the use of reasoning in devising interpretable rule-based ML models, including decision trees, decision sets, and decision lists. 
15:00  PRESENTER: Nguyen Dang ABSTRACT. Benchmarking is an important tool for assessing the relative performance of alternative solving approaches. However, the utility of benchmarking is limited by the quantity and quality of the available problem instances. Modern constraint programming languages typically allow the specification of a class-level model that is parameterised over instance data. This separation presents an opportunity for automated approaches to generate instance data defining instances that are graded (solvable at a certain difficulty level for a solver) or that can discriminate between two solving approaches. In this paper, we introduce a framework that combines these two properties to generate a large number of benchmark instances, purposely generated for effective and informative benchmarking. We use five problem classes that were used in the MiniZinc competition to demonstrate the usage of our framework. In addition to producing a ranking among solvers, our framework gives a broader understanding of the behaviour of each solver over the whole instance space, for example, by finding subsets of instances where the solver's performance significantly varies from its average. 
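The two instance properties named above (graded and discriminating) can be phrased as simple predicates over solver runtimes. The sketch below is a hypothetical editorial illustration (the thresholds, the `factor` parameter, and the runtime representation are our assumptions, not the framework's definitions):

```python
def is_graded(runtime, lo=1.0, hi=300.0):
    """Graded: solvable, but neither trivial nor beyond the time budget."""
    return lo <= runtime <= hi

def is_discriminating(rt_a, rt_b, factor=10.0):
    """Discriminating: one solver is at least `factor` times faster."""
    return rt_a >= factor * rt_b or rt_b >= factor * rt_a

# Per-instance runtimes (solver A, solver B), in seconds.
instances = {"i1": (0.01, 0.02), "i2": (5.0, 120.0), "i3": (250.0, 2.0)}
useful = {name for name, (a, b) in instances.items()
          if is_graded(max(a, b)) and is_discriminating(a, b)}
assert useful == {"i2", "i3"}  # i1 is too easy for both solvers
```

A generator can then search the instance-parameter space for data that satisfies such predicates, rather than filtering a fixed pool after the fact.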
14:00-15:00: Planning
15:00-15:30: DCOP
14:00  Isomorphisms between STRIPS problems and subproblems PRESENTER: Arnaud Lequen ABSTRACT. Determining whether two STRIPS planning instances are isomorphic is the simplest form of comparison between planning instances. It is also a particular case of the problem concerned with finding an isomorphism between a planning instance P and a subinstance of another instance P'. One application of such an isomorphism is to efficiently produce a compiled form containing all solutions to P from a compiled form containing all solutions to P'. In this paper, we study the complexity of both problems. We show that the former is GI-complete and can thus be solved, in theory, in quasi-polynomial time. While we prove the latter to be NP-complete, we propose an algorithm to build an isomorphism, when possible. We report extensive experimental trials on benchmark problems which demonstrate conclusively that applying constraint propagation in preprocessing can greatly improve the efficiency of a SAT solver. 
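The simplest (exponential) baseline for the isomorphism question above is to enumerate renamings of atoms. The toy sketch below is our own illustration, not the paper's algorithm: instances are reduced to sets of (precondition, effect) pairs over atoms, omitting everything else in STRIPS.

```python
from itertools import permutations

def isomorphic(p, q, atoms):
    """Brute-force test: does some renaming of atoms map instance p onto q?
    Instances are sets of (precondition, effect) pairs of frozensets."""
    for perm in permutations(atoms):
        rename = dict(zip(atoms, perm))
        image = {(frozenset(rename[a] for a in pre),
                  frozenset(rename[a] for a in eff)) for pre, eff in p}
        if image == q:
            return True
    return False

p = {(frozenset({"a"}), frozenset({"b"}))}
q = {(frozenset({"b"}), frozenset({"a"}))}
assert isomorphic(p, q, ["a", "b"])  # swapping a and b maps p onto q
assert not isomorphic(p, {(frozenset({"a"}), frozenset({"a"}))}, ["a", "b"])
```

GI-completeness means one cannot expect to do fundamentally better than graph-isomorphism algorithms, but it also means practical GI solvers can replace this factorial enumeration.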
14:30  PRESENTER: Joan Espasa Arxer ABSTRACT. In this work we focus on a planning problem based on Plotting, a tile-matching puzzle video game published by Taito in 1989. The objective of the game is to remove at least a certain number of coloured blocks from a grid by sequentially shooting blocks into the same grid. The interest and the difficulty of Plotting are due to the complex transitions after every shot: various blocks are affected directly, while others can be indirectly affected by gravity. We highlight the difficulties and inefficiencies of modelling and solving Plotting using PDDL, the de facto standard language for AI planners. We also provide two constraint models that are able to capture the inherent complexities of the problem. In addition, we provide a set of benchmark instances, an instance generator and an extensive experimental comparison demonstrating solving performance with SAT, CP, MIP and a state-of-the-art AI planner. 
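The gravity transition that makes Plotting awkward to model declaratively is itself easy to state procedurally, which is precisely the mismatch with PDDL. A toy pure-Python sketch of the gravity step (our illustration; 0 denotes an empty cell, and blocks settle to the bottom of each column):

```python
def apply_gravity(grid):
    """Let blocks fall: within each column, non-empty (non-zero) cells
    slide to the bottom, preserving their top-to-bottom order."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for c in range(cols):
        column = [grid[r][c] for r in range(rows) if grid[r][c] != 0]
        for i, v in enumerate(column):
            out[rows - len(column) + i][c] = v
    return out

before = [[1, 0],
          [0, 2],
          [0, 0]]
assert apply_gravity(before) == [[0, 0],
                                 [0, 0],
                                 [1, 2]]
```

Encoding this per-column cascade as preconditions and effects blows up a PDDL model, whereas a constraint model can state it directly over the grid variables.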
15:00  Completeness Matters: Towards Efficient Caching in Tree-based Synchronous Backtracking Search for DCOPs PRESENTER: Jie Wang ABSTRACT. Tree-based backtracking search is an important technique for solving Distributed Constraint Optimization Problems (DCOPs), where agents cooperatively exhaust the search space by branching on each variable to divide subproblems and reporting the results to their parent after solving each subproblem. Effectively reusing historical search results can therefore avoid unnecessary re-solving and substantially reduce the overall overhead. However, the existing caching schemes for asynchronous algorithms cannot be applied directly to synchronous ones, since a child agent reports lower and upper bounds rather than the precise cost of exploration. In addition, the existing caching scheme for synchronous algorithms has the shortcomings of incompleteness and low cache utilization. Therefore, we propose a new caching scheme for tree-based synchronous backtracking search, named the Retention Scheme (RS). It utilizes the upper bounds of subproblems, which avoids the reuse of suboptimal solutions and thus ensures completeness, and it deploys a fine-grained cache-information unit targeted at each child agent to improve the cache-hit rate. Furthermore, we introduce two new cache-replacement schemes to further improve performance when memory is limited. Finally, we theoretically prove the completeness of our method and empirically show its superiority. 
14:00  A Methodology for Designing Proof Search Calculi for Non-Classical Logics 
When solving combinatorial problems, pruning symmetric solution candidates from the search space is essential. Most of the existing approaches are instance-specific and focus on the automatic computation of Symmetry Breaking Constraints (SBCs) for each given problem instance. However, the application of such approaches to large-scale instances or advanced problem encodings might be problematic, since the computed SBCs are propositional and, therefore, can neither be meaningfully interpreted nor transferred to other instances. As a result, a time-consuming recomputation of SBCs must be done before every invocation of a solver.
To overcome these limitations, we introduce a new model-oriented approach for Answer Set Programming that lifts the SBCs of small problem instances into a set of interpretable first-order constraints using a form of machine learning called Inductive Logic Programming. After targeting simple combinatorial problems, we aim to extend our method to apply also to advanced decision and optimization problems.
14:00  PRESENTER: Leonardo Lamanna ABSTRACT. To effectively use an abstract (PDDL) planning domain to achieve goals in an unknown environment, an agent must instantiate such a domain with the objects of the environment and their properties. If the agent has an egocentric and partial view of the environment, it needs to act, sense, and abstract the perceived data in the planning domain. Furthermore, the agent needs to compile the plans computed by a symbolic planner into low-level actions executable by its actuators. This paper proposes a framework that accomplishes the aforementioned perspective and allows an agent to perform different tasks. For this purpose, we integrate machine learning models to abstract the sensory data, symbolic planning for goal achievement, and path planning for navigation. We evaluate the proposed method in accurate simulated environments, where the sensors are an on-board RGB-D camera, GPS, and compass. 
14:30  PRESENTER: Periklis Mantenoglou ABSTRACT. Temporal specifications, such as those found in multi-agent systems, often include cyclic dependencies. Moreover, there is an increasing need to evaluate such specifications in an online manner, upon streaming data. Consider, for example, the online computation of the normative positions of the agents engaging in an e-commerce protocol. We present a formal computational framework that deals with cyclic dependencies in an efficient way. Moreover, we demonstrate the effectiveness of our framework on large synthetic and real data streams from the fields of multi-agent systems and composite event recognition. 
15:00  Symbolic Knowledge Extraction from Opaque Machine Learning Predictors: GridREx & PEDRO PRESENTER: Roberta Calegari ABSTRACT. Procedures aimed at explaining the outcomes and behaviour of opaque predictors are becoming more and more essential as machine learning (ML) black-box (BB) models pervade a wide variety of fields, in particular critical ones (e.g., medical or financial), where it is not possible to make decisions on the basis of a blind automatic prediction. A growing number of methods designed to overcome this BB limitation is present in the literature; however, some ML tasks are nearly or completely neglected, e.g., regression and clustering. Furthermore, existing techniques may not be applicable in complex real-world scenarios, or they can affect the output predictions with undesired artefacts. In this paper we present the design and the implementation of GridREx, a pedagogical algorithm to extract knowledge from black-box regressors, along with PEDRO, an optimisation procedure to automate the GridREx hyper-parameter tuning phase with better results than manual tuning. We also report the results of our experiments involving the application of GridREx and PEDRO in real-case scenarios, including a performance assessment of GridREx using other similar state-of-the-art techniques as benchmarks. GridREx proved able to give more concise explanations with higher fidelity and predictive capabilities. 
14:00  PRESENTER: Umberto Straccia ABSTRACT. It is well-known that the triple language RDFS (Resource Description Framework Schema) is designed to represent and reason with positive statements only (e.g., “antipyretics are drugs”). In this paper we show how to extend RDFS to express and reason with various forms of negative statements under the Open World Assumption (OWA). To do so, we start from ρdf, a minimal but significant RDFS fragment that covers all essential features of RDFS, and then extend it to ρdf⊥¬, allowing us also to express statements such as “radio therapies are non-drug treatments”, “Ebola has no treatment”, or “opioids and antipyretics are disjoint classes”. The main and, to the best of our knowledge, unique features of our proposal are: (i) ρdf⊥¬ remains syntactically a triple language, extending ρdf with new symbols with specific semantics, and there is no need to revert to the reification method to represent negative triples; (ii) the logic is defined in such a way that any RDFS reasoner/store may handle the new predicates as ordinary terms if it does not want to take account of the extra capabilities; (iii) despite negated statements, every ρdf⊥¬ knowledge base is satisfiable; (iv) the ρdf⊥¬ entailment decision procedure is obtained from that of ρdf via additional inference rules, favouring a potential implementation; and (v) deciding entailment in ρdf⊥¬ ranges from P to NP. 
14:30  Hyperintensional Partial Meet Contractions PRESENTER: Renata Wassermann ABSTRACT. Formal frameworks for Epistemology need to have enough logical structure to enable interesting conclusions regarding epistemic phenomena, and to be expressive enough to model competing positions in the philosophical and logical literature. While beliefs are commonly accepted as hyperintensional attitudes, i.e., epistemic attitudes which may differ even towards necessarily equivalent sentences, most work on standard epistemic logic has relied on idealised and intensional agents. This is particularly true in the area of AGM-inspired Belief Change. Although a few recent studies investigate hyperintensional models of belief change, few have been well connected to the AGM framework, the main paradigm in the area. This work investigates hyperintensional notions of belief base contraction and belief set contraction, as studied in the AGM framework, and their connections to partial meet contractions. We also provide suitable representation theorems, characterising the constructions by means of rationality postulates. 
15:00  PRESENTER: Nicolas Schwind ABSTRACT. The seminal characterization of iterated belief revision was proposed by Darwiche and Pearl; it uses an abstract notion of epistemic states. In this work we look for a canonical representation of these epistemic states. Total preorders are not expressive enough to serve as such a canonical representation. In fact, we show that some operators cannot even be represented on a countable epistemic space. Nonetheless, under a very reasonable assumption on the epistemic space, we show that OCFs (Ordinal Conditional Functions) can be considered as a canonical representation. 
"FOL, SOL and Model Theory": 6 papers (12 min presentation + 3 min Q&A)
14:00  The amazing mixed polynomial closure and its applications to two-variable first-order logic ABSTRACT. Polynomial closure is a standard operator which is applied to a class of regular languages. In this paper, we investigate three restrictions called left (LPol), right (RPol) and mixed polynomial closure (MPol). The first two were known while MPol is new. We look at two decision problems that are defined for every class C. Membership takes a regular language as input and asks if it belongs to C. Separation takes two regular languages as input and asks if there exists a third language in C including the first one and disjoint from the second. We prove that LPol, RPol and MPol preserve the decidability of membership under mild hypotheses on the input class, and the decidability of separation under much stronger hypotheses. We apply these results to natural hierarchies. First, we look at several language-theoretic hierarchies that are built by applying LPol, RPol and MPol recursively to a single input class. We prove that these hierarchies can actually be defined using almost exclusively MPol. We also consider quantifier alternation hierarchies for two-variable first-order logic (FO2) and prove that one can climb them using MPol. The result is generic in the sense that it holds for most standard choices of signatures. We use it to prove that for most of these choices, membership is decidable for all levels in the hierarchy. Finally, we prove that separation is decidable for the hierarchy of two-variable first-order logic equipped with only the linear order (FO2(<)). 
14:15  The Regular Languages of First-Order Logic with One Alternation PRESENTER: Corentin Barloy ABSTRACT. The regular languages with a neutral letter expressible in first-order logic with one alternation are characterized. Specifically, it is shown that if an arbitrary \(\Sigma_2\) formula defines a regular language with a neutral letter, then there is an equivalent \(\Sigma_2\) formula that only uses the order predicate. This shows that the so-called Central Conjecture of Straubing holds for \(\Sigma_2\) over languages with a neutral letter, the first progress on the Conjecture in more than 20 years. To show the characterization, lower bounds against polynomial-size depth-3 Boolean circuits with constant top fan-in are developed. The heart of the combinatorial argument resides in studying how positions within a language are determined from one another, a technique of independent interest. 
14:30  PRESENTER: Matthias Naaf ABSTRACT. Semiring semantics evaluates logical statements by values in some commutative semiring K. Random semiring interpretations, induced by a probability distribution on K, generalise random structures, and we investigate here the question of how classical results on first-order logic on random structures, most importantly the 0-1 laws of Glebskii et al. and Fagin, generalise to semiring semantics. For positive semirings, the classical 0-1 law implies that every first-order sentence is, asymptotically, either almost surely evaluated to 0 by random semiring interpretations, or almost surely takes only values different from 0. However, by means of a more sophisticated analysis, based on appropriate extension properties and on algebraic representations of first-order formulae, we can prove much stronger results. For many semirings K, the first-order sentences can be partitioned into classes F(k), such that for each element k of K, every sentence in F(k) evaluates almost surely to k under random semiring interpretations. Further, for finite or infinite lattice semirings, this partition actually collapses to just three classes F(0), F(1) and F(e) of sentences that, respectively, almost surely evaluate to 0, to 1, and to the smallest positive value e; for all other values k in K, the class F(k) is empty. The problem of computing the almost-sure valuation of a first-order sentence on finite lattice semirings is PSPACE-complete. 
14:45  Geometric decision procedures and the VC dimension of linear arithmetic theories PRESENTER: Alessio Mansutti ABSTRACT. This paper resolves two open problems on linear integer arithmetic (LIA), also known as Presburger arithmetic. First, we give a triply exponential geometric decision procedure for LIA, i.e., a procedure based on manipulating semilinear sets. This matches the running time of the best quantifier elimination and automata-based procedures. Second, building upon our first result, we give a doubly exponential upper bound on the Vapnik–Chervonenkis (VC) dimension of sets definable in LIA, proving a conjecture of D. Nguyen and I. Pak [Combinatorica 39, pp. 923–932, 2019]. These results partially rely on an analysis of sets definable in linear real arithmetic (LRA), and analogous results for LRA are also obtained. At the core of these developments are new decomposition results for semilinear and R-semilinear sets, the latter being the sets definable in LRA. These results yield new algorithms to compute the complement of (R-)semilinear sets that do not cause a non-elementary blow-up when repeatedly combined with procedures for other Boolean operations and projection. The existence of such an algorithm for semilinear sets has been a long-standing open problem. 
15:00  A direct computational interpretation of second-order arithmetic via update recursion ABSTRACT. Second-order arithmetic has two kinds of computational interpretations: via Spector's bar recursion or via Girard's polymorphic lambda-calculus. Bar recursion interprets the negative translation of the axiom of choice which, combined with an interpretation of the negative translation of the excluded middle, gives a computational interpretation of the negative translation of the axiom scheme of specification. It is then possible to instantiate universally quantified sets with arbitrary formulas (second-order elimination). On the other hand, polymorphic lambda-calculus directly interprets second-order elimination by means of polymorphic types. The present work aims at bridging the gap between these two interpretations by directly interpreting second-order elimination through update recursion, which is a variant of bar recursion. 
15:15  When Locality Meets Preservation ABSTRACT. This paper investigates the expressiveness of a fragment of first-order sentences in Gaifman normal form, namely the positive Boolean combinations of basic local sentences. We show that they match exactly the first-order sentences preserved under local elementary embeddings, thus providing a new general preservation theorem and extending the Łoś-Tarski Theorem. This full preservation result fails, as usual, in the finite, and we show furthermore that the naturally related decision problems are undecidable. In the more restricted case of preservation under extensions, it nevertheless yields new well-behaved classes of finite structures: we show that preservation under extensions holds if and only if it holds locally. 
14:00  Introducing Intel(R) SAT Solver ABSTRACT. We introduce Intel(R) SAT Solver (IntelSAT), a new open-source CDCL SAT solver, written from scratch. IntelSAT is optimized for applications which generate many mostly satisfiable incremental SAT queries. We apply the following Incremental Lazy Backtracking (ILB) principle: in between incremental queries, backtrack only when necessary and to the highest possible decision level. ILB is enabled by a novel reimplication procedure, which can reimply an assigned literal at a lower level without backtracking. Reimplication also helped us to restore the following two properties, lost in modern solvers with the introduction of chronological backtracking: no assigned literal can be implied at a lower level, and conflict analysis always starts with a clause falsified at the lowest possible level. In addition, we apply some new heuristics. Integrating IntelSAT into the MaxSAT solver TT-Open-WBO-Inc resulted in a significant performance boost on incomplete unweighted MaxSAT Evaluation benchmarks and improved the state of the art in anytime unweighted MaxSAT solving. 
14:30  Improvements to the Implicit Hitting Set Approach to Pseudo-Boolean Optimization PRESENTER: Matti Järvisalo ABSTRACT. The development of practical approaches for efficiently reasoning over pseudo-Boolean constraints has recently received increasing attention as a natural generalization of SAT solving. Analogously, solvers for pseudo-Boolean optimization (PBO) draw inspiration from solving techniques developed for maximum satisfiability (MaxSAT) solving. Recently, the first practical solver lifting the implicit hitting set (IHS) approach, one of the most efficient approaches in modern MaxSAT solving, to the realm of PBO was developed, employing a PB solver as a core extractor together with an integer programming solver as a hitting set solver. In this work, we make practical improvements to the IHS approach to PBO. We propose the integration of solution-improving search into the PBO-IHS approach, resulting in a hybrid approach to PBO that makes use of both types of search towards an optimal solution. Furthermore, we explore the potential of variations of core extraction within PBO-IHS, including recent advances in PB core extraction that allow for extracting more general PB constraints compared to the at-least-one constraints typically relied on in IHS, in speeding up PBO-IHS search. We validate these ideas by showing that the empirical efficiency of PBO-IHS, recently shown to outperform other specialized PBO solvers, is further improved by the integration of these techniques. 
15:00  Certified CNF Translations for Pseudo-Boolean Solving PRESENTER: Andy Oertel ABSTRACT. The dramatic improvements in Boolean satisfiability (SAT) solving since the turn of the millennium have made it possible to leverage state-of-the-art conflict-driven clause learning (CDCL) solvers for many combinatorial problems in academia and industry, and the use of proof logging has played a crucial role in increasing the confidence that the results these solvers produce are correct. However, the conjunctive normal form (CNF) format used for SAT proof logging means that it has not been possible to extend guarantees of correctness to the use of SAT solvers for more expressive combinatorial paradigms, where the first step is to translate the input to CNF. In this work, we show how cutting-planes-based reasoning can provide proof logging for solvers that translate pseudo-Boolean (a.k.a. 0-1 integer linear) decision problems to CNF and then run CDCL. To support a wide range of encodings, we provide a uniform and easily extensible framework for proof logging of CNF translations. We are hopeful that this is just a first step towards providing a unified proof logging approach that will also extend to maximum satisfiability (MaxSAT) solving and pseudo-Boolean optimization in general. 
From "ASP modulo CSP" to "ASP modulo X", or how clingcon paved the way for clingo[X] systems
Abstract: Clingcon extends the ASP system Clingo with linear constraints over integers. As such, its series of variants served as a consecutive design study on how to extend ASP systems with foreign inferences. Meanwhile, this has culminated in the generic theory reasoning framework of the ASP system Clingo. We will sketch the evolution of this technology, look at its current state, and emphasize the role of semantics in view of a successful outcome.
15:00  PRESENTER: Thomas Seiller ABSTRACT. Implicit Computational Complexity (ICC) drives better understanding of complexity classes, but it also guides the development of resource-aware languages and static source code analyzers. Among the methods developed, the mwp-flow analysis certifies polynomial bounds on the size of the values manipulated by an imperative program. This result is obtained by bounding the transitions between states instead of focusing on states in isolation, as most static analyzers do, and is not concerned with termination or tight bounds on values. Those differences, along with its built-in compositionality, make the mwp-flow analysis a good target for determining how ICC-inspired techniques diverge from more traditional static analysis methods. This paper's contributions are threefold: we fine-tune the internal machinery of the original analysis to make it tractable in practice; we extend the analysis to function calls and leverage its machinery to compute the result of the analysis efficiently; and we implement the resulting analysis as a lightweight tool to automatically perform data-size analysis of C programs. This documented effort prepares and enables the development of certified complexity analysis, by transforming a costly analysis into a tractable program that, furthermore, decouples the problem of deciding whether a bound exists from the problem of computing it. 
15:00  PRESENTER: Francesco Chiariello ABSTRACT. We propose Answer Set Programming (ASP) as an approach for modeling and solving problems from the area of Declarative Process Mining (DPM). We consider here three classical problems, namely Log Generation, Conformance Checking, and Query Checking. These problems are addressed from both a control-flow and a data-aware perspective. The approach is based on the representation of process specifications as (finite-state) automata. Since these are strictly more expressive than the de facto DPM standard specification language DECLARE, more general specifications than those typical of DPM can be handled, such as formulas in linear-time temporal logic over finite traces. 
15:15  ABSTRACT. Normal logic programs subject to stable model semantics cover reasoning problems from the first level of the polynomial time hierarchy (PH) in a natural way. Disjunctive programs reach one level beyond this, but the access to the underlying NP oracle(s) is somewhat implicit and available to the programmer via the so-called saturation technique. To address this shortcoming, stable-unstable semantics was proposed, making oracles explicit as subprograms having no stable models. If this idea is applied recursively, any level of PH can be reached with normal programs only, in analogy to quantified Boolean formulas (QBFs). However, for the moment, no native implementations of stable-unstable semantics have emerged except via translations toward QBFs. In this work, we alleviate this situation with a translation of (effectively) normal programs that combines a main program with any fixed number of oracles subject to stable-unstable semantics. The result is a disjunctive program that can be fed as input to answer set solvers supporting disjunctive programs. The idea is to hide saturation from the programmer altogether, although it is exploited by the translation internally. The translation of oracles is performed using translators and linkers from the ASPTOOLS collection, while Clingo is used as the back-end solver. 
Program:
 Mozart Sonata n°3 K. 281, B-flat major
 Liszt Rhapsody n°7
 Gershwin Three Preludes
Bio: François Schwarzentruber is an associate professor at École normale supérieure de Rennes. He took piano lessons from Suzanne Muller-Gunst. He performed Beethoven's piano concertos n° 3 and 4 with the orchestra of the Institut National Polytechnique (INP) of Toulouse, and Gershwin's Rhapsody in Blue with the university orchestra of Rennes. He has given several concerts, in the public library Champs Libres in Rennes and in retirement homes, but also in festivals. He won the "special Mozart" and "Amateurs virtuoses" awards at the International Île-de-France Piano Competition (amateurs). He also composes, mostly short pieces for piano.