FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
PROGRAM FOR THURSDAY, AUGUST 4TH

08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 63A: Tutorial (CP)
Location: Taub 7
09:00
MDD-Based Constraint Programming in Haddock
PRESENTER: Laurent Michel

ABSTRACT. Multi-valued decision diagrams (MDDs) have recently emerged as an effective technology for solving combinatorial optimization problems. In this tutorial, we focus on the use of decision diagrams in the context of constraint programming. It consists of two parts: 1) an overview of MDD-based constraint programming, and 2) a description and demonstration of the Haddock system to specify and compile MDD-propagators within a constraint programming solver.

MDD-based constraint propagation was first introduced by Andersen et al. in 2007. They introduced the concept of relaxed decision diagrams as a mechanism to enhance the propagation of information between constraints: Instead of using a domain store, they proposed using an ‘MDD store’ to represent and communicate more structural relationships between variables. As a consequence, the search tree size can be dramatically reduced. Since then, many papers have further studied the use of decision diagrams in constraint programming, as well as in other related areas such as mathematical programming. The first part of this tutorial will describe the core concepts of MDD-based constraint propagation and provide examples on global constraints such as AllDifferent, Among, and Disjunctive.

In the second part, we will describe the system Haddock. It provides a specification language and implementation architecture for automatic decision diagram compilation. Haddock provides the rules for refining (splitting) and filtering (propagating) MDD abstractions. In addition, the language allows the user to specify heuristics to guide the compilation process. Haddock allows the user to declare multiple MDDs within a CP model, each associated with a suitable set of constraints, and automatically integrates the MDD propagators into the constraint propagation fixpoint computation. We will give an overview of the Haddock system, provide examples on constraints such as Among, Gcc, and (weighted) Sum, and describe example applications. Participants are invited to download the Haddock code to work on hands-on examples during the tutorial.
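
For readers new to MDD propagation, the following minimal Python sketch (our own illustration, not Haddock code, which has its own specification language) shows the core idea on the Among constraint: an exact MDD is built layer by layer, each node recording how many values from the watched set have been taken so far, and a value is pruned from a variable's domain when no surviving arc carries it.

    # Minimal sketch of MDD-based filtering for Among(X, V, lb, ub):
    # at least lb and at most ub of the variables in X take a value from V.
    # Nodes in layer i are the possible counts of V-values among x_1..x_i.
    def among_mdd_filter(domains, V, lb, ub):
        n = len(domains)
        layers = [{0}]                       # layer 0: nothing assigned yet
        arcs = []                            # per layer: (count, value, new_count)
        for i, dom in enumerate(domains):
            nxt, layer_arcs = set(), []
            for c in layers[i]:
                for v in dom:
                    c2 = c + (1 if v in V else 0)
                    remaining = n - i - 1
                    # prune arcs that cannot be extended to a count in [lb, ub]
                    if c2 <= ub and c2 + remaining >= lb:
                        layer_arcs.append((c, v, c2))
                        nxt.add(c2)
            arcs.append(layer_arcs)
            layers.append(nxt)
        # backward pass: keep only arcs reaching a feasible terminal count
        alive = {c for c in layers[n] if lb <= c <= ub}
        filtered = []
        for i in range(n - 1, -1, -1):
            keep = [(c, v, c2) for (c, v, c2) in arcs[i] if c2 in alive]
            filtered.append({v for (_, v, _) in keep})
            alive = {c for (c, _, _) in keep}
        filtered.reverse()
        return filtered                      # pruned domain of each variable

    # Example: among x1..x4 over {0,1,2}, exactly 2 must take the value 1; x4 is fixed to 1.
    print(among_mdd_filter([{0, 1, 2}] * 3 + [{1}], {1}, 2, 2))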

 

09:00-10:30 Session 63B: SAT (CP)
Location: Taub 4
09:00
On Quantitative Testing of Samplers
PRESENTER: Mate Soos

ABSTRACT. The problem of uniform sampling is, given a formula F, to sample solutions of F uniformly at random from its solution space. Uniform sampling is a fundamental problem with widespread applications, including configuration testing, bug synthesis, function synthesis, and many more. State-of-the-art approaches for uniform sampling have a trade-off between scalability and theoretical guarantees. Many state-of-the-art uniform samplers do not provide any theoretical guarantees on the distribution of the samples generated; empirically, however, they have shown promising results. In such cases, the main challenge is to test whether the distribution according to which samples are generated is indeed uniform.

Recently, Chakraborty and Meel (2019) designed the first scalable sampling tester, Barbarik, based on a grey-box sampling technique for testing whether the distribution according to which the given sampler is sampling is close to or far from uniform. They were able to show that many off-the-shelf samplers are far from a uniform sampler. The availability of Barbarik encouraged the test-driven development of samplers. More recently, Golia, Soos, Chakraborty and Meel (2021) designed a uniform-like sampler, CMSGen, which was shown to be accepted by Barbarik on all the instances. However, CMSGen comes with no theoretical analysis of its sampling quality.

CMSGen leads us to observe the need for a tester that provides a quantitative answer on the quality of the underlying sampler, instead of merely a qualitative answer of Accept or Reject. Towards this goal, we design a computational hardness-based tester, ScalBarbarik, that provides a more nuanced analysis of the quality of a sampler. ScalBarbarik allows more expressive measurement of the quality of the underlying samplers. We empirically show that the state-of-the-art sampler CMSGen is not accepted as a uniform-like sampler by ScalBarbarik. Furthermore, we show that ScalBarbarik can be used to design a sampler that achieves a balance between scalability and uniformity.
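
The following toy Python check is neither Barbarik nor ScalBarbarik (both rely on far more sample-efficient, grey-box techniques); it only illustrates the naive baseline such testers improve upon: exhaustively enumerate the solution space of a tiny formula, draw many samples from a candidate sampler, and compare the empirical frequencies against the uniform distribution.

    from collections import Counter
    import random

    # Tiny formula (a or b) over two Boolean variables; its solutions are 01, 10, 11.
    solutions = [(0, 1), (1, 0), (1, 1)]

    def biased_sampler():
        # A deliberately non-uniform "sampler" standing in for an off-the-shelf tool.
        return random.choices(solutions, weights=[0.5, 0.25, 0.25])[0]

    def empirical_distance(sampler, n=100_000):
        counts = Counter(sampler() for _ in range(n))
        uniform = 1 / len(solutions)
        # total variation distance between the empirical frequencies and uniform
        return 0.5 * sum(abs(counts[s] / n - uniform) for s in solutions)

    print(empirical_distance(biased_sampler))   # clearly above 0 for this biased sampler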

09:30
On the Enumeration of Frequent High Utility Itemsets: A Symbolic AI Approach
PRESENTER: Said Jabbour

ABSTRACT. Mining interesting patterns from data is a core part of the data mining world. High utility mining, an active research topic in data mining, aims to discover valuable itemsets with high profit (e.g., cost, risk). However, the measure of interest of an itemset must primarily reflect not only the importance of items in terms of profit, but also their occurrence in data in order to make more crucial decisions. Some proposals have been introduced to deal with the problem of computing high utility itemsets that meet a minimum support threshold. However, in these existing proposals, all transactions in which the itemset appears are taken into account, including those in which the itemset has a low profit. So, no additional information about the overall utility of the itemset is taken into account. This paper addresses this issue by introducing a SAT-based model to efficiently find the set of all frequent high utility itemsets with the use of a minimum utility threshold applied to each transaction in which the itemset appears. More specifically, we reduce the problem of mining frequent high utility itemsets to the one of enumerating the models of a formula in propositional logic, and then we use state-of-the-art SAT solvers to solve it. Afterwards, to make our approach more efficient, we provide a decomposition technique that is particularly suitable for deriving smaller, independent sub-problems that are easy to solve. Finally, an extensive experimental evaluation on various popular real-world datasets shows that our method is fast and scales well compared to state-of-the-art algorithms.
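
As a point of reference for the mining task itself (not for the SAT encoding proposed in the paper), the brute-force Python sketch below enumerates itemsets that are frequent and, in every supporting transaction, reach a per-transaction utility threshold; the thresholds and the toy transaction database are invented for illustration.

    from itertools import combinations

    # Toy transaction database: each transaction maps items to their utility (profit).
    transactions = [
        {"a": 5, "b": 1, "c": 2},
        {"a": 4, "c": 3},
        {"b": 2, "c": 1},
        {"a": 6, "b": 2},
    ]
    min_support = 2   # the itemset must occur in at least 2 transactions
    min_utility = 6   # ... and bring at least this utility in each such transaction

    items = sorted({i for t in transactions for i in t})
    result = []
    for k in range(1, len(items) + 1):
        for itemset in combinations(items, k):
            supporting = [t for t in transactions if all(i in t for i in itemset)]
            if len(supporting) < min_support:
                continue
            # per-transaction utility threshold, as described in the abstract above
            if all(sum(t[i] for i in itemset) >= min_utility for t in supporting):
                result.append((itemset, len(supporting)))

    print(result)   # [(('a', 'b'), 2), (('a', 'c'), 2)] on this toy data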

10:00
From Crossing-Free Resolution to Max-SAT Resolution

ABSTRACT. Adapting a resolution proof for SAT into a Max-SAT resolution proof without considerably increasing the size of the proof is an open problem. Read-once resolution, where each clause is used at most once in the proof, represents the only fragment of resolution for which an adaptation using exclusively Max-SAT resolution is known and trivial. Proofs containing non-read-once clauses are difficult to adapt because the Max-SAT resolution rule replaces the premises by the conclusions. This paper contributes to this open problem by defining, for the first time since the introduction of Max-SAT resolution, a new fragment of resolution whose proofs can be adapted to Max-SAT resolution proofs without substantially increasing their size. In this fragment, called crossing-free resolution, non-read-once clauses are used independently to infer new information, thus making it possible to bring along each non-read-once clause while unfolding the proof until a substitute is required.

09:00-10:30 Session 63C: Proof Theory and Linear Logic (FSCD)
09:00
Decision problems for linear logic with least and greatest fixed points
PRESENTER: Abhishek De

ABSTRACT. Linear logic is an important logic for modelling resources and decomposing computational interpretations of proofs. Decision problems for fragments of linear logic exhibiting `infinitary' behaviour (such as exponentials) are notoriously complicated. In this work, we address the decision problems for variations of linear logic with fixed points (muMALL), in particular, recent systems based on `circular' and `non-wellfounded' reasoning. We show that muMALL is undecidable.

More explicitly, we show that the general non-wellfounded system is Pi01-hard via a reduction from the non-halting problem of Minsky machines, and thus is strictly stronger than its circular counterpart (which is in Sigma01). Moreover, we show that the restriction of these systems to theorems containing only least fixed points is already Sigma01-complete via a reduction to the reachability problem of alternating vector addition systems with states. This implies that both the circular system and the finitary system (with explicit (co)induction) are Sigma01-complete.

09:30
Linear lambda-calculus is linear
PRESENTER: Gilles Dowek

ABSTRACT. We prove a linearity theorem for an extension of linear logic with addition and multiplication by a scalar: the proofs of some propositions in this logic are linear in the algebraic sense. This work is part of a wider research program that aims at defining a logic whose proof language is a quantum programming language.

10:00
A Graphical Proof Theory of Logical Time
PRESENTER: Matteo Acclavio

ABSTRACT. Logical time is a partial order over events in distributed systems, constraining which events precede others. Special interest has been given to series-parallel orders since they correspond to formulas constructed via the two operations for "series" and "parallel" composition. For this reason, series-parallel orders have received attention from proof theory, leading to pomset logic, the logic BV, and their extensions. However, logical time does not always form a series-parallel order; indeed, ubiquitous structures in distributed systems are beyond current proof theoretic methods. In this paper, we explore how this restriction can be lifted. We design new logics that work directly on graphs instead of formulas, we develop their proof theory, and we show that our logics are conservative extensions of the logic BV.

09:00-10:00 Session 63D: Invited talk by Manuel Hermenegildo: 50th anniversary of the birth of Prolog: Some reflections on Prolog's Evolution, Status, and Future (ICLP)

50th anniversary of the birth of Prolog: Some reflections on Prolog's Evolution, Status, and Future [LINK TO SLIDES]

Abstract: This year we celebrate Prolog's 50th anniversary, and the continued relevance of Prolog and logic programming as a basis for both higher-level programming and symbolic, explainable AI.  This talk will provide an overview of Prolog's evolution, status, and future. It will start with a quick tour of the major milestones in the advancement of the language and its implementations, from the original Marseille and Edinburgh versions, to the many current ones, with more appearing continuously. This will be followed by some reflections on issues such as what still makes Prolog different and relevant as a language and tool, the best approaches for teaching Prolog, some landmark applications, relation to other logic programming languages, or the use of Prolog and Prolog-related tools for other programming languages.  The talk will also attempt to dispel along the way some common misconceptions about the language, while discussing some past weaknesses and others that may still need addressing. It will conclude with some reflections on challenges and opportunities for the future.

Part of the contents of this talk appear in the recent TPLP paper "50 years of Prolog and Beyond", by Philipp Körner, Michael Leuschel, João Barbosa, Vítor Santos Costa, Verónica Dahl, Manuel V. Hermenegildo, Jose F. Morales, Jan Wielemaker, Daniel Diaz, Salvador Abreu, and Giovanni Ciatto, written for Prolog's 50th anniversary and TPLP's 20th anniversary. See prologyear.logicprogramming.org for pointers to this paper and other initiatives related to the Year of Prolog. The Year of Prolog is organized by the Association for Logic Programming and the Prolog Heritage foundation.

Location: Taub 9
09:00-10:30 Session 63E: Argumentation (KR)
Location: Taub 2
09:00
Rediscovering Argumentation Principles Utilizing Collective Attacks
PRESENTER: Matthias König

ABSTRACT. Argumentation Frameworks (AFs) are a key formalism in AI research. Their semantics have been investigated in terms of principles, which define characteristic properties in order to deliver guidance for analysing established and developing new semantics. Because of the simple structure of AFs, many desired properties hold almost trivially, at the same time hiding interesting concepts behind syntactic notions. We extend the principle-based approach to Argumentation Frameworks with Collective Attacks (SETAFs) and provide a comprehensive overview of common principles for their semantics. Our analysis shows that investigating principles based on decomposing the given SETAF (e.g. directionality or SCC-recursiveness) poses additional challenges in comparison to usual AFs. We introduce the notion of the reduct as well as the modularization principle for SETAFs which will prove beneficial for this kind of investigation. We then demonstrate how our findings can be utilized for incremental computation of extensions and give a novel parameterized tractability result for verifying preferred extensions.

09:30
A Credal Least Undefined Stable Semantics for Probabilistic Logic Programs and Probabilistic Argumentation
PRESENTER: Fabio Cozman

ABSTRACT. We present an approach to probabilistic logic programming and probabilistic argumentation that combines elements of the L-stable semantics and the credal semantics. We derive the complexity of inferences, propose an extended version of argumentation graphs with a semantics that maps to the L-stable semantics, and introduce a definition for the probability of an argument.

10:00
Computing Stable Argumentative Conclusions under the Weakest-Link Principle in the ASPIC+ Framework
PRESENTER: Tuomo Lehtonen

ABSTRACT. The study of computationally hard reasoning tasks in different argumentation formalisms is a topical research direction in KR. Here we focus on ASPIC+ as a central structured argumentation formalism. It has recently been shown that rephrasing argumentation semantics in terms of subsets of defeasible elements allows for gaining new insights for reasoning about acceptance in established fragments of ASPIC+. In this paper we provide a non-trivial generalization of these recent results, capturing preferences as a major ingredient of ASPIC+. In particular, considering preferences under the weakest-link principle, we show that the stable semantics can be phrased in terms of subsets of defeasible elements. We employ the rephrasing for establishing both complexity results and practical algorithms for reasoning about acceptance in this variant of ASPIC+. Justified by completeness for the second level of the polynomial hierarchy, we develop an iterative answer set solving based approach to reasoning about acceptance under the so-called elitist lifting in ASPIC+ frameworks. We also show that an implementation of the approach scales well in practice.

09:00-10:30 Session 63F: Belief Revision (KR)
Location: Taub 3
09:00
On Syntactic Forgetting with Strong Persistence

ABSTRACT. It is generally agreed upon that so-called strong persistence (SP) captures best the essence of forgetting in logic programming. While classes of operators, such as FR and FSP, that satisfy immediate relaxations of (SP), and in the case of FSP, even (SP) whenever possible, have been characterized semantically, many practical questions have yet to be addressed. This technical paper aims to answer one of them: how can atoms be forgotten from a program without having to calculate its exponential number of models? To this end, we introduce two concrete representatives of FR and FSP that forget sets of atoms by syntactical manipulation of a program’s rules. This may in many cases prevent exponential blowups and produce a forgetting result that is close to the original program.

09:30
Kernel Contraction and the Order of Relevance

ABSTRACT. The postulate of relevance provides a suitable and general notion of minimal change for belief contraction. Relevance is tightly connected to smooth kernel contractions when an agent's epistemic state is represented as a logically closed set of formulae. This connection, however, breaks down when an agent's epistemic state is represented as a set of formulae not necessarily logically closed. We investigate the cause behind this rupture, and we reconnect relevance with smooth kernel contractions by constraining the behaviour of their choice mechanisms and epistemic preference relations. Our first representation theorem connects smooth kernel contractions with a novel class of epistemic preference relations. For our second representation theorem, we introduce the principle of symmetry of removal that relates relevance to epistemic choices. For the last theorem, we devise a novel class of smooth kernel contractions, satisfying relevance, which are based on epistemic preference relations that capture the principle of symmetry of removal.

10:00
Who’s the Expert? On Multi-source Belief Change
PRESENTER: Joe Singleton

ABSTRACT. Consider the following belief change/merging scenario. A group of information sources give a sequence of reports about the state of the world at various instances (e.g. different points in time). The true states at these instances are unknown to us. The sources have varying levels of expertise, also unknown to us, and may be knowledgeable on some topics but not others. This may cause sources to report false statements in areas they lack expertise. What should we believe on the basis of these reports? We provide a framework in which to explore this problem, based on an extension of propositional logic with expertise formulas. This extended language allows us to express beliefs about the state of the world at each instance, as well as beliefs about the expertise of each source. We propose several postulates, provide a couple of families of concrete operators, and analyse these operators with respect to the postulates.

09:00-10:30 Session 63G: Linear Logic and Proof Systems (LICS)

"Linear Logic and Proof Systems": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
09:00
Linear-Algebraic Models of Linear Logic as Categories of Modules over Sigma-Semirings
PRESENTER: Takeshi Tsukada

ABSTRACT. A number of models of linear logic are based on or closely related to linear algebra, in the sense that morphisms are "matrices" over appropriate coefficient sets. Examples include models based on coherence spaces, finiteness spaces and probabilistic coherence spaces, as well as the relational and weighted relational models. This paper introduces a unified framework based on module theory, making the linear algebraic aspect of the above models more explicit. Specifically we consider modules over Σ-semirings R, which are ring-like structures with partially-defined countable sums, and show that morphisms in the above models are actually R-linear maps in the standard algebraic sense for appropriate R. An advantage of our algebraic treatment is that the category of R-modules is locally presentable, from which it easily follows that this category becomes a model of intuitionistic linear logic with the cofree exponential. We then discuss constructions of classical models and show that the above-mentioned models are examples of our constructions.

09:15
Bouncing threads for circular and non-wellfounded proofs
PRESENTER: David Baelde

ABSTRACT. Given that (co)inductive types are naturally modelled as fixed points, it is unsurprising that fixed-point logics are of interest in the study of programming languages, via the Curry-Howard (or proofs-as-programs) correspondence. This motivates investigations of the structural proof-theory of fixed-point logics and of their cut-elimination procedures.

Among the various approaches to proofs in fixed-point logics, circular -- or cyclic -- proofs are of interest in this regard but suffer from a number of limitations, most notably from a quite restricted use of cuts. Indeed, the validity condition which ensures soundness of non-wellfounded derivations and productivity of their cut-elimination prevents some computationally-relevant patterns of cuts. As a result, traditional circular proofs cannot serve as a basis for a theory of (co)recursive programming by lack of compositionality: there are not enough circular proofs and they compose badly.

The present paper addresses some of these limitations by developing the circular and non-wellfounded proof-theory of multiplicative additive linear logic with fixed points (muMALL) beyond the scope of the seminal works of Santocanale and Fortier and of Baelde et al. We make the following contributions: (1) We define bouncing-validity: a new, generalized, validity criterion for muMALL, which takes axioms and cuts into account. (2) We show soundness and cut-elimination theorems for bouncing-valid non-wellfounded proofs. As a result, bouncing-validity proves the same sequents (or judgments) as before, but we have many more valid proofs at our disposal. (3) We illustrate the relevance of bouncing-validity on a number of examples motivated by the Curry-Howard correspondence. (4) Finally, we study the decidability of the criterion in the circular case: we prove it is undecidable in general. (5) However, we identify a hierarchy of decidable sub-criteria, such that any bouncing-valid circular proof is validated by some criterion of the hierarchy.

09:30
On the strength of Sherali-Adams and Nullstellensatz as propositional proof systems
PRESENTER: Ilario Bonacina

ABSTRACT. We characterize the strength of the algebraic proof systems Sherali-Adams (SA) and Nullstellensatz (NS) in terms of Frege-style proof systems. Unlike bounded-depth Frege, SA has polynomial-size proofs of the pigeonhole principle (PHP). A natural question is whether adding PHP to bounded-depth Frege is enough to simulate SA.

We show that SA with unary integer coefficients lies strictly between tree-like depth-1 Frege + PHP and tree-like Resolution. We introduce a weighted version of PHP (wPHP) and we show that SA with integer coefficients lies strictly between tree-like depth-1 Frege + wPHP and Resolution.

Analogous results are shown for NS using the bijective (i.e. onto and functional) pigeonhole principle and a weighted version of it.

09:45
A functorial excursion between algebraic geometry and linear logic

ABSTRACT. The language of Algebraic Geometry combines two complementary and dependent levels of discourse: on the geometric side, schemes define spaces of the same cohesive nature as manifolds; on the vectorial side, every scheme X comes equipped with a symmetric monoidal category of quasicoherent modules, which may be seen as generalised vector bundles on the scheme X. In this paper, we use the functor of points approach to Algebraic Geometry developed by Grothendieck in the 1970s to establish that every covariant presheaf X on the category of commutative rings --- and in particular every scheme X --- comes equipped "above it" with a symmetric monoidal closed category Mod_X of presheaves of modules. This category Mod_X moreover defines a model of intuitionistic linear logic, whose exponential modality is obtained by glueing together in an appropriate way the Sweedler dual construction on ring algebras. The purpose of this work is to establish on firm mathematical foundations the idea that linear logic should be understood as a logic of generalised vector bundles, in the same way as dependent type theory is understood today as a logic of spaces up to homotopy.

10:00
Logical Foundations of Quantitative Equality

ABSTRACT. In quantitative reasoning one compares objects by distances, instead of equivalence relations, so that one can measure how much they are similar, rather than just saying whether they are equivalent or not. In this paper we aim at providing a solid logical ground to quantitative reasoning with distances, using the categorical language of Lawvere's hyperdoctrines. The key idea is to see distances as equality predicates in Linear Logic. We use graded modalities to write a resource-sensitive substitution rule for equality, which allows us to give it a quantitative meaning by distances. We introduce a deductive calculus for (Graded) Linear Logic with quantitative equality and the notion of Lipschitz doctrine to define a sound and complete categorical semantics of it. We also describe a universal construction of Lipschitz doctrines, which generates examples based for instance on metric spaces and quantitative realisability.

10:15
Exponentials as Substitutions and the Cost of Cut Elimination in Linear Logic

ABSTRACT. This paper introduces the exponential substitution calculus (ESC), a new presentation of cut elimination for IMELL, based on proof terms and building on the idea that exponentials can be seen as explicit substitutions. The idea in itself is not new, but here it is pushed to a new level, inspired by Accattoli and Kesner's linear substitution calculus (LSC).

One of the key properties of the LSC is that it naturally models the sub-term property of abstract machines, that is the key ingredient for the study of reasonable time cost models for the lambda-calculus. The new ESC is then used to design a cut elimination strategy with the sub-term property, providing the first polynomial cost model for cut elimination with unconstrained exponentials.

For the ESC, we also prove untyped confluence and typed strong normalization, showing that it is an alternative to proof nets for an advanced study of cut elimination.

09:00-10:30 Session 63H: 25 years of SAT (SAT)

25 years of SAT

09:00
25 years of SAT: Conflict-Driven SAT Solving

ABSTRACT. The ability of modern SAT solvers to handle huge industrial instances with hundreds of millions of clauses is highly intriguing, if not mysterious. It is well-known that the breakthrough in capacity was enabled by the introduction of so-called Conflict-Driven Clause Learning (CDCL), or, simply, conflict-driven SAT solving, in the late 90s and early 00s. We will start by reviewing the seminal works on the first conflict-driven solvers--GRASP and Chaff--including some lesser-known facts. We will also discuss the crucial Boolean Constraint Propagation (BCP) algorithm and the evolution of the underlying data structures. Furthermore, we will review later works on chronological backtracking, their relation to the original work in GRASP and the challenges of combining chronological backtracking with efficient BCP. Finally, we will go over algorithms for enhanced clause learning.
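
As a concrete anchor for the BCP discussion, here is a deliberately naive unit-propagation routine in Python; production solvers replace the repeated clause scan with the two-watched-literal scheme, one of the data-structure developments the talk covers.

    def unit_propagate(clauses, assignment):
        # Naive BCP: clauses are lists of non-zero ints (DIMACS-style literals) and
        # assignment maps variables to True/False. Returns the extended assignment,
        # or None on conflict. Real solvers use watched literals instead of rescanning.
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                unassigned, satisfied = [], False
                for lit in clause:
                    var, sign = abs(lit), lit > 0
                    if var not in assignment:
                        unassigned.append(lit)
                    elif assignment[var] == sign:
                        satisfied = True
                        break
                if satisfied:
                    continue
                if not unassigned:
                    return None                    # conflict: the clause is falsified
                if len(unassigned) == 1:           # unit clause: the literal is forced
                    lit = unassigned[0]
                    assignment[abs(lit)] = lit > 0
                    changed = True
        return assignment

    # (x1 or x2) and (not x1 or x3), with x2 decided False: x1 and then x3 are forced.
    print(unit_propagate([[1, 2], [-1, 3]], {2: False}))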

09:24
25 years of SAT: Maximum Satisfiability for Real-World Optimization

ABSTRACT. Modern Maximum Satisfiability (MaxSAT) offers an effective tool for solving NP-hard optimization problems arising in various domains. In this talk I will overview the history and development of algorithms and solvers for solving instances of MaxSAT, focusing especially on the so-called complete algorithms, geared toward computing provably optimal solutions to instances arising in real-world domains. I will also highlight some interesting future research directions in the field, including the introduction of a new incremental track in the 2022 MaxSAT Evaluation.

09:48
25 years of SAT: Trusting SAT Solvers

ABSTRACT. Many critical applications crucially rely on the correctness of SAT solvers. Particularly in the context of formal verification, the claim by a SAT solver that a formula is unsatisfiable corresponds to a safety or security property holding, and thus needs to be trusted. In order to increase the level of trust, an exciting development in this century was to let SAT solvers produce certificates, i.e., traces of proofs of unsatisfiability, which can be checked independently. In the last ten years this direction of research gained substantial momentum, e.g., solvers in the main track of the SAT competition are required to produce such certificates and industrial applications of SAT solvers require that feature too. In this talk we review this quarter century of research in certifying the result of SAT solvers, briefly discuss alternatives, including testing approaches and verifying the SAT solver directly, and mention exciting research on new proof systems produced in this context as well as how these ideas extend beyond formulas in conjunctive normal form.

10:12
25 years of SAT: Proof complexity and SAT solving

ABSTRACT. This talk will provide a brief survey on the relations between proof complexity and SAT solving. What can proof complexity tell us about the strength and limitations of SAT solving? Why should practitioners be interested in proof complexity results and why should theorists study SAT solving? What have we achieved in the past 25 years and which problems remain open?

10:00-10:30 Session 64: Implementation & Transformations (ICLP)
Location: Taub 9
10:00
Efficient Datalog Rewriting for Query Answering in TGD Ontologies
PRESENTER: Zhe Wang

ABSTRACT. Tuple-generating dependencies (TGDs or existential rules) are an expressive constraint language for ontology-mediated query answering, and thus query answering is of high complexity. Existing systems based on first-order rewriting methods can lead to queries too large for a DBMS to handle. It has been shown that datalog rewriting can result in more compact queries, yet previously proposed datalog rewriting methods are mostly inefficient for implementation. In this paper, we fill the gap by proposing an efficient datalog rewriting approach for answering conjunctive queries over TGDs, and identify and combine existing fragments of TGDs for which our rewriting method terminates. We implemented a prototype system, Drewer, and experiments show that it is able to handle a wide range of benchmarks in the literature. Moreover, Drewer shows superior performance over state-of-the-art systems on both the compactness of rewriting and the efficiency of query answering.

10:15
What do you really want to do? Towards a Theory of Intentions for Human-Robot Collaboration
PRESENTER: Mohan Sridharan

ABSTRACT. The architecture described in this paper encodes a theory of intentions based on the key principles of non-procrastination, persistence, and automatically limiting reasoning to relevant knowledge and observations. The architecture reasons with transition diagrams of any given domain at two different resolutions, with the fine-resolution description defined as a refinement of, and hence tightly-coupled to, a coarse-resolution description. For any given goal, nonmonotonic logical reasoning with the coarse-resolution description computes an activity, i.e., a plan, comprising a sequence of abstract actions to be executed to achieve the goal. Each abstract action is implemented as a sequence of concrete actions by automatically zooming to and reasoning with the part of the fine-resolution transition diagram relevant to the current coarse-resolution transition and the goal. Each concrete action in this sequence is executed using probabilistic models of the uncertainty in sensing and actuation, and the corresponding fine-resolution outcomes are used to infer coarse-resolution observations that are added to the coarse-resolution history. The architecture’s capabilities are evaluated in the context of a simulated robot assisting humans in an office domain, on a physical robot (Baxter) manipulating tabletop objects, and on a wheeled robot (Turtlebot) moving objects to particular places or people. The experimental results indicate improvements in reliability and computational efficiency compared with an architecture that does not include the theory of intentions, and an architecture that does not include zooming for fine-resolution reasoning.

10:30-11:00 Coffee Break
11:00-12:30 Session 65A: MDD (CP)
Location: Taub 7
11:00
Heuristics for MDD Propagation in Haddock
PRESENTER: Rebecca Gentzel

ABSTRACT. Haddock, introduced in [11], is a declarative language and architecture for the specification and implementation of multi-valued decision diagrams. It relies on a labeled transition system to specify and compose individual constraints into a propagator with filtering capabilities that automatically deliver the expected level of filtering. Yet, the operational potency of the filtering algorithms strongly correlates with the heuristics used for carrying out refinements of the diagrams. This paper considers how to empower Haddock users with the ability to unintrusively specify various such heuristics and derive the computational benefits of exerting fine-grained control over the refinement process.

11:30
CNF Encodings of Binary Constraint Trees
PRESENTER: Ruiwei Wang

ABSTRACT. Ordered Multi-valued Decision Diagrams (MDDs) have been shown to be useful to represent finite domain functions/relations. For example, various constraints can be modelled with MDD constraints. Recently, a new representation called Binary Constraint Tree (BCT), which is a (special) tree-structured binary Constraint Satisfaction Problem, has been proposed to encode MDDs and shown to outperform existing MDD constraint propagators in Constraint Programming solvers. BCT is a compact representation, and it can be exponentially smaller than an MDD for representing some constraints. Here, we also show that BCT is compact for representing non-deterministic finite state automaton (NFA) constraints. In this paper, we investigate how to encode BCT into CNF form, making it suitable for SAT solvers. We present and investigate five BCT CNF encodings. We compare the propagation strength of the BCT CNF encodings and experimentally evaluate the encodings on a range of existing benchmarks. We also compare with seven existing CNF encodings of MDD constraints. Experimental results show that the CNF encodings of BCT constraints can outperform those of MDD constraints on various benchmarks.
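
As background for readers new to such encodings, here is one generic and deliberately simple way to turn a layered, deterministic MDD into clauses; it is our own illustration, not one of the five encodings studied in the paper, and it assumes an external exactly-one (direct) encoding of each CP variable's value literals.

    def mdd_to_cnf(arcs, root, node_var, val_var):
        # arcs: list of (src_node, cp_var, value, dst_node); node_var and val_var map
        # nodes and (cp_var, value) pairs to propositional variables (positive ints).
        # Missing arcs mean infeasibility, as usual for MDD constraints.
        clauses = [[node_var[root]]]              # the solution path starts at the root
        out = {}
        for (src, var, val, dst) in arcs:
            out.setdefault(src, []).append((var, val, dst))
        for src, branches in out.items():
            # a visited node must be left via the value of one of its outgoing arcs
            clauses.append([-node_var[src]] + [val_var[(var, val)] for (var, val, _) in branches])
            # taking an arc forces its destination node to be visited
            for (var, val, dst) in branches:
                clauses.append([-node_var[src], -val_var[(var, val)], node_var[dst]])
        return clauses

    # Tiny MDD over x1, x2 accepting exactly the tuples (x1=0, x2=1) and (x1=1, x2=0).
    arcs = [("r", "x1", 0, "p"), ("r", "x1", 1, "q"),
            ("p", "x2", 1, "T"), ("q", "x2", 0, "T")]
    node_var = {"r": 1, "p": 2, "q": 3, "T": 4}
    val_var = {("x1", 0): 5, ("x1", 1): 6, ("x2", 0): 7, ("x2", 1): 8}
    for clause in mdd_to_cnf(arcs, "r", node_var, val_var):
        print(clause)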

12:00
Nucleus-Satellites Systems of OMDDs for Reducing the Size of Compiled Forms
PRESENTER: Nicolas Schmidt

ABSTRACT. In order to reduce the size of compiled forms in knowledge compilation, we propose a new approach based on a splitting of the main representation into a nucleus representation and satellite representations. The nucleus representation is the projection of the original representation onto the "main" variables, and the satellite representations define the other variables according to the nucleus. We propose a language and a method, aimed at OBDD/OMDD representations, to compile into this split form. Our experimental study shows major size reductions on configuration- and diagnosis-oriented benchmarks.

11:00-12:30 Session 65B: Constraint and Conjecture Acquisition (CP)
Location: Taub 4
11:00
Learning Constraint Programming Models from Data using Generate-and-Aggregate
PRESENTER: Samuel Kolb

ABSTRACT. Constraint programming (CP) is used widely for solving real-world problems. However, designing these models requires substantial expertise. In this paper, we tackle this problem by synthesising models automatically from past solutions. We introduce COUNT-CP, which uses simple grammars and a generate-and-aggregate approach to learn expressive first-order constraints typically used in CP as well as their parameters from data. The learned constraints generalise across instances of different sizes and can be used to solve unseen instances -- e.g., learning constraints from a 4x4 Sudoku to solve a 9x9 Sudoku or learning nurse staffing requirements across hospitals. COUNT-CP is implemented using the CPMpy constraint programming and modelling environment to produce constraints with nested mathematical expressions. The method is empirically evaluated on a set of suitable benchmark problems and is shown to learn accurate and compact models quickly.
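
For readers unfamiliar with CPMpy, the snippet below shows the flavour of model COUNT-CP targets; it is a hand-written 4x4 Sudoku model (not COUNT-CP output) and assumes a working CPMpy installation with a default solver available.

    import cpmpy as cp

    n, b = 4, 2                                    # grid size and block size
    grid = cp.intvar(1, n, shape=(n, n), name="grid")

    model = cp.Model()
    for i in range(n):
        model += cp.AllDifferent(grid[i, :])       # every row uses distinct values
        model += cp.AllDifferent(grid[:, i])       # every column uses distinct values
    for r in range(0, n, b):                       # every b x b block as well
        for c in range(0, n, b):
            model += cp.AllDifferent(grid[r:r + b, c:c + b].flatten())

    if model.solve():
        print(grid.value())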

11:30
Constraint Acquisition Based on Solution Counting

ABSTRACT. We propose CABSC, a system that performs Constraint Acquisition Based on Solution Counting. In order to learn a Constraint Satisfaction Problem (CSP), the user provides positive examples and a Meta-CSP, i.e. a model of a combinatorial problem whose solution is a CSP. This Meta-CSP allows listing the potential constraints that can be part of the CSP the user wants to learn. It also allows stating the parameters of the constraints, such as the coefficients of a linear equation, and imposing constraints over these parameters. CABSC reads the Meta-CSP using an augmented version of the MiniZinc language and returns the CSP that accepts the fewest solutions among the CSPs accepting all positive examples. This is done using a branch and bound where the bounding mechanism makes use of a model counter. Experiments show that CABSC is successful at learning constraints and their parameters from positive examples.

12:00
Acquiring Maps of Interrelated Conjectures on Sharp Bounds

ABSTRACT. To automate the discovery of conjectures on combinatorial objects, we introduce the concept of a map of sharp bounds on characteristics of combinatorial objects, which provides a set of interrelated sharp bounds for these combinatorial objects. We then describe the Bound Seeker, a CP-based system that gradually acquires maps of conjectures. The system was tested on the search for conjectures on bounds on characteristics of digraphs: it constructs sixteen maps involving 431 conjectures on sharp lower and upper bounds on eight digraph characteristics.

11:00-12:30 Session 65C: Type Theory and Logical Frameworks (FSCD)
11:00
A stratified approach to Löb induction
PRESENTER: Daniel Gratzer

ABSTRACT. Guarded type theory extends type theory with a handful of modalities and constants to encode productive recursion. While these theories have seen widespread use, the metatheory of guarded type theories, particularly guarded dependent type theories, remains underdeveloped. We show that integrating Löb induction is the key obstruction to unifying guarded recursion and dependence in a well-behaved type theory and prove a no-go theorem sharply bounding such type theories.

Based on these results, we introduce GTT: a stratified guarded type theory. GTT is really two type theories, static GTT and dynamic GTT. The former contains only propositional rules governing Löb induction but enjoys decidable type-checking, while the latter extends the former with definitional equalities. Accordingly, dynamic GTT does not have decidable type-checking. We prove, however, a novel guarded canonicity theorem for dynamic GTT, showing that programs in dynamic GTT can be run. These two type theories work in concert, with users writing programs in static GTT and running them in dynamic GTT.

11:30
Encoding type universes without using matching modulo associativity and commutativity

ABSTRACT. The encoding of proof systems and type theories in logical frameworks is key to allow the translation of proofs from one system to the other. The λΠ-calculus modulo rewriting is a powerful logical framework in which various systems have already been encoded, including type systems with an infinite hierarchy of type universes equipped with a unary successor operator and a binary max operator: Matita, Coq, Agda and Lean. However, to decide the word problem in this max-successor algebra, all the encodings proposed so far use rewriting with matching modulo associativity and commutativity (AC), which is of high complexity and difficult to integrate in usual algorithms for β-reduction and type-checking. In this paper, we show that we do not need matching modulo AC by enforcing terms to be in some special canonical form wrt associativity and commutativity, and by using rewriting rules taking advantage of this canonical form. This work has been implemented in the proof assistant Lambdapi.

12:00
Adequate and computational encodings in the logical framework Dedukti

ABSTRACT. Dedukti is a very expressive logical framework which, unlike most frameworks such as the Edinburgh Logical Framework (ELF), allows for the representation of computation alongside deduction. However, unlike ELF encodings, the Dedukti encodings proposed until now do not feature an adequacy theorem --- i.e., a bijection between terms in the encoded system and in its encoding. Moreover, many of them also do not have a conservativity theorem, which compromises the ability of Dedukti to check proofs written in such encodings. We propose a different approach to Dedukti encodings which not only allows for simpler conservativity proofs, but also restores the adequacy of encodings. More precisely, we propose in this work adequate (and thus conservative) encodings for Functional Pure Type Systems. However, in contrast with ELF encodings, ours are computational --- that is, they represent computation directly as computation. Therefore, our work is the first to present and prove correct an approach allowing for encodings that are both adequate and computational in Dedukti.

11:00-12:30 Session 65D: Functional Logic Programming, Datalog & Machine Learning (ICLP)
Location: Taub 9
11:00
From Logic to Functional Logic Programs

ABSTRACT. Logic programming is a flexible programming paradigm due to the use of predicates without a fixed data flow. To extend logic languages with the compact notation of functional programming, there are various proposals to map evaluable functions into predicates in order to stay within the logic programming framework. Since amalgamated functional logic languages offer flexible as well as efficient evaluation strategies, we propose the opposite approach in this paper. By mapping logic programs into functional logic programs with a transformation based on inferring functional dependencies, we develop a fully automatic transformation which keeps the flexibility of logic programming but can improve computations by reducing infinite search spaces to finite ones.

11:22
MV-Datalog+/-: Effective Rule-based Reasoning with Uncertain Observations (Best Paper Award)

ABSTRACT. Modern applications combine information from a great variety of sources. Oftentimes, some of these sources, like Machine-Learning systems, are not strictly binary but associated with some degree of (lack of) confidence in the observation. We propose MV-Datalog and MV-Datalog± as extensions of Datalog and Datalog±, respectively, to the fuzzy semantics of infinite-valued Łukasiewicz logic Ł as languages for effectively reasoning in scenarios where such uncertain observations occur. We show that the semantics of MV-Datalog exhibits similar model-theoretic properties to those of Datalog. In particular, we show that (fuzzy) entailment can be defined in terms of an analogue of minimal models, and we give a characterisation, and proof of uniqueness, of such minimal models. On the basis of this characterisation, we propose similar many-valued semantics for rules with existential quantification in the head, extending Datalog±.
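
To make the fuzzy semantics concrete, here is a small worked example of the Łukasiewicz connectives the paper builds on (the rule and the confidence values are invented for illustration): strong conjunction is max(0, a + b - 1) and implication is min(1, 1 - a + b).

    def conj(a, b):   # Lukasiewicz strong conjunction (t-norm)
        return max(0.0, a + b - 1.0)

    def impl(a, b):   # Lukasiewicz implication (residuum)
        return min(1.0, 1.0 - a + b)

    # Invented rule: broken(X) <- noisy(X), vibrating(X),
    # with uncertain observations noisy(m) = 0.9 and vibrating(m) = 0.8.
    body = conj(0.9, 0.8)        # 0.7: the degree to which the body holds
    print(body)
    print(impl(body, 0.7))       # 1.0: the rule is fully satisfied once broken(m) >= 0.7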

11:44
Jumping Evaluation of Nested Regular Path Queries
PRESENTER: Joachim Niehren

ABSTRACT. Propositional dynamic logic is a fundamental language that provides nested regular path queries for datagraphs, as needed for querying graph databases and RDF triple stores. We propose a new algorithm for evaluating nested regular path queries. Not only does it evaluate path queries on datagraphs from a set of start nodes in combined linear time, but this complexity bound depends only on the size of the query’s top-down needed subgraph, a notion that we introduce formally. For many queries relevant in practice, the top-down needed subgraph is much smaller than the whole datagraph. Our algorithm is based on a new compilation schema from nested regular path queries to monadic datalog queries that we introduce. We prove that the top-down evaluation of the datalog program visits only the top-down needed subgraph for the path query. Thereby, the combined linear time complexity depending on the size of the top-down needed subgraph is implied by a general complexity result for top-down datalog evaluation (Tekle and Liu 2010). As an application, we show that our algorithm permits us to reformulate in simple terms a variant of a very efficient automata-based algorithm proposed by Maneth and Nguyen that evaluates navigational path queries in datatrees based on indexes and jumping. Moreover, our variant overcomes some limitations of Maneth and Nguyen’s: it is not bound to trees and applies to graphs; it is not limited to forward navigational XPath but can treat any nested regular path query; and it can be implemented efficiently without any specialized or dedicated techniques, by simply using any efficient datalog evaluator.
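
To illustrate the flavour of such a compilation (a hand-rolled example, not the paper's schema), consider the path query a · b* · c evaluated from a set of start nodes. The monadic datalog rules Q0(x) :- Start(x); Q1(y) :- Q0(x), a(x,y); Q1(y) :- Q1(x), b(x,y); Ans(y) :- Q1(x), c(x,y) correspond to the Python sketch below, which only ever touches nodes reachable from the start set, mirroring the top-down needed subgraph idea.

    def eval_a_bstar_c(edges, start):
        # edges: dict mapping an edge label to a set of (source, target) pairs
        def step(nodes, label):
            return {t for (s, t) in edges.get(label, set()) if s in nodes}

        q1 = step(start, "a")
        frontier = q1
        while frontier:                       # least fixpoint for the b* loop
            frontier = step(frontier, "b") - q1
            q1 |= frontier
        return step(q1, "c")

    edges = {"a": {(1, 2)}, "b": {(2, 3), (3, 4)}, "c": {(4, 5), (9, 9)}}
    print(eval_a_bstar_c(edges, {1}))         # {5}; node 9 is never visited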

12:06
FOLD-RM: A Scalable and Efficient Inductive Learning Algorithm for Multi-Category Classification of Mixed Data
PRESENTER: Gopal Gupta

ABSTRACT. FOLD-RM is an automated inductive learning algorithm for learning default rules for mixed (numerical and categorical) data. It generates an (explainable) answer set programming (ASP) rule set for multi-category classification tasks while maintaining efficiency and scalability. The FOLD-RM algorithm is competitive in performance with the widely-used XGBoost algorithm; however, unlike XGBoost, FOLD-RM produces an explainable model. FOLD-RM outperforms XGBoost on some datasets, particularly large ones. FOLD-RM also provides human-friendly explanations for predictions as well as natural language translation of the default rules learned.
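
As an invented illustration of the default-rule style of model that FOLD-RM produces (this is not actual FOLD-RM output), a learned rule of the form "grant the loan if income is high, unless there are recent defaults" can be read and executed as follows.

    # Default rule with an exception (abnormality predicate), executed in Python.
    def predict(applicant):
        def ab1(a):                       # exception to the default
            return a["recent_defaults"] > 0
        if applicant["income"] > 50_000 and not ab1(applicant):
            return "grant"
        return "reject"                   # fallback class

    print(predict({"income": 60_000, "recent_defaults": 0}))   # grant
    print(predict({"income": 60_000, "recent_defaults": 2}))   # reject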

11:00-12:00 Session 65E: Invited Talk (KR)
Location: Taub 2
11:00
Graph Queries: Do We Study What Matters?

ABSTRACT. Graph queries are studied extensively by many communities: data management, AI, the Semantic Web. The development of the theoretical models of such queries happened between the two eras of prominence of graphs: the early network data model (later superseded by the relational model) and the more recent models of RDF and especially property graphs, which gained prominence in the last decade. Classical theory gives us the fundamental notion of Regular Path Queries (RPQ) and its many derivatives: CRPQs, UCRPQs, 2UCRPQs, ECRPQs, RDPQs, GXPath, etc. This is still the model that dominates in research literature.

Applications follow a different path however. A number of successful graph query languages including Cypher, PGQL, and GSQL led to the development of a new international standard called GQL (Graph Query Language). The core of GQL is inspired by RPQs in the same way the Hoover dam is inspired by a beaver blocking a local creek: the essence is similar, but the end result is not. However, GQL is still work in progress and even when finished it will come in the shape of hundreds of pages of specs, rather than a simple mathematical definition. The goal of this talk is to give such a definition to the community, and issue a call to change the main language of study to reflect graph languages that will dominate the practical landscape for decades to come.

11:00-12:30 Session 65F: Graphs, Behavioural Equivalences and Learning (LICS)

"Graphs, Behavioural Equivalences and Learning": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
11:00
Treelike decompositions for transductions of sparse graphs
PRESENTER: Sandra Kiefer

ABSTRACT. We give new decomposition theorems for classes of graphs that can be transduced in first-order logic from classes of sparse graphs --- more precisely, from classes of bounded expansion and from nowhere dense classes. In both cases, the decomposition takes the form of a single colored rooted tree of bounded depth where, in addition, there can be links between nodes that are not related in the tree. The constraint is that the structure formed by the tree and the links has to be sparse. Using the decomposition theorem for transductions of nowhere dense classes, we show that they admit low-shrubdepth covers of size O(n^ε), where n is the vertex count and ε>0 is any fixed real. This solves an open problem posed by Gajarský et al. (ACM TOCL '20) and also by Briański et al. (SIDMA '21).

11:15
Stable graphs of bounded twin-width
PRESENTER: Jakub Gajarský

ABSTRACT. We prove that every class of graphs C that is monadically stable and has bounded twin-width can be transduced from some class with bounded sparse twin-width. This generalizes analogous results for classes of bounded linear cliquewidth [Nešetřil et al., SODA 2020] and of bounded cliquewidth [Nešetřil et al., SODA 2021]. It also implies that monadically stable classes of bounded twin-width are linearly chi-bounded.

11:30
Model Checking on Interpretations of Classes of Bounded Local Cliquewidth

ABSTRACT. An interpretation is an operation that maps an input graph to an output graph by redefining its edge relation using a first-order formula. This rich framework includes operations such as taking the complement or a fixed power of a graph as (very) special cases.

We prove that there is an FPT algorithm for the first-order model checking problem on classes of graphs which are interpretable in classes of graphs with bounded local cliquewidth. Notably, this includes interpretations of planar graphs and classes of bounded genus in general.

To obtain this result we develop a new tool which works in a very general setting of NIP classes and which we believe can be an important ingredient in obtaining similar results in the future.

11:45
Size measures and alphabetic equivalence in the mu-calculus
PRESENTER: Johannes Marti

ABSTRACT. Algorithms for solving computational problems related to the modal mu-calculus generally do not take the formulas themselves as input, but operate on some kind of representation of formulas. This representation is usually based on a graph structure that one may associate with a mu-calculus formula. Recent work by Kupke, Marti and Venema showed that the operation of renaming bound variables may incur an exponential blow-up of the size of such a graph representation. Their example revealed the undesirable situation that standard constructions, on which algorithms for model checking and satisfiability depend, are sensitive to the specific choice of bound variables used in a formula.

Our work discusses how the notion of alphabetic equivalence interacts with the construction of graph representations of mu-calculus formulas, and with the induced size measures of formulas. We introduce the condition of alpha-invariance on such constructions, requiring that alphabetically equivalent formulas are given the same (or isomorphic) graph representations.

Our main results are the following. First we show that if two mu-calculus formulas are alpha-equivalent, then their respective Fischer-Ladner closures have the same cardinality, up to alpha-equivalence. We then continue with the definition of an alpha-invariant construction which represents an arbitrary mu-calculus formula by a graph that has exactly the size of the quotient of the closure of the formula, up to alpha-equivalence. This definition, which is itself based on a renaming of variables, solves the above-mentioned problem discovered by Kupke et al.

12:00
Milner's Proof System for Regular Expressions Modulo Bisimilarity is Complete

ABSTRACT. Milner (1984) defined a process semantics for regular expressions. He formulated a sound proof system for bisimilarity of process interpretations of regular expressions, and asked whether this system is complete.

We report conceptually on a proof that establishes that Milner's system is complete, by motivating and describing all of its main steps. We substantially refine the completeness proof by Grabmayer and Fokkink (2020) for the restriction of Milner's system to `1-free' regular expressions. As a crucial complication we recognize that process graphs with empty-step transitions that satisfy the layered loop-existence/elimination property LLEE are not closed under bisimulation collapse (unlike process graphs with LLEE that only have proper-step transitions). We circumnavigate this obstacle by defining a LLEE-preserving `crystallization procedure' for such process graphs. By that we obtain `near-collapsed' process graphs with LLEE whose strongly connected components are either collapsed or of `twin-crystal' shape. Such near-collapsed process graphs guarantee provable solutions for bisimulation collapses of process interpretations of regular expressions.

12:15
Computable PAC Learning of Continuous Features
PRESENTER: Cameron Freer

ABSTRACT. We introduce definitions of computable PAC learning for binary classification over computable metric spaces. We provide sufficient conditions on a hypothesis class to ensure that an empirical risk minimizer (ERM) is computable, and bound the strong Weihrauch degree of an ERM under more general conditions. We also give a presentation of a hypothesis class that does not admit any proper computable PAC learner with computable sample function, despite the underlying class being PAC learnable.

11:00-12:30 Session 65G: Machine-Learning for SAT/SMT (SAT)
11:00
25 years of SAT: Modern SAT Techniques / Remembering Hans van Maaren

ABSTRACT. The talk will have two parts:

1. Modern SAT solvers combine rewriting techniques with conflict-driven clause learning (CDCL) search to maximize performance. Rewriting techniques improve the encoding of problems, and they can further simplify formulas after CDCL search has found useful clauses. The most effective rewriting technique is bounded variable elimination combined with subsumption. Together they reduce the size of industrial problems substantially. Other powerful rewriting techniques are shrinking clauses (vivification) and eliminating redundant clauses. Each rewriting technique can enable simplifications by other rewriting techniques. The order in which they are applied impacts the effectiveness.

Novel value selection heuristics are also key to modern SAT solving. Assigning decision variables to the last implied value facilitates rapid restarts to improve the quality of learning while staying in a similar part of the search space. Additionally, the search is guided toward a solution using assignments obtained by local search techniques. These heuristic improvements boosted modern SAT solver performance on satisfiable formulas.
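
Returning to the bounded variable elimination mentioned above, the following standard textbook-style example (not tied to any particular solver) eliminates a variable by replacing all clauses containing it with their non-tautological resolvents, keeping the result only when the clause set does not grow.

    from itertools import product

    def eliminate(clauses, var):
        # Clauses are frozensets of non-zero ints. Returns the new clause set if
        # eliminating `var` does not increase the number of clauses, else None.
        pos = [c for c in clauses if var in c]
        neg = [c for c in clauses if -var in c]
        rest = [c for c in clauses if var not in c and -var not in c]
        resolvents = []
        for p, q in product(pos, neg):
            r = (p - {var}) | (q - {-var})
            if not any(-lit in r for lit in r):   # skip tautological resolvents
                resolvents.append(r)
        if len(rest) + len(resolvents) <= len(clauses):
            return rest + resolvents
        return None

    # (x1 v x2), (x1 v x3), (-x1 v x4): eliminating x1 yields (x2 v x4), (x3 v x4).
    cnf = [frozenset({1, 2}), frozenset({1, 3}), frozenset({-1, 4})]
    print(eliminate(cnf, 1))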

2. Remembering Hans van Maaren

11:30
On the performance of deep generative models of realistic SAT instances

ABSTRACT. Generating realistic random SAT instances --- random SAT formulas with computational characteristics similar to those of application SAT benchmarks --- is a challenging problem, and solving it is key to understanding the success of modern SAT solvers on this kind of problem. Traditional approaches are based on probabilistic models, where a probability distribution characterizes the occurrences of variables in clauses in order to mimic a certain feature exhibited by most application formulas (e.g., community structure), but they may be unable to reproduce others. Alternatively, deep generative models have been recently proposed to address this problem. The goal of these models is to learn the whole structure of the formula without focusing on any predefined feature, in order to reproduce all its computational characteristics at once. In this work, we propose two new deep generative models of realistic SAT instances, and carry out an exhaustive experimental evaluation of these and other existing models in order to analyze their performance. Our results show that models based on graph convolutional networks, possibly enhanced with edge features, return the best results in terms of structural properties and SAT solver performance.

12:00
Towards Learning Quantifier Instantiation in SMT
PRESENTER: Mikolas Janota

ABSTRACT. This paper applies machine learning (ML) to solve quantified SMT problems more efficiently. The motivating idea is that the solver should learn from already solved formulas to solve new ones. This is especially relevant in classes of similar formulas.

We focus on enumerative instantiation---a well-established approach to solving quantified formulas anchored in Herbrand's theorem. The task is to select the right ground terms to be instantiated. In ML parlance, this means learning to rank ground terms. We devise a series of features of the considered terms and train on them using boosted decision trees. In particular, we integrate the LightGBM library into the SMT solver CVC5.

The experimental results demonstrate that ML-guided instantiation enables us to solve more formulas and reduces the number of quantifier instantiations.

12:00-12:30 Session 66A: Automated Reasoning (KR)
Location: Taub 2
12:00
Automating Reasoning with Standpoint Logic via Nested Sequents

ABSTRACT. Standpoint logic is a recently proposed formalism in the context of knowledge integration, which advocates a multi-perspective approach permitting reasoning with a selection of diverse and possibly conflicting standpoints rather than forcing their unification. In this paper, we introduce nested sequent calculi for propositional standpoint logics---proof systems that manipulate trees whose nodes are multisets of formulae---and show how to automate standpoint reasoning by means of non-deterministic proof-search algorithms. To obtain worst-case complexity-optimal proof-search, we introduce a novel technique in the context of nested sequents, referred to as "coloring," which consists of taking a standpoint formula as input, guessing a certain coloring of its subformulae, and then running proof-search in a nested sequent calculus on the colored input. Our technique lets us decide the validity of standpoint formulae in CoNP since proof-search only produces a "partial" proof relative to each permitted coloring of the input. We show how all partial proofs can be fused together to construct a complete proof when the input is valid, and how certain partial proofs can be transformed into a counter-model when the input is invalid. These "certificates" (i.e. proofs and counter-models) serve as explanations of the (in)validity of the input.

12:00-12:30 Session 66B: Strategic Reasoning (KR)
Location: Taub 3
12:00
Private and public affairs in strategic reasoning
PRESENTER: Aniello Murano

ABSTRACT. Do agents know each others’ strategies? In multi-process software construction, each process has access to the processes already constructed; but in typical human-robot interactions, a human may not announce their strategy to the robot (indeed, the human may not even know their own strategy). This question has often been overlooked when modeling and reasoning about multi-agent systems. Recently, the impact of this distinction on epistemic reasoning was studied. In this work, we study how it impacts strategic reasoning.

To do so we consider Strategy Logic (SL), a well-established and highly expressive logic for strategic reasoning. Its usual semantics, which we call “white-box semantics”, models systems in which agents “broadcast” their strategies. By adding imperfect information to the evaluation games for the usual semantics, we obtain a new semantics called “black-box semantics”, in which agents keep their strategies private. We consider the model-checking problem and show that the black-box semantics has much lower complexity than white-box semantics for an important fragment of Strategy Logic.

12:30-14:00Lunch Break

Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).

14:00-15:30 Session 67A: Tutorial + Benchmarking (CP)

14:00-15:00: Tutorial

15:00-15:30: Benchmarking

Location: Taub 7
14:00
Formal Explainable AI

ABSTRACT. Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges being faced by the area of AI these days. Although the majority of approaches to XAI are of heuristic nature, recent work proposed the use of abductive reasoning for computing provably correct explanations for machine learning (ML) predictions. The proposed rigorous approach was shown to be useful not only for computing trustable explanations but also for reasoning about explanations computed heuristically. It was also applied to uncover a close relationship between XAI and verification of ML models. This talk will give an overview of advances in the formal approach to XAI as well as the use of reasoning in devising interpretable rule-based ML models including decision trees, decision sets, and decision lists.

15:00
A framework for generating informative benchmark instances
PRESENTER: Nguyen Dang

ABSTRACT. Benchmarking is an important tool for assessing the relative performance of alternative solving approaches. However, the utility of benchmarking is limited by the quantity and quality of the available problem instances. Modern constraint programming languages typically allow the specification of a class-level model that is parameterised over instance data. This separation presents an opportunity for automated approaches to generate instance data that define instances that are graded (solvable at a certain difficulty level for a solver) or can discriminate between two solving approaches. In this paper, we introduce a framework that combines these two properties to generate a large number of benchmark instances, purposely generated for effective and informative benchmarking. We use five problem classes that were used in the MiniZinc competition to demonstrate the usage of our framework. In addition to producing a ranking among solvers, our framework gives a broader understanding of the behaviour of each solver across the whole instance space, for example by finding subsets of instances where a solver's performance varies significantly from its average.
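To make the two instance properties concrete, here is a toy Python sketch of the graded/discriminating filtering idea; the two runtime functions are invented stand-ins for calls to real solvers on MiniZinc instances, and the thresholds are arbitrary.

import random

def runtime_solver_a(size, tightness):
    # hypothetical stand-in: solver A struggles on large, tight instances
    return 0.01 * size * (1 + 5 * tightness)

def runtime_solver_b(size, tightness):
    # hypothetical stand-in: solver B is less sensitive to tightness
    return 0.02 * size

TIME_LIMIT = 10.0
GRADED_WINDOW = (1.0, 10.0)      # solvable for solver A, but not trivially easy

random.seed(0)
graded, discriminating = [], []
for _ in range(200):
    inst = {"size": random.randint(10, 1000), "tightness": random.random()}
    ta, tb = runtime_solver_a(**inst), runtime_solver_b(**inst)
    if GRADED_WINDOW[0] <= ta <= GRADED_WINDOW[1]:
        graded.append(inst)                        # informative for solver A
    if min(ta, tb) <= TIME_LIMIT and max(ta, tb) >= 2 * min(ta, tb):
        discriminating.append(inst)                # clearly separates the solvers

print(len(graded), "graded and", len(discriminating), "discriminating instances")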

14:00-15:30 Session 67B: Planning + DCOP (CP)

14:00-15:00: Planning

15:00-15:30: DCOP

Location: Taub 4
14:00
Isomorphisms between STRIPS problems and sub-problems
PRESENTER: Arnaud Lequen

ABSTRACT. Determining whether two STRIPS planning instances are isomorphic is the simplest form of comparison between planning instances. It is also a particular case of the problem concerned with finding an isomorphism between a planning instance P and a sub-instance of another instance P'. One application of such an isomorphism is to efficiently produce a compiled form containing all solutions to P from a compiled form containing all solutions to P'. In this paper, we study the complexity of both problems. We show that the former is GI-complete, and can thus be solved, in theory, in quasi-polynomial time. While we prove the latter to be NP-complete, we propose an algorithm to build an isomorphism, when possible. We report extensive experimental trials on benchmark problems which demonstrate conclusively that applying constraint propagation in preprocessing can greatly improve the efficiency of a SAT solver.

14:30
Plotting: A Planning Problem With Complex Transitions

ABSTRACT. In this work we focus on a planning problem based on Plotting, a tile-matching puzzle video game published by Taito in 1989. The objective of the game is to remove at least a certain number of coloured blocks from a grid by sequentially shooting blocks into the same grid. The interest and difficulty of Plotting stem from the complex transitions after every shot: various blocks are affected directly, while others can be indirectly affected by gravity. We highlight the difficulties and inefficiencies of modelling and solving Plotting using PDDL, the de facto standard language for AI planners. We also provide two constraint models that are able to capture the inherent complexities of the problem. In addition, we provide a set of benchmark instances, an instance generator and an extensive experimental comparison demonstrating solving performance with SAT, CP, MIP and a state-of-the-art AI planner.

15:00
Completeness Matters: Towards Efficient Caching in Tree-based Synchronous Backtracking Search for DCOPs
PRESENTER: Jie Wang

ABSTRACT. Tree-based backtracking search is an important technique for solving Distributed Constraint Optimization Problems (DCOPs), where agents cooperatively exhaust the search space by branching on each variable to divide subproblems and report the results to their parent after solving each subproblem. Therefore, effectively reusing historical search results can avoid unnecessary resolutions and substantially reduce the overall overhead. However, the existing caching schemes for asynchronous algorithms cannot be applied directly to synchronous ones, since a child agent reports lower and upper bounds rather than the precise cost of its exploration. In addition, the existing caching scheme for synchronous algorithms suffers from incompleteness and low cache utilization. Therefore, we propose a new caching scheme for tree-based synchronous backtracking search, named the Retention Scheme (RS). It utilizes the upper bounds of subproblems to avoid reusing suboptimal solutions and thus ensure completeness, and it deploys a fine-grained cache information unit targeted at each child agent to improve the cache-hit rate. Furthermore, we introduce two new cache replacement schemes to further improve performance when memory is limited. Finally, we theoretically prove the completeness of our method and empirically show its superiority.

14:00-15:00 Session 67C: FSCD Invited Speaker (FSCD)
14:00
A Methodology for Designing Proof Search Calculi for Non-Classical Logics
14:00-14:30 Session 67D: Best DC Paper (Alice Tarzariol): A Model-Oriented Approach for Lifting Symmetries in Answer Set Programming (ICLP)

When solving combinatorial problems, pruning symmetric solution candidates from the search space is essential. Most of the existing approaches are instance-specific and focus on the automatic computation of Symmetry Breaking Constraints (SBCs) for each given problem instance. However, the application of such approaches to large-scale instances or advanced problem encodings might be problematic since the computed SBCs are propositional and, therefore, can neither be meaningfully interpreted nor transferred to other instances. As a result, a time-consuming recomputation of SBCs must be done before every invocation of a solver.

To overcome these limitations, we introduce a new model-oriented approach for Answer Set Programming that lifts the SBCs of small problem instances into a set of interpretable first-order constraints using a form of machine learning called Inductive Logic Programming. After targeting simple combinatorial problems, we aim to extend our method so that it also applies to advanced decision and optimization problems.

Location: Taub 9
14:00-15:30 Session 67E: Systems & Robotics (KR)
Location: Taub 2
14:00
Online Grounding of Symbolic Planning Domains in Unknown Environments
PRESENTER: Leonardo Lamanna

ABSTRACT. To effectively use an abstract (PDDL) planning domain to achieve goals in an unknown environment, an agent must instantiate such a domain with the objects of the environment and their properties. If the agent has an egocentric and partial view of the environment, it needs to act, sense, and abstract the perceived data in the planning domain. Furthermore, the agent needs to compile the plans computed by a symbolic planner into low-level actions executable by its actuators. This paper proposes a framework that realises the aforementioned perspective and allows an agent to perform different tasks. For this purpose, we integrate machine learning models to abstract the sensory data, symbolic planning for goal achievement, and path planning for navigation. We evaluate the proposed method in accurate simulated environments, where the sensors are an on-board RGB-D camera, GPS and compass.

14:30
Stream Reasoning with Cycles

ABSTRACT. Temporal specifications, such as those found in multi-agent systems, often include cyclic dependencies. Moreover, there is an increasing need to evaluate such specifications in an online manner, upon streaming data. Consider, for example, the online computation of the normative positions of the agents engaging in an e-commerce protocol. We present a formal computational framework that deals with cyclic dependencies in an efficient way. Moreover, we demonstrate the effectiveness of our framework on large synthetic and real data streams, from the fields of multi-agent systems and composite event recognition.

15:00
Symbolic Knowledge Extraction from Opaque Machine Learning Predictors: GridREx & PEDRO
PRESENTER: Roberta Calegari

ABSTRACT. Procedures aimed at explaining the outcomes and behaviour of opaque predictors are becoming more and more essential as machine learning (ML) black-box (BB) models pervade a wide variety of fields, in particular critical ones (e.g., medical or financial), where it is not possible to make decisions on the basis of a blind automatic prediction. A growing number of methods designed to overcome this BB limitation is present in the literature; however, some ML tasks, e.g., regression and clustering, are nearly or completely neglected. Furthermore, existing techniques may not be applicable in complex real-world scenarios or they can affect the output predictions with undesired artefacts.

In this paper we present the design and the implementation of GridREx, a pedagogical algorithm to extract knowledge from black-box regressors, along with PEDRO, an optimisation procedure to automate the GridREx hyper-parameter tuning phase with better results than manual tuning. We also report the results of our experiments involving the application of GridREx and PEDRO in real case scenarios, including GridREx performance assessment by using as benchmarks other similar state-of-the-art techniques. GridREx proved to be able to give more concise explanations with higher fidelity and predictive capabilities.

14:00-15:30 Session 67F: Belief Revision/RDFS (KR)
Location: Taub 3
14:00
A Minimal Deductive System for RDFS with Negative Statements
PRESENTER: Umberto Straccia

ABSTRACT. It is well-known that the triple language RDFS (Resource Description Framework Schema) is designed to represent and reason with positive statements only (e.g., “antipyretics are drugs”).

In this paper we show how to extend RDFS to express and reason with various forms of negative statements under the Open World Assumption (OWA). To do so, we start from ρdf, a minimal but significant RDFS fragment that covers all essential features of RDFS, and then extend it to ρdf⊥¬, which also allows expressing statements such as “radio therapies are non-drug treatments”, “Ebola has no treatment”, or “opioids and antipyretics are disjoint classes”.

The main and, to the best of our knowledge, unique features of our proposal are: (i) ρdf⊥¬ remains syntactically a triple language by extending ρdf with new symbols with specific semantics and there is no need to revert to the reification method to represent negative triples; (ii) the logic is defined in such a way that any RDFS reasoner/store may handle the new predicates as ordinary terms if it does not want to take account of the extra capabilities; (iii) despite negated statements, every ρdf⊥¬ knowledge base is satisfiable; (iv) the ρdf⊥¬ entailment decision procedure is obtained from ρdf via additional inference rules favouring a potential implementation; and (v) deciding entailment in ρdf⊥¬ ranges from P to NP.

14:30
Hyperintensional Partial Meet Contractions

ABSTRACT. Formal frameworks for Epistemology need to have enough logical structure to enable interesting conclusions regarding epistemic phenomena and be expressive enough to model competing positions in the philosophical and logical literature. While beliefs are commonly accepted as hyperintensional attitudes, i.e., epistemic attitudes which may differ even towards necessarily equivalent sentences, most work on standard epistemic logic has relied on idealised and intensional agents. This is particularly true in the area of AGM-inspired Belief Change. Although a few recent studies investigate hyperintensional models of belief change, few have been well connected to the AGM framework, the main paradigm in the area. This work investigates hyperintensional notions of belief base contraction and belief set contraction, as studied in the AGM framework, and their connections to partial meet contractions. We also provide suitable representation theorems, characterising the constructions by means of rationality postulates.

15:00
On the Representation of Darwiche and Pearl’s Epistemic States for Iterated Belief Revision
PRESENTER: Nicolas Schwind

ABSTRACT. The seminal characterization of iterated belief revision was proposed by Darwiche and Pearl, which uses an abstract notion of epistemic states. In this work we look for a canonical representation of these epistemic states. Total preorders are not expressive enough to be used as such a canonical representation. In fact, we show that some operators cannot even be represented on a countable epistemic space. Nonetheless, under a very reasonable assumption on the epistemic space, we show that OCFs (Ordinal Conditional Functions) can be considered as a canonical representation.

14:00-15:30 Session 67G: FOL, SOL and Model Theory (LICS)

"FOL, SOL and Model Theory": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
14:00
The amazing mixed polynomial closure and its applications to two-variable first-order logic

ABSTRACT. Polynomial closure is a standard operator which is applied to a class of regular languages. In the paper, we investigate three restrictions called left (LPol), right (RPol) and mixed polynomial closure (MPol). The first two were known while MPol is new. We look at two decision problems that are defined for every class C. Membership takes a regular language as input and asks if it belongs to C. Separation takes two regular languages as input and asks if there exists a third language in C including the first one and disjoint from the second. We prove that LPol, RPol and MPol preserve the decidability of membership under mild hypotheses on the input class, and the decidability of separation under much stronger hypotheses. We apply these results to natural hierarchies.

First, we look at several language theoretic hierarchies that are built by applying LPol, RPol and MPol recursively to a single input class. We prove that these hierarchies can actually be defined using almost exclusively MPol. We also consider quantifier alternation hierarchies for two-variable first-order logic (FO2) and prove that one can climb them using MPol. The result is generic in the sense that it holds for most standard choices of signatures. We use it to prove that for most of these choices, membership is decidable for all levels in the hierarchy. Finally, we prove that separation is decidable for the hierarchy of two-variable first-order logic equipped with only the linear order (FO2(<)).

14:15
The Regular Languages of First-Order Logic with One Alternation
PRESENTER: Corentin Barloy

ABSTRACT. The regular languages with a neutral letter expressible in first-order logic with one alternation are characterized. Specifically, it is shown that if an arbitrary \(\Sigma_2\) formula defines a regular language with a neutral letter, then there is an equivalent \(\Sigma_2\) formula that only uses the order predicate. This shows that the so-called Central Conjecture of Straubing holds for \(\Sigma_2\) over languages with a neutral letter, the first progress on the Conjecture in more than 20 years. To show the characterization, lower bounds against polynomial-size depth-3 Boolean circuits with constant top fan-in are developed. The heart of the combinatorial argument resides in studying how positions within a language are determined from one another, a technique of independent interest.

14:30
Zero-One Laws and Almost Sure Valuations of First-Order Logic in Semiring Semantics
PRESENTER: Matthias Naaf

ABSTRACT. Semiring semantics evaluates logical statements by values in some commutative semiring K. Random semiring interpretations, induced by a probability distribution on K, generalise random structures, and we investigate here the question of how classical results on first-order logic on random structures, most importantly the 0-1 laws of Glebskii et al. and Fagin, generalise to semiring semantics. For positive semirings, the classical 0-1 law implies that every first-order sentence is, asymptotically, either almost surely evaluated to 0 by random semiring interpretations, or almost surely takes only values different from 0. However, by means of a more sophisticated analysis, based on appropriate extension properties and on algebraic representations of first-order formulae, we can prove much stronger results.

For many semirings K, the first-order sentences can be partitioned into classes F(k), such that for each element k of K, every sentence in F(k) evaluates almost surely to k under random semiring interpretations. Further, for finite or infinite lattice semirings, this partition actually collapses to just three classes F(0), F(1) and F(e) of sentences that, respectively, almost surely evaluate to 0, 1, and to the smallest positive value e. For all other values k in K we have that F(k) is empty. The problem of computing the almost sure valuation of a first-order sentence on finite lattice semirings is PSPACE-complete.

14:45
Geometric decision procedures and the VC dimension of linear arithmetic theories
PRESENTER: Alessio Mansutti

ABSTRACT. This paper resolves two open problems on linear integer arithmetic (LIA), also known as Presburger arithmetic. First, we give a triply exponential geometric decision procedure for LIA, i.e., a procedure based on manipulating semilinear sets. This matches the running time of the best quantifier elimination and automata-based procedures. Second, building upon our first result, we give a doubly exponential upper bound on the Vapnik–Chervonenkis (VC) dimension of sets definable in LIA, proving a conjecture of D. Nguyen and I. Pak [Combinatorica 39, pp. 923–932, 2019].

These results partially rely on an analysis of sets definable in linear real arithmetic (LRA), and analogous results for LRA are also obtained. At the core of these developments are new decomposition results for semilinear and R-semilinear sets, the latter being the sets definable in LRA. These results yield new algorithms to compute the complement of (R-)semilinear sets that do not cause a non-elementary blowup when repeatedly combined with procedures for other Boolean operations and projection. The existence of such an algorithm for semilinear sets has been a long-standing open problem.

15:00
A direct computational interpretation of second-order arithmetic via update recursion

ABSTRACT. Second-order arithmetic has two kinds of computational interpretations: via Spector's bar recursion or via Girard's polymorphic lambda-calculus. Bar recursion interprets the negative translation of the axiom of choice which, combined with an interpretation of the negative translation of the excluded middle, gives a computational interpretation of the negative translation of the axiom scheme of specification. It is then possible to instantiate universally quantified sets with arbitrary formulas (second-order elimination). On the other hand, polymorphic lambda-calculus interprets second-order elimination directly by means of polymorphic types. The present work aims at bridging the gap between these two interpretations by interpreting second-order elimination directly through update recursion, which is a variant of bar recursion.

15:15
When Locality Meets Preservation

ABSTRACT. This paper investigates the expressiveness of a fragment of first-order sentences in Gaifman normal form, namely the positive Boolean combinations of basic local sentences. We show that they match exactly the first-order sentences preserved under local elementary embeddings, thus providing a new general preservation theorem and extending the Łoś-Tarski Theorem. This full preservation result fails as usual in the finite, and we show furthermore that the naturally related decision problems are undecidable. In the more restricted case of preservation under extensions, it nevertheless yields new well-behaved classes of finite structures: we show that preservation under extensions holds if and only if it holds locally.

14:00-15:30 Session 67H: SAT / PBS (SAT)
14:00
Introducing Intel(R) SAT Solver

ABSTRACT. We introduce Intel(R) SAT Solver (IntelSAT) -- a new open-source CDCL SAT solver, written from scratch. IntelSAT is optimized for applications which generate many mostly satisfiable incremental SAT queries. We apply the following Incremental Lazy Backtracking (ILB) principle: in-between incremental queries, backtrack only when necessary and to the highest possible decision level. ILB is enabled by a novel reimplication procedure, which can reimply an assigned literal at a lower level without backtracking. Reimplication also helped us to restore the following two properties, lost in modern solvers with the introduction of chronological backtracking: no assigned literal can be implied at a lower level, and conflict analysis always starts with a clause falsified at the lowest possible level. In addition, we apply some new heuristics. Integrating IntelSAT into the MaxSAT solver TT-Open-WBO-Inc resulted in a significant performance boost on incomplete unweighted MaxSAT Evaluation benchmarks and improved the state-of-the-art in anytime unweighted MaxSAT solving.

14:30
Improvements to the Implicit Hitting Set Approach to Pseudo-Boolean Optimization
PRESENTER: Matti Järvisalo

ABSTRACT. The development of practical approaches for efficiently reasoning over pseudo-Boolean constraints has recently received increasing attention as a natural generalization of SAT solving. Analogously, solvers for pseudo-Boolean optimization (PBO) draw inspiration from solving techniques developed for maximum satisfiability (MaxSAT) solving. Recently, the first practical solver lifting the implicit hitting set (IHS) approach---one of the most efficient approaches in modern MaxSAT solving---to the realm of PBO was developed, employing a PB solver as a core extractor together with an integer programming solver as a hitting set solver. In this work, we make practical improvements to the IHS approach to PBO. We propose the integration of solution-improving search into the PBO-IHS approach, resulting in a hybrid approach to PBO that makes use of both types of search towards an optimal solution. Furthermore, we explore the potential of variations of core extraction within PBO-IHS---including recent advances in PB core extraction, allowing for extracting more general PB constraints compared to the at-least-one constraints typically relied on in IHS---in speeding up PBO-IHS search. We validate these ideas by showing that the empirical efficiency of PBO-IHS---recently shown to outperform other specialized PBO solvers---is further improved by the integration of these techniques.
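For readers unfamiliar with the IHS scheme the abstract builds on, the following toy Python sketch shows the basic core-extraction / hitting-set loop; the brute-force routines stand in for the PB core extractor and the integer programming hitting-set solver used by PBO-IHS, and the tiny example problem is invented.

from itertools import combinations, product

VARS = ["x1", "x2", "x3"]
HARD = [{"x1", "x2"}, {"x2", "x3"}]   # hard clauses: at least one var set to 1
SOFTS = set(VARS)                     # soft constraints: each var should stay 0

def satisfiable_with(relaxed):
    """Is there a model of HARD in which every non-relaxed variable is 0?"""
    fixed_zero = SOFTS - relaxed
    for bits in product([0, 1], repeat=len(VARS)):
        model = dict(zip(VARS, bits))
        if any(model[v] for v in fixed_zero):
            continue
        if all(any(model[v] for v in clause) for clause in HARD):
            return model
    return None

def extract_core(relaxed):
    """Return a set of non-relaxed softs that cannot all be kept (toy extractor)."""
    fixed_zero = SOFTS - relaxed
    for clause in HARD:
        if clause <= fixed_zero:      # clause forced false -> these softs conflict
            return clause

def min_hitting_set(cores):
    """Brute-force minimum hitting set (the IP solver's job in PBO-IHS)."""
    universe = sorted(set().union(*cores))
    for size in range(len(universe) + 1):
        for cand in combinations(universe, size):
            if all(set(cand) & core for core in cores):
                return set(cand)

cores, hitting = [], set()
while True:
    model = satisfiable_with(hitting)
    if model is not None:             # hitting-set cost matches, so it is optimal
        print("optimal model:", model, "cost:", len(hitting))
        break
    cores.append(extract_core(hitting))
    hitting = min_hitting_set(cores)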

15:00
Certified CNF Translations for Pseudo-Boolean Solving
PRESENTER: Andy Oertel

ABSTRACT. The dramatic improvements in Boolean satisfiability (SAT) solving since the turn of the millennium have made it possible to leverage state-of-the-art conflict-driven clause learning (CDCL) solvers for many combinatorial problems in academia and industry, and the use of proof logging has played a crucial role in increasing the confidence that the results these solvers produce are correct. However, the conjunctive normal form (CNF) format used for SAT proof logging means that it has not been possible to extend guarantees of correctness to the use of SAT solvers for more expressive combinatorial paradigms, where the first step is to translate the input to CNF. In this work, we show how cutting-planes-based reasoning can provide proof logging for solvers that translate pseudo-Boolean (a.k.a. 0-1 integer linear) decision problems to CNF and then run CDCL. To support a wide range of encodings, we provide a uniform and easily extensible framework for proof logging of CNF translations. We are hopeful that this is just a first step towards providing a unified proof logging approach that will also extend to maximum satisfiability (MaxSAT) solving and pseudo-Boolean optimization in general.

14:30-15:00 Session 68: 10-Year Test-of-Time Award (Torsten Schaub, Max Ostrowski): From "ASP modulo CSP" to "ASP modulo X", or how clingcon paved the way for clingo[X] systems (ICLP)

From "ASP modulo CSP" to "ASP modulo X", or how clingcon paved the way for clingo[X] systems

Abstract: Clingcon extends the ASP system Clingo with linear constraints over integers. As such, its series of variants served as a consecutive design study on how to extend ASP systems with foreign inferences. Meanwhile, this has culminated in the generic theory reasoning framework of the ASP system clingo. We will sketch the evolution of this technology, look at its current state, and emphasise the role of semantics in view of a successful outcome.

 

Location: Taub 9
15:00-15:30 Session 69A: Complexity Theory and Logic (FSCD)
15:00
mwp-Analysis Improvement and Implementation: Realizing Implicit Computational Complexity
PRESENTER: Thomas Seiller

ABSTRACT. Implicit Computational Complexity (ICC) drives better understanding of complexity classes, but it also guides the development of resource-aware languages and static source code analyzers. Among the methods developed, the mwp-flow analysis certifies polynomial bounds on the size of the values manipulated by an imperative program. This result is obtained by bounding the transitions between states instead of focusing on states in isolation, as most static analyzers do, and is not concerned with termination or tight bounds on values. Those differences, along with its built-in compositionality, make the mwp-flow analysis a good target for determining how ICC-inspired techniques diverge compared with more traditional static analysis methods. This paper's contributions are three-fold: we fine-tune the internal machinery of the original analysis to make it tractable in practice; we extend the analysis to function calls and leverage its machinery to compute the result of the analysis efficiently; and we implement the resulting analysis as a lightweight tool to automatically perform data-size analysis of C programs. This documented effort prepares and enables the development of certified complexity analysis, by transforming a costly analysis into a tractable program that, furthermore, decorrelates the problem of deciding whether a bound exists from the problem of computing it.

15:00-15:30 Session 69B: Semantics (ICLP)
Location: Taub 9
15:00
ASP-Based Declarative Process Mining

ABSTRACT. We propose Answer Set Programming (ASP) as an approach for modeling and solving problems from the area of Declarative Process Mining (DPM). We consider here three classical problems, namely, Log Generation, Conformance Checking, and Query Checking. These problems are addressed from both a control-flow and a data-aware perspective. The approach is based on the representation of process specifications as (finite-state) automata. Since these are strictly more expressive than the de facto DPM standard specification language DECLARE, more general specifications than those typical of DPM can be handled, such as formulas in linear-time temporal logic over finite traces.
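As a plain-Python illustration of the automaton-based view mentioned above (the paper itself encodes everything in ASP), the sketch below represents a DECLARE-style response constraint ("every request is eventually followed by a response") as a finite automaton and checks a log trace for conformance; the event names and automaton are invented.

ACCEPTING = {"ok"}
TRANSITIONS = {
    ("ok", "request"): "pending",     # a request opens an obligation
    ("ok", "response"): "ok",
    ("pending", "request"): "pending",
    ("pending", "response"): "ok",    # the obligation is discharged
}

def conforms(trace, start="ok"):
    """Run the automaton over the trace; accept iff it ends in an accepting state."""
    state = start
    for event in trace:
        # events not mentioned by the constraint leave the state unchanged
        state = TRANSITIONS.get((state, event), state)
    return state in ACCEPTING

print(conforms(["request", "pay", "response"]))   # True: the request is answered
print(conforms(["request", "pay"]))               # False: the obligation stays open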

15:15
Implementing Stable-Unstable Semantics with ASPTOOLS and Clingo

ABSTRACT. Normal logic programs subject to stable model semantics cover reasoning problems from the first level of the polynomial time hierarchy (PH) in a natural way. Disjunctive programs reach one level beyond this, but the access to the underlying NP oracle(s) is somewhat implicit and available to the programmer via the so-called saturation technique. To address this shortcoming, stable-unstable semantics was proposed, making oracles explicit as subprograms having no stable models. If this idea is applied recursively, any level of PH can be reached with normal programs only, in analogy to quantified Boolean formulas (QBFs). However, for the moment, no native implementations of stable-unstable semantics have emerged except via translations toward QBFs. In this work, we alleviate this situation with a translation of (effectively) normal programs that combines a main program with any fixed number of oracles subject to stable-unstable semantics. The result is a disjunctive program that can be fed as input to answer set solvers supporting disjunctive programs. The idea is to hide saturation from the programmer altogether, although it is exploited by the translation internally. The translation of oracles is performed using translators and linkers from the ASPTOOLS collection while Clingo is used as the back-end solver.

15:30-16:00Coffee Break
16:00-17:00 Session 70: Plenary (FLoC)
16:00
Complexity Measures for Reactive Systems

ABSTRACT. A reactive system maintains an on-going interaction with its environment. At each moment in time, the system receives from the environment an assignment to the input signals and responds with an assignment to the output signals. The system is correct with respect to a given specification if, for all environments, the infinite computation that is generated by the interaction between the system and the environment satisfies the specification. Reactive systems are modeled by transducers: finite state machines whose transitions are labeled by assignments to the input signals and whose states are labeled by assignments to the output signals. In the synthesis problem, we automatically transform a given specification into a correct transducer.

While complexity measures receive much attention in the design of algorithms, they are less explored in the context of synthesis. This is a serious drawback: just as we cannot imagine an algorithm designer that accepts a correct algorithm without checking its complexity, we cannot expect designers of reactive systems to accept synthesized systems that are only guaranteed to be correct. It is not clear, however, how to define the complexity of a transducer. Unlike the case of algorithms (or Turing machines in general), there is no "size of input" to relate to, and measures like time and space complexity are irrelevant. Indeed, we care for on-going interactions, along which the transducer reacts instantly according to its transition function. One immediate complexity measure for a transducer is its size, but many more complexity measures are of interest. The talk discusses such measures and describes how the search for optimal reactive systems affects the synthesis problem.

17:00-17:30 Piano concert (KR)

Program:

  • Mozart Sonata n°3 K. 281 B-flat major
  • Liszt Rhapsody n°7
  • Gershwin Three Preludes

Bio: François Schwarzentruber is an associate professor at École normale supérieure de Rennes. He took piano lessons with Suzanne Muller-Gunst. He performed Beethoven's piano concertos n° 3 and 4 with the orchestra of the Institut national polytechnique (INP) of Toulouse, and Gershwin's Rhapsody in Blue with the university orchestra of Rennes. He has given several concerts, in the public library Champs Libres in Rennes, in retirement homes, and also in festivals. He won the "special Mozart" and "Amateurs virtuoses" awards at the International Ile-de-France Piano Competition (amateurs). He also composes, mostly short pieces for piano.