
08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 44A: ML (CP)
Location: Taub 7
Learning MAX-SAT Models from Examples using Genetic Algorithms and Knowledge Compilation
PRESENTER: Senne Berden

ABSTRACT. Many real-world problems can be effectively solved by means of combinatorial optimization. However, appropriate models to give to a solver are not always available, and sometimes must be learned from historical data. Although some research has been done in this area, the task of learning (weighted partial) MAX-SAT models has not received much attention thus far, even though such models can be used in many real-world applications. Furthermore, most existing work is limited to learning models from non-contextual data, where instances are labeled as solutions and non-solutions, but without any specification of the contexts in which those labels apply. A recent approach named HASSLE-SLS has addressed these limitations: it can jointly learn hard constraints and weighted soft constraints from labeled contextual examples. However, it is hindered by long runtimes, as evaluating even a single candidate MAX-SAT model requires solving as many models as there are contexts in the training data, which quickly becomes highly expensive when the size of the model increases. In this work, we address these runtime issues. To this end, we make two contributions. First, we propose a faster model evaluation procedure that makes use of knowledge compilation. Second, we propose a genetic algorithm named HASSLE-GEN that decreases the number of evaluations needed to find good models. We experimentally show that both contributions improve on the state of the art by speeding up learning, which in turn allows higher-quality MAX-SAT models to be found within a given learning time budget.

Selecting SAT Encodings for Pseudo-Boolean and Linear Integer Constraints

ABSTRACT. Many constraint satisfaction and optimisation problems can be solved effectively by encoding them as instances of the Boolean Satisfiability problem (SAT). However, even the simplest types of constraints have many encodings in the literature with widely varying performance, and the problem of selecting suitable encodings for a given problem instance is not trivial. We explore the problem of selecting encodings for pseudo-Boolean and linear constraints using a supervised machine learning approach. We show that it is possible to select encodings effectively using a standard set of features for constraint problems; however we obtain better performance with a new set of features specifically designed for the pseudo-Boolean and linear constraints. In fact, we achieve good results when selecting encodings for unseen problem classes. Our results compare favourably to AutoFolio when using the same feature set. We discuss the relative importance of instance features to the task of selecting the best encodings, and compare several variations of the machine learning method.
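Per-instance selection of this kind can be illustrated with a deliberately minimal sketch: a 1-nearest-neighbour selector that, given feature vectors of training instances labelled with their best-performing encoding, picks the encoding of the closest training instance. This is not the paper's method (which uses trained machine learning models over richer feature sets); the feature values and encoding names below are made up for illustration.

```python
import math

def select_encoding(train, features):
    """Pick an encoding for a new instance via 1-nearest-neighbour
    over instance feature vectors.

    train:    list of (feature_vector, best_encoding) pairs.
    features: feature vector of the new, unseen instance."""
    _, best = min(train, key=lambda pair: math.dist(pair[0], features))
    return best

# Hypothetical training data: two features per instance, labelled with
# the encoding that performed best on it (names are illustrative only).
train = [
    ((1.0, 0.0), "totalizer"),
    ((0.0, 1.0), "bdd"),
    ((5.0, 5.0), "swc"),
]
```

In practice the feature vectors would be normalised and the model validated on held-out problem classes, as the abstract describes for unseen classes.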

Improved Sample Complexity Bounds for Branch-and-Cut
PRESENTER: Siddharth Prasad

ABSTRACT. The branch-and-cut algorithm for integer programming has a wide variety of tunable parameters that have a huge impact on its performance, but which are challenging to tune by hand. An increasingly popular approach is to use machine learning to configure these parameters based on a training set of integer programs from the application domain. We bound how large the training set should be to ensure that for any configuration, its average performance over the training set is close to its expected future performance. Our guarantees apply to parameters that control the most important aspects of branch-and-cut: node selection, branching constraint selection, and cut selection, and are sharper and more general than those from prior research.

09:00-10:30 Session 44B: Theory (CP)
Location: Taub 4
CSP Beyond Tractable Constraint Languages
PRESENTER: Stefan Szeider

ABSTRACT. The constraint satisfaction problem (CSP) is among the most studied computational problems. While NP-hard, many tractable subproblems have been identified (Bulatov 2017, Zhuk 2017). Backdoors, introduced by Williams, Gomes, and Selman (2003), gradually extend such a tractable class to all CSP instances of bounded distance to the class. Backdoor size provides a natural but rather crude distance measure between a CSP instance and a tractable class. Backdoor depth, introduced by Mählmann, Siebertz, and Vigny (2021) for SAT, is a more refined distance measure, which admits the parallel utilization of different backdoor variables. Bounded backdoor size implies bounded backdoor depth, but there are instances of constant backdoor depth and arbitrarily large backdoor size. Dreier, Ordyniak, and Szeider (2022) provided fixed-parameter algorithms for finding backdoors of small depth into Horn and 2CNF.

In this paper, we consider backdoor depth for CSP. We consider backdoors w.r.t. tractable subproblems $C_\Gamma$ of the CSP defined by a constraint language $\Gamma$, i.e., where all the constraints use relations from the language $\Gamma$. Building upon Dreier et al.’s game-theoretic approach and their notion of separator obstructions, we show that for any finite, tractable, semi-conservative constraint language $\Gamma$, the CSP is fixed-parameter tractable parameterized by the backdoor depth into $\Gamma$ plus the domain size.

With backdoors of low depth, we reach classes of instances that require backdoors of arbitrarily large size. Hence, our results strictly generalize several known results for CSP that are based on backdoor size.

On Redundancy in Constraint Satisfaction Problems

ABSTRACT. A constraint language G has non-redundancy f(n) if every instance of CSP(G) with n variables contains at most f(n) non-redundant constraints. If G has maximum arity r then it has non-redundancy O(n^r), but there are notable examples for which this upper bound is far from the best possible. In general, the non-redundancy of constraint languages is poorly understood and little is known beyond the trivial bounds Omega(n) and O(n^r).

In this paper, we introduce an elementary algebraic framework dedicated to the analysis of the non-redundancy of constraint languages. This framework relates redundancy-preserving reductions between constraint languages to closure operators known as pattern partial polymorphisms, which can be interpreted as generic mechanisms to generate redundant constraints in CSP instances. We illustrate the power of this framework by deriving a simple characterisation of languages of arity r having non-redundancy Theta(n^r).

Fixed-Template Promise Model Checking Problems
PRESENTER: Kristina Asimi

ABSTRACT. The fixed-template constraint satisfaction problem (CSP) can be seen as the problem of deciding whether a given primitive positive first-order sentence is true in a fixed structure (also called model). We study a class of problems that generalizes the CSP simultaneously in two directions: we fix a set L of quantifiers and Boolean connectives, and we specify two versions of each constraint, one strong and one weak. Given a sentence which only uses symbols from L, the task is to distinguish whether the sentence is true in the strong sense, or it is false even in the weak sense. We classify the computational complexity of these problems for the existential positive equality-free fragment of the first-order logic, i.e., L={∃,∧,∨}, and we prove some upper and lower bounds for the positive equality-free fragment, L={∃,∀,∧,∨}. The partial results are sufficient, e.g., for all extensions of the latter fragment.

09:00-10:30 Session 44F: Temporal and Data Logic, Linear Recurrences and Equation Systems (LICS)

"Temporal and Data Logic, Linear Recurrences and Equation Systems": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
Temporal Team Semantics Revisited

ABSTRACT. In this paper, we study a novel approach to asynchronous hyperproperties by reconsidering the foundations of temporal team semantics. We consider three logics: TeamLTL, TeamCTL and TeamCTL*, which are obtained by adding quantification over so-called time evaluation functions controlling the asynchronous progress of traces. We then relate synchronous TeamLTL to our new logics and show how it can be embedded into them. We show that the model checking problem for existential TeamCTL with Boolean disjunctions is highly undecidable by encoding recurrent computations of non-deterministic 2-counter machines. Finally, we present a translation of TeamCTL* to Alternating Asynchronous Büchi Automata and obtain decidability results for the path checking problem as well as restricted variants of the model checking and satisfiability problems.

Deciding Hyperproperties Combined with Functional Specifications
PRESENTER: Jana Hofmann

ABSTRACT. We study satisfiability for HyperLTL with a ∀∗∃∗ quantifier prefix, known to be highly undecidable in general. HyperLTL can express system properties that relate multiple traces (so-called hyperproperties), which are often combined with trace properties that specify functional behavior on single traces. Following this conceptual split, we first define several safety and liveness fragments of ∀∗∃∗ HyperLTL, and characterize the complexity of their (often much easier) satisfiability problem. We then add LTL trace properties as functional specifications. Though (highly) undecidable in many cases, this way of combining “simple” HyperLTL and arbitrary LTL also leads to interesting new decidable fragments. This systematic study of ∀∗∃∗ fragments is complemented by a new algorithm for ∀∃∗-HyperLTL satisfiability, which is incomplete but often successful.

Reasoning on Data Words over Numeric Domains

ABSTRACT. We introduce parametric semilinear data logic (pSDL) for reasoning about data words with numeric data. The logic allows parameters, and Presburger guards on the data and on the Parikh image of equivalence classes (i.e. data counting), allowing us to capture data languages like: (1) each data value occurs at most once in the word and is an even number, (2) the subset of the positions containing data values divisible by 4 has the same number of a's and b's, (3) the data value with the highest frequency in the word is divisible by 3, and (4) each data value occurs at most once, and the set of data values forms an interval. We provide decidability and complexity results for the problem of membership and satisfiability checking over these models. In contrast to two-variable logic of data words and data automata (which also permit a form of data counting but no arithmetic over numeric domains, and have incomparable expressivity), pSDL has elementary complexity of satisfiability checking. We show interesting potential applications of our models in databases and verification.

Solvability of orbit-finite systems of linear equations
PRESENTER: Sławomir Lasota

ABSTRACT. We study orbit-finite systems of linear equations, in the setting of sets with atoms. Our principal contribution is a decision procedure for solvability of such systems. The procedure works for every field (and even commutative ring) under mild effectiveness assumptions, and reduces a given orbit-finite system to a number of finite ones: exponentially many in general, but polynomially many when atom dimension of input systems is fixed. Towards obtaining the procedure we push further the theory of vector spaces generated by orbit-finite sets, and show that each such vector space admits an orbit-finite basis. This fundamental property is a key tool in our development, but should be also of wider interest.

Computing the Density of the Positivity Set for Linear Recurrence Sequences

ABSTRACT. The set of indices that correspond to the positive entries of a sequence of numbers is called its positivity set. In this paper, we study the density of the positivity set of a given linear recurrence sequence, that is, the question of how much more frequent the positive entries are compared to the non-positive ones. We show that one can compute this density to arbitrary precision, as well as decide whether it is equal to zero (or one). If the sequence is diagonalisable, we prove that its positivity set is finite if and only if its density is zero. Lastly, arithmetic properties of densities are treated; in particular, we prove that it is decidable whether the density is a rational number, given that the recurrence sequence has at most one pair of dominant complex roots.

On the Skolem Problem and the Skolem Conjecture
PRESENTER: James Worrell

ABSTRACT. It is a longstanding open problem whether there is an algorithm to decide the Skolem Problem for linear recurrence sequences (LRS) over the integers, namely whether a given such sequence $\langle u_n\rangle_{n=0}^\infty$ has a zero term (i.e., whether $u_n=0$ for some $n$). A major breakthrough in the early 1980s established decidability for LRS of order four or less, i.e., for LRS in which every new term depends linearly on the previous four (or fewer) terms. The Skolem Problem for LRS of order $5$ or more, in particular, remains a major open challenge to this day.

Our main contributions in this paper are as follows:

First, we show that the Skolem Problem is decidable for \emph{reversible} LRS of order $7$ or less. (An integer LRS $\langle u_n \rangle_{n=0}^{\infty}$ is reversible if its unique extension to a bi-infinite LRS $\langle u_n \rangle_{n=-\infty}^{\infty}$ also takes exclusively integer values; a typical example is the classical Fibonacci sequence, whose bi-infinite extension is $\langle \ldots, 5, -3, 2 , -1, 1, 0, 1, 1, 2, 3, 5, \ldots \rangle$.)

Second, assuming the \emph{Skolem Conjecture} (a central hypothesis in Diophantine analysis, also known as the \emph{Exponential Local-Global Principle}), we show that the Skolem Problem for LRS of order $5$ is decidable, and exhibit a concrete procedure for solving it.

09:00-10:30 Session 44G: QBF-1 (SAT)
Classes of Hard Formulas for QBF Resolution
PRESENTER: Olaf Beyersdorff

ABSTRACT. To date, we know only a few handcrafted quantified Boolean formulas (QBFs) that are hard for central QBF resolution systems such as Q-Res and QU-Res, and only one specific QBF family to separate Q-Res and QU-Res. Here we provide a general method to construct hard formulas for Q-Res and QU-Res. The construction uses simple propositional formulas (e.g. minimally unsatisfiable formulas) in combination with easy QBF gadgets (\Sigma_2^b formulas without constant winning strategies). This leads to a host of new hard formulas, including new classes of hard random QBFs. We further present generic constructions for formulas separating Q-Res and QU-Res, and for separating Q-Res and long-distance-Q-Res.

Should decisions in QCDCL follow prefix order?
PRESENTER: Benjamin Böhm

ABSTRACT. Quantified conflict-driven clause learning (QCDCL) is one of the main solving approaches for quantified Boolean formulas (QBF). One of the differences between QCDCL and propositional CDCL is that QCDCL typically follows the prefix order of the QBF for making decisions. We investigate an alternative model for QCDCL solving where decisions can be made in arbitrary order. The resulting system QCDCL-ANY is still sound and terminating, but does not necessarily allow asserting clauses or cubes to always be learned. To address this potential drawback, we additionally introduce two subsystems that guarantee to always learn asserting clauses (QCDCL-UNI-ANY) and asserting cubes (QCDCL-EXI-ANY), respectively. We model all four approaches by formal proof systems and show that QCDCL-UNI-ANY is exponentially better than QCDCL on false formulas, whereas QCDCL-EXI-ANY is exponentially better than QCDCL on true QBFs. Technically, this involves constructing specific QBF families and showing lower and upper bounds in the respective proof systems. We complement our theoretical study with some initial experiments that confirm our theoretical findings.

Changing Partitions in Rectangle Decision Lists

ABSTRACT. Rectangle decision lists are a form of decision lists that were recently shown to have applications in the proof complexity of certain OBDD-based QBF-solvers. We consider a version of rectangle decision lists with changing partitions, which corresponds to QBF-solvers that may change the variable order of the OBDDs they produce. We show that even allowing one single partition change generally leads to exponentially more succinct decision lists. More generally, we show that there is a succinctness hierarchy: for every $k\in \mathbb{N}$, when going from $k$ partition changes to $k+1$, there are functions that can be represented exponentially more succinctly. As an application, we show a similar hierarchy for OBDD-based QBF-solvers, which shows that such solvers could in principle be far more efficient if changing the variable orders of the constructed OBDDs is allowed.

09:15-10:15 Session 45A: FSCD Invited Speaker (FSCD)
Cutting a Proof into Bite-Sized Chunks: Incrementally Proving Termination in Higher-Order Term Rewriting
09:15-10:30 Session 45B: Semantics (ICLP)
Location: Taub 9
Treewidth-aware Reductions of Normal ASP to SAT – Is Normal ASP Harder than SAT after All?

ABSTRACT. Answer Set Programming (ASP) is an often-invoked paradigm and problem modeling and solving toolkit for KR. There are plenty of results dedicated to studying the hardness of (fragments of) ASP. So far, these studies resulted in characterizations in terms of computational complexity as well as in fine-grained insights presented in the form of dichotomy-style results, lower bounds when translating to other formalisms like Boolean satisfiability (SAT), and even detailed parameterized complexity landscapes. A quite generic and prominent parameter in parameterized complexity originating from graph theory is the so-called treewidth, which in a sense captures the structural density of a program. Recently, there has been an increase in the number of treewidth-based solvers related to SAT. While several translations from (normal) ASP to SAT exist, there is as yet no reduction that preserves treewidth, or is at least aware of the treewidth increase. This paper deals with a novel reduction from normal ASP to SAT that is aware of the treewidth and guarantees that a slight increase of treewidth is indeed sufficient. Then, we also establish that, when considering treewidth, already the fragment of normal ASP is slightly harder than SAT (under reasonable assumptions in computational complexity). This also confirms that our reduction probably cannot be significantly improved and that the slight increase of treewidth is unavoidable.

Inference and Learning with Model Uncertainty in Probabilistic Logic Programs
PRESENTER: Victor Verreet

ABSTRACT. An issue that has received limited attention in probabilistic logic programming (PLP) is the modeling of so-called epistemic uncertainty, the uncertainty about the model itself. Accurately quantifying this uncertainty is paramount to robust inference, learning and ultimately decision making. We introduce BetaProbLog, a PLP language that can model epistemic uncertainty. BetaProbLog has sound semantics, an effective inference algorithm that combines Monte Carlo techniques with knowledge compilation, and a parameter learning algorithm.

Tractable Reasoning using Logic Programs with Intensional Concepts

ABSTRACT. Recent developments triggered by initiatives such as the Semantic Web, Linked Open Data, the Web of Things, and geographic information systems resulted in the wide and increasing availability of machine-processable data and knowledge in the form of data streams and knowledge bases. Applications building on such knowledge require reasoning with modal and intensional concepts, such as time, space, and obligations, that are defeasible. E.g., in the presence of data streams, conclusions may have to be revised due to newly arriving information. The current literature features a variety of domain-specific formalisms that allow for defeasible reasoning using specific intensional concepts. However, many of these formalisms are computationally intractable and limited to one of the mentioned application domains. In this paper, we define a general method for obtaining defeasible inferences over intensional concepts, and we study conditions under which these inferences are computable in polynomial time.

Towards Dynamic Consistency Checking in Goal-directed Predicate Answer Set Programming
PRESENTER: Joaquin Arias

ABSTRACT. Goal-directed evaluation of Answer Set Programs is gaining traction thanks to its amenability to creating AI systems that can, due to the evaluation mechanism used, generate explanations and justifications. s(CASP) is one of these systems and has already been used to write reasoning systems in several fields. It provides enhanced expressiveness w.r.t. other ASP systems due to its ability to use constraints, data structures, and unbound variables natively. However, the performance of existing s(CASP) implementations is not on par with other ASP systems: model consistency is checked once models have been generated, in keeping with the generate-and-test paradigm. In this work, we present a variation of the top-down evaluation strategy, termed Dynamic Consistency Checking, which interleaves model generation and consistency checking. This makes it possible to determine when a literal is not compatible with the denials associated with the global constraints in the program, prune the current execution branch, and choose a different alternative. This strategy is especially (but not exclusively) relevant in problems with a high combinatorial component. We have experimentally observed speedups of up to 90x w.r.t. the standard versions of s(CASP).

ApproxASP – A Scalable Approximate Answer Set Counter
PRESENTER: Mohimenul Kabir

ABSTRACT. Answer Set Programming (ASP) is a framework in artificial intelligence and knowledge representation for declarative modeling and problem solving. Modern ASP solvers focus on the computation or enumeration of answer sets. However, a variety of probabilistic applications in reasoning or logic programming require counting answer sets. While counting can be done by enumeration, simple enumeration becomes immediately infeasible if the number of solutions is high. On the other hand, approaches to exact counting are of high worst-case complexity; in fact, already in propositional model counting, exact counting is often impractical. In this work, we present a scalable approach to approximate counting for ASP. Our approach is based on systematically adding parity (XOR) constraints to ASP programs, which divide the search space. We prove that adding random XOR constraints partitions the answer sets of an ASP program. In practice, we use a Gaussian elimination based approach by lifting ideas from SAT to ASP and integrate it into a state-of-the-art ASP solver, which we call ApproxASP. Finally, our experimental evaluation shows the scalability of our approach over existing ASP systems.
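The XOR-partitioning idea behind this style of approximate counting can be sketched in miniature. The toy below enumerates Boolean models explicitly (nothing like the ApproxASP implementation, which works on ASP programs via Gaussian elimination inside a solver; the predicate and parameters are made up): add m random parity constraints to carve out one cell of the partition, count the surviving models, and scale by 2^m.

```python
import random
from itertools import product

def solutions(n, predicate):
    """Enumerate all assignments over n Boolean variables satisfying predicate."""
    return [bits for bits in product([0, 1], repeat=n) if predicate(bits)]

def random_xor(n, rng):
    """Draw a random parity constraint: a variable subset and a target parity."""
    subset = [i for i in range(n) if rng.random() < 0.5]
    parity = rng.randrange(2)
    return subset, parity

def satisfies_xor(bits, xor):
    subset, parity = xor
    return sum(bits[i] for i in subset) % 2 == parity

def approx_count(n, predicate, m, rng):
    """Estimate the model count as 2^m times the size of one random XOR cell.

    Each random XOR constraint cuts the solution set roughly in half, so the
    surviving cell holds about a 2^-m fraction of all models."""
    xors = [random_xor(n, rng) for _ in range(m)]
    cell = [s for s in solutions(n, predicate)
            if all(satisfies_xor(s, x) for x in xors)]
    return (2 ** m) * len(cell)
```

Note that for a fixed variable subset, the two parity values split the solution set into disjoint cells whose sizes sum to the exact count, which is the partitioning property the paper proves for answer sets.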

09:30-10:30 Session 46: Invited Talk (KR)
Location: Taub 2
Neuro-Symbolic Adventures on Commonsense Knowledge and Reasoning

ABSTRACT. Neural language models, as they grow in scale, continue to surprise us with utterly nonsensical and counterintuitive errors despite their otherwise remarkable performances on numerous leaderboards. In this talk, I will argue that it is time to challenge the currently dominant paradigm of task-specific supervision built on top of large-scale self-supervised neural networks. First, I will highlight the importance of unsupervised, inference-time algorithms that can make significantly better lemonades out of off-the-shelf neural language models via flexible differentiable reasoning and discrete inference with predicate logic. Next, I will highlight the importance of melding explicit and declarative knowledge encoded in symbolic knowledge graphs with implicit and observed knowledge encoded in neural language models, with newest updates on ATOMIC 10x and distilled COMET, demonstrating a machine-authored KB that wins, for the first time, over a human-authored KB in all criteria: scale, accuracy, and diversity.

10:30-11:00 Coffee Break
11:10-12:10 Session 48: Keynote (FLoC)
Information Structures for Privacy and Fairness

ABSTRACT. The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy and fairness. In this talk, I will discuss some frameworks to understand and mitigate the issues, focusing on iterative methods coming from information theory and statistics. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool to recover utility is the Iterative Bayesian Update (IBU), an instance of the famous Expectation-Maximization method from Statistics. I will show that the IBU, combined with the metric version of DP, outperforms the state of the art, which is based on algebraic methods combined with the Randomized Response mechanism, widely adopted by the Big Tech industry (Google, Apple, Amazon, ...). Furthermore, I will discuss a surprising duality between the IBU and one of the methods used to enhance metric DP, that is the Blahut-Arimoto algorithm from Rate-Distortion Theory. Finally, I will discuss the issue of biased decisions in machine learning, and will show that the IBU can be applied also in this domain to ensure a fairer treatment of disadvantaged groups.
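The IBU iteration mentioned above is compact enough to sketch. The toy below (a minimal pure-Python illustration, not the speaker's implementation; the binary randomized-response channel and distributions are made up) repeatedly computes the posterior over inputs under the current estimate and redistributes the observed output mass accordingly, which is exactly the EM update.

```python
def ibu(observed, channel, iters=200):
    """Iterative Bayesian Update (an instance of Expectation-Maximization).

    observed: empirical distribution over noisy outputs (length m).
    channel:  row-stochastic noise matrix, channel[x][y] = P(output y | input x).
    Returns an estimate of the original input distribution (length n)."""
    n, m = len(channel), len(observed)
    p = [1.0 / n] * n                # start from a uniform prior
    for _ in range(iters):
        new_p = [0.0] * n
        for y in range(m):
            # normalizing constant: P(y) under the current estimate
            norm = sum(p[x] * channel[x][y] for x in range(n))
            for x in range(n):
                # add observed mass at y, split by the posterior P(x | y)
                new_p[x] += observed[y] * p[x] * channel[x][y] / norm
        p = new_p
    return p
```

Each update keeps the estimate a probability distribution, and for an invertible channel the fixed point recovers the maximum-likelihood input distribution.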


Brief Bio: Catuscia Palamidessi is Director of Research at INRIA Saclay (since 2002), where she leads the team COMETE. She was Full Professor at the University of Genova, Italy (1994-1997) and Penn State University, USA (1998-2002). Palamidessi's research interests include Privacy, Machine Learning, Fairness, Secure Information Flow, Formal Methods, and Concurrency. In 2019 she obtained an ERC advanced grant to conduct research on Privacy and Machine Learning. She has been PC chair of various conferences including LICS and ICALP, and PC member of more than 120 international conferences. She is on the editorial boards of several journals, including IEEE Transactions on Dependable and Secure Computing, Mathematical Structures in Computer Science, Theoretics, the Journal of Logical and Algebraic Methods in Programming, and Acta Informatica. She serves on the Executive Committees of ACM SIGLOG, CONCUR, and CSL.


12:10-12:40 Session 49A: Epistemic Logic (KR)
Location: Taub 2
The Topology of Surprise

ABSTRACT. Knowability has a topological interpretation, where a proposition is knowable iff it is true in a neighbourhood of the evaluation point. In this paper we show how the Cantor derivative and the topological "perfect core", i.e., the largest dense-in-itself subspace of a topological space, provide a more refined topological model of learnability. In particular, we use these notions to elucidate the surprise exam paradox and similar epistemic puzzles. We then provide a complete deductive calculus and prove PSPACE-completeness of the resulting modal logic.

12:10-12:40 Session 49B: Knowledge Graphs (KR)
Location: Taub 3
Open Relation Extraction With Non-existent and Multi-span Relationships
PRESENTER: Huifan Yang

ABSTRACT. Open relation extraction (ORE) aims to assign semantic relationships among arguments, essential to the automatic construction of knowledge graphs (KG). Previous ORE methods and some benchmark datasets consider a relation between two arguments as definitely existing and in a simple single-span form, neglecting possible non-existent relationships and flexible, expressive multi-span relations. However, detecting non-existent relations is necessary for a pipelined information extraction system (which first performs named entity recognition and then relation extraction), and multi-span relationships contribute to the diversity of connections in KGs. To fulfill the practical demands of ORE, we design a novel Query-based Multi-head Open Relation Extractor (QuORE) to extract single/multi-span relations and detect non-existent relationships effectively. Moreover, we reconstruct some public datasets covering English and Chinese to derive augmented and multi-span relation tuples. Extensive experiment results show that our method outperforms the state-of-the-art ORE model LOREM in the extraction of existing single/multi-span relations and in overall performance on four datasets with non-existent relationships. Our code and data are publicly available.

12:10-12:40 Session 49C: Application of SAT to Quantum circuits (SAT)
Towards a SAT Encoding for Quantum Circuits
PRESENTER: Robert Wille

ABSTRACT. Satisfiability Testing (SAT) techniques are well-established in classical computing where they are used to solve a broad variety of problems, e.g., in the design of classical circuits and systems. Analogous to the classical realm, quantum algorithms are usually modelled as circuits and similar design tasks need to be tackled. Thus, it is natural to pose the question whether these design tasks in the quantum realm can also be approached using SAT techniques. To the best of our knowledge, no SAT formulation for arbitrary quantum circuits exists and it is unknown whether such an approach is feasible at all. In this work, we define a propositional SAT encoding that, in principle, can be applied to arbitrary quantum circuits. However, we show that due to the inherent complexity of representing quantum states, constructing such an encoding is not feasible in general. Therefore, we establish general criteria for determining the feasibility of the proposed encoding and identify classes of quantum circuits fulfilling these criteria. We explicitly demonstrate how the proposed encoding can be applied to the class of Clifford circuits as a representative. Finally, we empirically demonstrate the applicability and efficiency of the proposed encoding for Clifford circuits. With these results, we lay the foundation for continuing the ongoing success of SAT in classical circuit and systems design for quantum circuits.

12:30-14:00 Lunch Break

Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).

14:00-15:30 Session 50A: Tutorial (CP)
Location: Taub 7
Solving with Provably Correct Results: Beyond Satisfiability, and Towards Constraint Programming
PRESENTER: Ciaran McCreesh

ABSTRACT. Nowadays, Boolean Satisfiability (SAT) solvers aren't allowed to just assert that an instance is unsatisfiable: they're expected to also produce an independently verifiable proof that backs up this claim. This so-called proof logging has done wonders for solver reliability, and has also helped with social acceptability of computer-generated mathematical results. What if we could do the same for constraint programming, producing an auditable solving process where people whose lives or livelihoods are affected by a computer's decision would no longer have to resort to hoping that the system is free of bugs?

We will start this tutorial by explaining what a proof really is, and what it means for an algorithm to certify the correctness of its answers by using proof logging. We will give a brief overview of the (extended) resolution and DRAT proof systems used in SAT proof logging. Then we will look at what is needed to bring proof logging to a broader range of solving algorithms, starting with some subgraph-finding algorithms, and moving towards a full CP solver with multiple global constraints and even some reformulation. We will show, by example, what you need to do to use this technology in your own algorithms, and how this relates to the underlying proof methods. We'll finish by discussing how proof logging can also support advanced solving techniques such as symmetry breaking. Surprisingly, all of this can be done in a simple proof format known as "cutting planes with strengthening" that does not know anything about non-binary variables, global constraints, graphs, groups, or symmetries.


14:00-15:30 Session 50B: Applications (CP)
Location: Taub 4
Trajectory Optimization for Safe Navigation in Maritime Traffic Using Historical Data

ABSTRACT. Increasing maritime trade often results in congestion in busy ports, thereby necessitating planning methods to avoid close quarter situations among vessels for safer navigation. Rapid digitization and automation of port operations and vessel navigation provide unique opportunities for significantly improving navigation safety. Our key contributions are as follows. First, given a set of future candidate trajectories for vessels involved in a traffic hotspot zone, we develop a multiagent trajectory optimization method to choose trajectories that result in the best overall close quarter risk reduction. Our novel MILP-based trajectory optimization method is more than an order of magnitude faster than a standard MILP for this problem, and runs in near real-time. Second, although automation has improved in maritime traffic, current vessel traffic systems (in our case study of a busy Asian port) predict only a single future trajectory of a vessel, based on linear extrapolation. Therefore, using historical data, we learn a generative model that predicts multiple possible future trajectories of each vessel in a given traffic hotspot area, reflecting past movement patterns of vessels, and integrate it with our trajectory optimizer. Third, we validate both our trajectory optimization and generative model extensively using a real-world maritime traffic dataset containing 1.7 million trajectories over 1.5 years from one of the busiest ports in the world, demonstrating effective risk reduction.

A Constraint Programming Approach to Ship Refit Project Scheduling

ABSTRACT. Ship refit projects require ongoing plan management to adapt to arising work and disruptions. Planners must sequence work activities in the best order possible to complete the project in the shortest time, or within a defined period while minimizing overtime costs. Activity scheduling must consider milestones, resource availability constraints, and precedence relations. We propose a constraint programming approach for detailed ship refit planning at two levels of granularity: daily and hourly schedules. The problem was modeled using the cumulative global constraint, and the Solution-Based Phase Saving heuristic was used to speed up the search, thus achieving the industrialization goals. Based on the evaluation of seven realistic instances over three objectives, the heuristic strategy proved significantly faster at finding better solutions than a baseline search strategy. The method was integrated into Refit Optimizer, a cloud-based prototype solution that can import projects from Primavera P6 and optimize plans.

DUELMIPs: Jointly Optimizing Software Defined Network Functionality and Security
PRESENTER: Timothy Curry

ABSTRACT. Software defined networks (SDNs) define a programmable network fabric that can be reconfigured to respect global network properties. Securing against adversaries who try to exploit the network is an objective that conflicts with providing functionality. This paper proposes a two-stage mixed-integer programming framework. The first stage automates routing decisions for the flows to be carried by the network while maximizing readability and ease of use for network engineers. The second stage is meant to quickly respond to security breaches by automatically deciding on network counter-measures to block the detected adversary. Both stages are computationally challenging, and the security stage leverages large neighborhood search to quickly deliver effective response strategies. The approach is evaluated on synthetic networks of various sizes and shown to be effective for both its functional and security objectives.

14:00-15:30 Session 50C: Semantics (FSCD)
A Fibrational Tale of Operational Logical Relations

ABSTRACT. Logical relations built on top of an operational semantics are one of the most successful proof methods in programming language semantics. In recent years, more and more expressive notions of operationally-based logical relations have been designed and applied to specific families of languages. However, a unifying abstract framework for operationally-based logical relations is still missing. We show how fibrations can provide a uniform treatment of operational logical relations, using as reference example a $\lambda$-calculus with generic effects endowed with a novel, abstract operational semantics defined on a large class of categories. Moreover, this abstract perspective allows us to give a solid mathematical ground also to differential logical relations --- a recently introduced notion of higher-order distance between programs --- both pure and effectful, bringing them back to a common picture with traditional ones.

On Quantitative Algebraic Higher-Order Theories
PRESENTER: Paolo Pistone

ABSTRACT. We explore the possibility of extending Mardare et al.'s quantitative algebras to the structures which naturally emerge from Combinatory Logic and the lambda-calculus. First of all, we show that the framework is indeed applicable to those structures, and give soundness and completeness results. Then, we prove some negative results which clearly delineate to which extent categories of metric spaces can be models of such theories. We conclude by giving several examples of non-trivial higher-order quantitative algebras.

Sheaf semantics of termination-insensitive noninterference

ABSTRACT. We propose a new sheaf semantics for secure information flow over a space of abstract behaviors, based on synthetic domain theory: security classes are open/closed partitions, types are sheaves, and redaction of sensitive information corresponds to restricting a sheaf to a closed subspace. Our security-aware computational model satisfies termination-insensitive noninterference automatically, and therefore constitutes an intrinsic alternative to state of the art extrinsic/relational models of noninterference. Our semantics is the latest application of Sterling and Harper's recent re-interpretation of phase distinctions and noninterference in programming languages in terms of Artin gluing and topos-theoretic open/closed modalities. Prior applications include parametricity for ML modules, the proof of normalization for cubical type theory by Sterling and Angiuli, and the cost-aware logical framework of Niu et al. In this paper we employ the phase distinction perspective twice: first to reconstruct the syntax and semantics of secure information flow as a lattice of phase distinctions between "higher" and "lower" security, and second to verify the computational adequacy of our sheaf semantics with respect to a version of Abadi et al.'s dependency core calculus to which we have added a construct for declassifying termination channels.

14:00-15:00 Session 50D: Invited talk by Theresa Swift: Two Languages, One System: Tightly Connecting XSB Prolog and Python (ICLP)

Two Languages, One System: Tightly Connecting XSB Prolog and Python

Abstract: Python, ranked first on the May 2022 Tiobe index, is a hugely popular language, heavily used in machine learning and other applications. Prolog, ranked twenty-first on the May 2022 Tiobe index, is less popular but has important reasoning and knowledge representation capabilities, particularly since modern Prologs support constraint-based reasoning, tabling-based deduction, and probabilistic inference. Despite their differences, Prolog and Python share important commonalities. First, both Prolog and CPython (the standard Python implementation) are written in C with well-developed interfaces to other C programs. In addition, both languages are dynamically typed, with data structures that are recursively generated in just a few ways. In fact, nearly all core data structures of the two languages can be efficiently bi-translated, leading to a tight connection of the two systems. This talk presents the design, experience, and implications of such a connection using XSB Prolog version 5.0. The connection for XSB to call Python has led to XSB orchestrating commercial projects using interfaces to Elasticsearch, dense vector storage, NLP systems, Google Maps, and a 14.6 billion triple Wikidata graph. The connection for Python to call XSB allows XSB to be imported like any other Python module, so that XSB can easily be invoked from Jupyter notebooks or other graphical interfaces. On a more speculative level, the talk mentions how this work might be leveraged for research in neuro-symbolic learning, natural language processing, and cross-language type inference.

Location: Taub 9
14:00-15:30 Session 50E: Multi-Agent Systems (KR)
Location: Taub 2
Verification and Realizability in Finite-Horizon Multiagent Systems

ABSTRACT. The problems of \emph{verification} and \emph{realizability} are two central themes in the analysis of reactive systems. When multiagent systems are considered, these problems have natural analogues of existence (nonemptiness) of pure-strategy Nash equilibria and verification of pure-strategy Nash equilibria. Recently, this body of work has begun to include finite-horizon temporal goals. With finite-horizon temporal goals, there is a natural hierarchy of goal representation, ranging from deterministic finite automata (DFA), to nondeterministic finite automata (NFA), and to alternating finite automata (AFA), with a worst-case exponential gap between each successive representation. Previous works showed that the realizability problem with DFA goals was PSPACE-complete, while the realizability problem with temporal logic goals is in 2EXPTIME. In this work we study both the realizability and the verification problems with respect to various goal representations. We first show that the realizability problem for AFA goals is 2EXPTIME-complete, thus establishing a strict complexity gap between realizability with respect to DFA goals and with respect to AFA goals. We then contrast this complexity gap with the complexity of the verification problem, where we show that verification with respect to DFA goals, NFA goals, and AFA goals are all PSPACE-complete.

Towards an Enthymeme-Based Communication Framework in Multi-Agent Systems

ABSTRACT. Communication is one of the most important aspects of multi-agent systems. Among the different communication techniques applied to multi-agent systems, argumentation-based approaches have received special interest from the community, because allowing agents to exchange arguments provides a rich form of communication. In contrast to the benefits that argumentation-based techniques provide to multi-agent communication, extra weight on the communication infrastructure results from the additional information exchanged by agents, which could restrict the practical use of such techniques. In this work, we propose an argumentation framework whereby agents are able to exchange shorter messages when engaging in dialogues by omitting information that is common knowledge (e.g., information about a shared multi-agent organisation). In particular, we focus on using enthymemes, shared argumentation schemes (i.e., reasoning patterns from which arguments are instantiated), and common organisational knowledge to build an enthymeme-based communication framework. We show that our approach addresses some of Grice's maxims, in particular that agents can be brief in communication, without any loss in the content of the intended arguments.

Automatic Synthesis of Dynamic Norms for Multi-Agent Systems
PRESENTER: Giuseppe Perelli

ABSTRACT. Norms have been widely proposed to coordinate and regulate multi-agent systems (MAS) behaviour. We consider the problem of synthesising and revising the set of norms in a normative MAS to satisfy a design objective expressed in Alternating Time Temporal Logic (ATL*). ATL* is a well-established language for strategic reasoning, which allows the specification of norms that constrain the strategic behaviour of agents. We focus on dynamic norms, that is, norms corresponding to Mealy machines, that allow us to place different constraints on the agents' behaviour depending on the state of the norm and the state of the underlying MAS. We show that synthesising dynamic norms is (k + 1)-EXPTIME, where k is the alternation depth of quantifiers in the ATL* specification. Note that for typical cases of interest, k is either 1 or 2. We also study the problem of removing existing norms to satisfy a new objective, which we show to be 2EXPTIME-Complete.

14:00-14:30 Session 50F: Doctoral Consortium (KR)
Location: Taub 3
Matthias König's Thesis: Graph Parameters for Abstract Argumentation
Impact of Logic Paradigms on Abstract Argumentation
Computational Aspects of Structured Argumentation
Do Humans Find Postulates of Belief Change Plausible?
A Conditional Perspective for Reasoning, Revision and Relevance
Deontic Logic for Epistemic Actions
14:00-15:30 Session 50G: Lambda Calculus, Quantum Programming, Games in Category Theory (LICS)

"Lambda Calculus, Quantum Programming, Games in Category Theory": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
Curry and Howard Meet Borel
PRESENTER: Paolo Pistone

ABSTRACT. We show that an intuitionistic variation on counting propositional logic corresponds, in the sense of Curry and Howard, to an expressive type system for the event probabilistic lambda-calculus, itself a vehicle calculus into which both call-by-name and call-by-value evaluation of discrete randomized functional programs can be captured. Remarkably, proofs (respectively, types) do not only guarantee that validity (respectively, termination) holds, but also reveal the underlying probability. We finally show that by endowing the introduced type system with an intersection operator, one gets a system precisely capturing the probabilistic behavior of lambda-terms.

Resource approximation for the lambda-mu-calculus

ABSTRACT. The lambda-mu-calculus plays a central role in the theory of programming languages as it extends the Curry-Howard correspondence to classical logic. A major drawback is that it does not satisfy Böhm's Theorem and it lacks the corresponding notion of approximation. On the contrary, we show that Ehrhard and Regnier's Taylor expansion can be easily adapted, thus providing a resource-conscious approximation theory. This produces a sensible lambda-mu-theory with which we prove some advanced properties of the lambda-mu-calculus, such as Stability and the Perpendicular Lines Property, from which the impossibility of parallel computations follows.

Graded Monads and Behavioural Equivalence Games

ABSTRACT. The framework of graded semantics uses graded monads to capture behavioural equivalences of varying granularity, for example as found on the linear-time/branching-time spectrum, over general system types. We describe a generic Spoiler-Duplicator game for graded semantics that is extracted from the given graded monad, and may be seen as playing out an equational proof; instances include standard pebble games for simulation and bisimulation as well as games for trace-like equivalences and coalgebraic behavioural equivalence. Considerations on an infinite variant of such games lead to a novel notion of infinite-depth graded semantics. Under reasonable restrictions, the infinite-depth graded semantics associated to a given graded equivalence can be characterized in terms of a determinization construction for coalgebras under the equivalence at hand.

The Pebble-Relation Comonad in Finite Model Theory
PRESENTER: Yoàv Montacute

ABSTRACT. The pebbling comonad, introduced by Abramsky, Dawar and Wang, provides a categorical interpretation for the k-pebble games from finite model theory. The coKleisli category of the pebbling comonad specifies equivalences under different fragments and extensions of infinitary k-variable logic. Moreover, the coalgebras over this pebbling comonad characterise treewidth and correspond to tree decompositions. In this paper we introduce the pebble-relation comonad, which characterises pathwidth and whose coalgebras correspond to path decompositions. We further show that the existence of a coKleisli morphism in this comonad is equivalent to truth preservation in the restricted conjunction fragment of k-variable infinitary logic. We do this using Dalmau's pebble-relation game and an equivalent all-in-one pebble game. We then provide a similar treatment to the corresponding coKleisli isomorphisms via a bijective version of the all-in-one pebble game with a hidden pebble placement. Finally, we show as a consequence a new Lovász-type theorem relating pathwidth to the restricted conjunction fragment of k-variable infinitary logic with counting quantifiers.

Quantum Expectation Transformers for Cost Analysis

ABSTRACT. We introduce a new kind of expectation transformer for a mixed classical-quantum programming language. Our semantic approach relies on a new notion of a cost structure, which we introduce and which can be seen as a specialisation of the Kegelspitzen of Keimel and Plotkin. We show that our weakest precondition analysis is both sound and adequate with respect to the operational semantics of the language. Using the induced expectation transformer, we provide formal analysis methods for the expected cost analysis and expected value analysis of classical-quantum programs. We illustrate the usefulness of our techniques by computing the expected cost of several well-known quantum algorithms and protocols, such as coin tossing, repeat until success, entangled state preparation, and quantum walks.

Quantum Weakest Preconditions for Reasoning about Expected Runtimes of Quantum Programs

ABSTRACT. We study expected runtimes for quantum programs. Inspired by recent work on probabilistic programs, we first define expected runtime as a generalisation of quantum weakest precondition. Then, we show that the expected runtime of a quantum program can be represented as the expectation of an observable (in physics). A method for computing the expected runtimes of quantum programs in finite-dimensional state spaces is developed. Several examples are provided as applications of this method, including computing the expected runtime of quantum Bernoulli Factory -- a quantum algorithm for generating random numbers. In particular, using our new method, an open problem of computing the expected runtime of quantum random walks introduced by Ambainis et al. (STOC 2001) is solved.

14:00-15:30 Session 50H: SAT (SAT)
SAT Preprocessors and Symmetry

ABSTRACT. Exploitation of symmetries is an indispensable approach to solve certain classes of difficult SAT instances. Numerous techniques for the use of symmetry in SAT have evolved over the past few decades. But no matter how symmetries are used precisely, they have to be detected first. We investigate how to detect more symmetry, faster. The initial idea is to reap the benefits of SAT preprocessing for symmetry detection. As it turns out, applying an off-the-shelf preprocessor before handling symmetry does not work: the preprocessor can haphazardly remove symmetry from formulas, severely impeding symmetry exploitation.

Our main contribution is a theoretical framework that captures the relationship between SAT preprocessing techniques and symmetry. Based on this, we create a symmetry-aware preprocessor that can be applied safely before handling symmetry. We then demonstrate that applying the preprocessor not only substantially decreases symmetry detection and breaking times, but also uncovers hidden symmetry not detectable in the original instances. The novelty of our approach lies in the fact that we make use of SAT techniques for symmetry detection, yielding a new application-specific approach to symmetry detection for SAT.

A Comprehensive Study of k-Portfolios of Recent SAT Solvers

ABSTRACT. Hard combinatorial problems such as propositional satisfiability are ubiquitous. The holy grail is a solution method that shows good performance on all problem instances. However, new approaches emerge regularly, some of which are complementary to existing solvers in that they only run faster on some instances but not on many others. While portfolios, i.e., sets of solvers, have been touted as useful, putting together such portfolios also needs to be efficient. In particular, it remains an open question how well portfolios can exploit the complementarity of solvers. This paper features a comprehensive analysis of portfolios of recent SAT solvers, namely those from the SAT Competitions 2020 and 2021. We determine optimal portfolios with exact and approximate approaches and study the impact of portfolio size k on performance. We also investigate how effective off-the-shelf prediction models are for instance-specific solver recommendations. One result is that the portfolios found with an approximate approach are as good as the optimal solution in practice. We also observe that marginal returns decrease very quickly with larger k, and our prediction models do not yield better performance beyond very small portfolio sizes.
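As background for the portfolio-construction task discussed above: a common approximate baseline is to build a k-portfolio greedily, at each step adding the solver that most reduces the portfolio's total "virtual best" runtime (the per-instance minimum over chosen solvers). The sketch below is a generic illustration of this idea, not the paper's exact method; all names are hypothetical.

```python
def greedy_portfolio(runtimes, k):
    """Greedily select up to k solvers.

    runtimes: dict mapping solver name -> list of per-instance runtimes.
    Returns (chosen solvers, total virtual-best runtime of the portfolio).
    """
    n = len(next(iter(runtimes.values())))
    best = [float("inf")] * n  # virtual-best runtime per instance so far
    chosen = []
    for _ in range(min(k, len(runtimes))):
        # Total portfolio runtime if solver s were added.
        def total_if(s):
            return sum(min(b, t) for b, t in zip(best, runtimes[s]))
        s = min((s for s in runtimes if s not in chosen), key=total_if)
        chosen.append(s)
        best = [min(b, t) for b, t in zip(best, runtimes[s])]
    return chosen, sum(best)
```

On complementary solvers (e.g., one fast on the first instance, another on the second), the greedy portfolio quickly approaches the virtual best, which matches the diminishing-returns effect the paper reports for larger k.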

SAT-based Leximax Optimisation Algorithms
PRESENTER: Mikolas Janota

ABSTRACT. In several real-world problems, it is often the case that the goal is to optimise several objective functions. However, usually there is not a single optimal objective vector. Instead, there are many optimal objective vectors known as Pareto-optima. Finding all Pareto-optima is computationally expensive and the number of Pareto-optima can be too large for a user to analyse. A compromise can be made by defining an optimisation criterion that integrates all objective functions.

In this paper we propose several SAT-based algorithms to solve multi-objective optimisation problems using the leximax criterion. The leximax criterion is used to obtain a Pareto-optimal solution with a small trade-off between the objective functions, which is suitable in problems where there is an absence of priorities between the objective functions. Experimental results on the Multi-Objective Package Upgradeability Optimisation problem show that the SAT-based algorithms are able to outperform the Integer Linear Programming (ILP) approach when using non-commercial ILP solvers. Additionally, experimental results on selected instances from the MaxSAT evaluation adapted to the multi-objective domain show that our approach outperforms the ILP approach using commercial solvers.
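As background for the leximax criterion used above: two objective vectors are compared by sorting their components in non-increasing order and comparing the sorted vectors lexicographically, so the worst objective is minimised first, then the second worst, and so on. The sketch below illustrates only this comparison, not the authors' SAT-based algorithms; all names are hypothetical.

```python
def leximax_key(objectives):
    # Sorting in non-increasing order and comparing lexicographically
    # implements leximax: minimise the worst objective first, then the
    # second worst, and so on.
    return sorted(objectives, reverse=True)

def leximax_better(a, b):
    """True if objective vector a is strictly preferred to b under leximax."""
    return leximax_key(a) < leximax_key(b)
```

For example, (3, 3, 3) is preferred to (4, 0, 0) under leximax, since its worst objective (3) is smaller, even though its sum is larger; this is the "small trade-off between objectives" behaviour described above.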

14:30-15:00 Session 51: Doctoral Consortium (KR)
Location: Taub 3
Relevance in Reasoning with and Revision of Conditional Beliefs
Doctoral Consortium Application
On Merging of Open-Domain Ontologies
Structure-based ontology extensions
Filippo De Bortoli - Reasoning Efficiently with Description Logics that Count
Data Quality in Ontology-Based Data Access
15:00-15:30 Session 52A: Logic Programming & Machine Learning (ICLP)
Location: Taub 9
Exploiting Parameters Learning for Hyper-parameters Optimization in Deep Neural Networks
PRESENTER: Fabrizio Riguzzi

ABSTRACT. In recent years, the Hyper-parameter Optimization (HPO) research field has gained more and more attention. Many works have focused on finding the best combination of the network's hyper-parameters (HPs) or architecture. The state-of-the-art algorithm for HPO is Bayesian Optimization (BO). This is because it keeps track of past results obtained during the optimization and uses this experience to build a probabilistic model mapping HPs to a probability density of the objective function. BO builds a surrogate probabilistic model of the objective function, finds the HP values that perform best on the surrogate model, and updates it with new results. In this work, we developed a system called Symbolic DNN-Tuner, which logically evaluates the results obtained from the training and validation phases and, by applying symbolic tuning rules, fixes the network architecture and its HPs, thereby improving performance. Symbolic DNN-Tuner improves BO applied to Deep Neural Networks (DNNs) by adding an analysis of the results of the network on the training and validation sets. This analysis is performed by exploiting rule-based programming, in particular Probabilistic Logic Programming (PLP).

Graph-based Interpretation of Normal Logic Programs
PRESENTER: Gopal Gupta

ABSTRACT. In this paper we present a dependency graph-based method for computing the various semantics of normal logic programs. Our method employs conjunction nodes to unambiguously represent the dependency graph of a normal logic program. The dependency graph can be transformed in a semantics-preserving manner and retranslated into an equivalent normal logic program. This transformed program can be augmented with a few rules written in answer set programming (ASP), and the CLINGO system can then be used to compute its answer sets. Depending on how these additional rules are coded in ASP, one can compute models of the original program under the well-founded semantics, the stable model semantics, or the co-stable model semantics. In each case, a justification for each atom in the model is also generated. We report on the implementation of our method as well as its performance evaluation.

15:00-15:35 Session 52B: Doctoral Consortium (KR)
Location: Taub 3
Decidability Limits of Finite Ontology-Mediated Query Entailment
Towards a Logical Account for Human-Aware Explanation Generation in Model Reconciliation Problems
Logics for Representation and Design of Auctions
Optimisation Methods for Complex Event Recognition
On the learnability of knowledge in language domains
Investigating Novel Representations and Deep Reinforcement Learning for General Game Playing
Data Efficient Paradigms for Personalized Assessment of Taskable AI Systems
15:30-16:00Coffee Break
16:00-17:30 Session 54A: OR (CP)
Location: Taub 7
Parallel Hybrid Best-First Search
PRESENTER: Simon de Givry

ABSTRACT. While processor frequencies have stagnated for two decades, the number of cores available in servers and clusters continues to grow, offering the opportunity for significant speed-ups in combinatorial optimization. Parallelizing exact methods remains a difficult challenge. We revisit the concept of parallel branch-and-bound in the framework of weighted constraint satisfaction problems. We show how to adapt an anytime hybrid best-first search algorithm to a master/worker protocol. The resulting parallel algorithm achieves good load-balancing without introducing new parameters to be tuned, as in the embarrassingly parallel search (EPS) approach. It also has a small overhead due to its lightweight communication messages. We performed an experimental evaluation on several benchmarks, comparing our parallel algorithm to its sequential version. We observed linear speed-ups in some cases. Our approach compared favourably to the EPS approach and also to a state-of-the-art parallel exact integer programming solver.

Sequence Variables for Routing Problems

ABSTRACT. Constraint Programming (CP) is one of the most flexible approaches for modeling and solving vehicle routing problems (VRPs). This paper proposes the sequence variable domain, which is inspired by the insertion graph and the subset-bound domain for set variables. This domain representation, which targets VRP applications, allows for efficient insertion-based search on a partial tour and the implementation of simple yet efficient filtering algorithms for constraints that enforce time windows on the visits and capacities on the vehicles. Experimental results demonstrate the efficiency and flexibility of this CP domain for solving some hard VRPs, including the Dial-A-Ride problem, the Patient Transportation problem, and the asymmetric TSP with time windows.

From Cliques to Colorings and Back Again

ABSTRACT. We present an exact algorithm for graph coloring and maximum clique problems based on SAT technology. It relies on four sub-algorithms that alternatingly compute cliques of larger size and colorings with fewer colors. We show how these techniques can mutually help each other: larger cliques facilitate finding smaller colorings, which in turn can boost finding larger cliques. We evaluate our approach on the DIMACS graph coloring suite, and first show that our algorithm can improve the state-of-the-art MaxSAT-based solver IncMaxCLQ for finding maximum cliques. For the graph coloring problem, we close two open instances, decrease two upper bounds, and increase one lower bound.

16:00-17:30 Session 54B: Tutorial + ML (CP)

16:00-17:00: Tutorial

17:00-17:30: Machine Learning

Location: Taub 4
Building a Constraint Programming Solver Guided by Machine Learning: Opportunities and Challenges

ABSTRACT. In recent years, deep reinforcement learning (DRL) has shown promise for designing good heuristics dedicated to solving NP-hard combinatorial optimization problems. Many recent papers propose to use a learned heuristic directly to solve the problem in an end-to-end fashion. However, such an approach has two shortcomings: (1) it only provides an approximate solution, with no systematic way to improve it or to prove optimality; (2) it is often limited to standard academic problems (e.g., the travelling salesman problem) and has difficulty handling the complex combinatorial constraints present in realistic problems.

Constraint programming (CP), as is well known, is a generic tool for solving combinatorial optimization problems. Given enough time, its complete search procedure will always find the optimal solution. However, a critical design choice that makes CP non-trivial to use in practice is the branching strategy, which controls how the search space is explored. In this tutorial, we will show how this strategy can be delegated to a DRL agent.

First, we will present an initial proof of concept, showing that a CP solver can successfully leverage a learned heuristic. However, the implementation of this prototype suffers from severe inefficiencies in execution time. The main reason is that existing CP solvers were not designed to handle learning components in their core engines. This motivates building a new CP solver with direct support for learned heuristics.

We will then present SeaPearl, a new open-source CP solver in Julia that has been designed for this purpose, together with the design choices we made. We will also highlight the challenges we are currently encountering, along with preliminary results.

Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints
PRESENTER: Daphné Lafleur

ABSTRACT. While Machine Learning (ML) techniques are good at generating data similar to a dataset, they lack the capacity to enforce constraints. On the other hand, any solution to a Constraint Programming (CP) model satisfies its constraints but has no obligation to imitate a dataset. Yet, we sometimes need both. In this paper we borrow RL-Tuner, a Reinforcement Learning (RL) algorithm introduced to tune neural networks, as our enabling architecture to exploit the respective strengths of ML and CP. RL-Tuner maximizes the sum of a pretrained network's learned probabilities and of manually-tuned penalties for each violated constraint. We replace the latter with outputs of a CP model representing the marginal probabilities of each note and the number of constraint violations. As was the case for the original RL-Tuner, we apply our algorithm to music generation since it is a highly-constrained domain for which CP is especially suited. We show that combining ML and CP, as opposed to using them individually, allows the agent to reflect the pretrained network while taking into account constraints, leading to melodic lines that respect both the corpus' style and the music theory constraints.

16:00-17:30 Session 54C: Unification and Matching (FSCD)
Combined Hierarchical Matching: The Regular Case

ABSTRACT. Matching algorithms are often central sub-routines in many areas of automated reasoning. They are used in areas such as functional programming, rule-based programming, automated theorem proving, and the symbolic analysis of security protocols. Matching is related to unification but provides a somewhat simplified problem. Thus, in some cases, we can obtain a matching algorithm even if the unification problem is undecidable. In this paper we consider a hierarchical approach to constructing matching algorithms. The hierarchical method has been successful for developing unification algorithms for theories defined over a constructor sub-theory. We show how the approach can be extended to matching problems which allows for the development, in a modular way, of hierarchical matching algorithms. Here we focus on regular theories, where both sides of each equational axiom have the same set of variables. We show that the combination of two hierarchical matching algorithms leads to a hierarchical matching algorithm for the union of regular theories sharing only a common constructor sub-theory.
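To fix intuitions, plain syntactic (first-order) matching, the baseline that the paper's hierarchical and equational machinery generalizes, can be sketched in a few lines. The term encoding here (strings for variables, tuples for function applications) is an assumption made for illustration, not the paper's formalism.

```python
def match(pattern, term, subst=None):
    """Naive syntactic matching: find a substitution for the variables of
    `pattern` (plain strings) making it equal to the ground `term` (tuples
    of the form (symbol, arg1, ...)). Returns the substitution or None."""
    if subst is None:
        subst = {}
    if isinstance(pattern, str):          # a variable
        if pattern in subst:              # already bound: must agree
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    # function application: symbols and arities must coincide
    if not isinstance(term, tuple) or len(pattern) != len(term) \
            or pattern[0] != term[0]:
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst
```

For example, matching f(X, X) against f(a, a) succeeds with X bound to a, while matching it against f(a, b) fails on the second occurrence of X.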

Nominal Anti-Unification with Atom-Variables

ABSTRACT. Anti-unification is the task of generalizing a set of expressions in the most specific way. It was extended to the nominal framework by Baumgartner, Kutsia, Levy and Villaret, who defined an algorithm solving the nominal anti-unification problem, which runs in polynomial time. Unfortunately, the type of the set of solutions in nominal anti-unification (with explicit atoms) is infinitary, since every solution set can be strictly refined. In this paper, we present a more general approach to nominal anti-unification that uses atom-variables instead of explicit atoms, and two variants of freshness constraints: NL-A-constraints (with atom-variables), and EQR-constraints based on equivalence relations on atom-variables. The idea of atom-variables is that different atom-variables may be instantiated with identical or different atoms. Though simple, this added freedom increases the framework's application potential: we provide an algorithm that is finitary for NL-A-freshness constraints, and for EQR-freshness constraints it computes a unique least general generalization. There is a price to pay in the general case: checking freshness constraints and other related logical questions will require exponential time. The setting of Baumgartner et al. is improved by the atom-only case, which runs in polynomial time and computes a unique least general generalization.

A Certified Algorithm for AC-Unification

ABSTRACT. Implementing unification modulo Associativity and Commutativity (AC) axioms is crucial in rewrite-based programming and theorem provers. We modify Stickel's seminal AC-unification algorithm to avoid mutual recursion and formalise it in the PVS proof assistant. More precisely, we prove the adjusted algorithm's termination, soundness, and completeness. To do this, we adapted Fages' termination proof, providing a unique elaborated measure that guarantees termination of the modified AC-unification algorithm. This development (to the best of our knowledge) provides the first fully formalised AC-unification algorithm.

16:00-17:30 Session 54D: Applications & Automatic Reasoning (ICLP)
Location: Taub 9
Making ProB compatible with SICStus and SWI-Prolog (Best Application Paper Award)
PRESENTER: David Geleßus

ABSTRACT. Even though the core of the Prolog programming language has been standardized by ISO since 1995, it remains difficult to write complex Prolog programs that can run unmodified on multiple Prolog implementations. Indeed, implementations sometimes deviate from the ISO standard and the standard itself fails to cover many features that are essential in practice.

Most Prolog applications thus have to rely on non-standard features, often making them dependent on one particular Prolog implementation and incompatible with others. We examine one such Prolog application: ProB, which has been developed for over 20 years in SICStus Prolog.

The article describes how we managed to refactor the codebase of ProB to also support SWI-Prolog, with the goal of verifying ProB’s results using two independent toolchains. This required a multitude of adjustments, ranging from extending the SICStus emulation in SWI-Prolog to better modularizing the monolithic ProB codebase. We also describe notable compatibility issues and other differences that we encountered in the process, and how we were able to deal with them with few major code changes.

Building Information Modeling using Constraint Logic Programming
PRESENTER: Joaquin Arias

ABSTRACT. Building Information Modeling (BIM) produces three-dimensional object-oriented models of buildings combining the geometrical information with a wide range of properties about materials, products, safety, and so on. BIM is slowly but inevitably revolutionizing the architecture, engineering, and construction (AEC) industry. Buildings need to be compliant with regulations about stability, safety, and environmental impact. Manual compliance checking is tedious and error-prone, and amending flaws discovered only at construction time causes huge additional costs and delays. Several tools can check BIM models for conformance with rules/guidelines. For example, Singapore’s CORENET e-Submission System checks fire safety. But since the current BIM exchange format only contains basic information about building objects, a separate, ad-hoc model pre-processing is required to determine, e.g., evacuation routes. Moreover, these tools make it difficult, if not impossible, to adapt existing built-in rules and/or add new ones (to cater for building regulations, which can vary not only among countries but also among parts of the same city). We propose the use of logic-based executable formalisms (CLP and Constraint ASP) to couple BIM models with advanced knowledge representation and reasoning capabilities. Previous experience shows that such formalisms can be used to uniformly capture and reason with knowledge (including ambiguity) in a large variety of domains. Additionally, incorporating checking within design tools makes it possible to ensure that models are rule-compliant at every step. This also prevents erroneous designs from having to be (partially) redone, which is also costly and burdensome. To validate our proposal, we implemented a preliminary reasoner under CLP(Q/R) and ASP with constraints and evaluated it with several BIM models.

A Gaze into the Internal Logic of Graph Neural Networks, with Logic

ABSTRACT. Graph Neural Networks share with Logic Programming several key relational inference mechanisms. The datasets on which they are trained and evaluated can be seen as database facts containing ground terms. This makes it possible to model their inference mechanisms with equivalent logic programs, to better understand how they propagate information between the entities involved in the machine learning process, and also to infer limits on what can be learned from a given dataset and how well that might generalize to unseen test data.

This leads us to the key idea of this paper: modeling, with the help of a logic program, the information flows involved in learning to infer properties of new nodes from the link structure of a graph and the information content of its nodes, given their known connections to nodes with possibly similar properties. The problem is known as {\em graph node property prediction}, and our approach consists in emulating, with the help of a Prolog program, the key information propagation steps of a Graph Neural Network's training and inference stages.

We test our approach on the {\em ogbn-arxiv} node property inference benchmark. To infer class labels for nodes representing papers in a citation network, we distill the dependency trees of the text associated to each node into directed acyclic graphs that we encode as ground Prolog terms. Together with the set of their references to other papers, they become facts in a database on which we reason with the help of a Prolog program that mimics the information propagation in graph neural networks predicting node properties. In the process, we invent ground term similarity relations that help infer labels in the test set by propagating node properties from similar nodes in the training set, and we evaluate their effectiveness in comparison with that of the graph's link structure. Finally, we implement explanation generators that unveil performance upper bounds inherent to the dataset.
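The propagation of properties from connected or similar nodes described above can be caricatured (in Python rather than Prolog, with a made-up interface) as iterated majority voting over neighbours:

```python
from collections import Counter

def propagate_labels(neighbors, labels, rounds=2):
    """Toy neighbour-majority propagation mimicking the information flow of
    GNN-style node property prediction: each unlabeled node adopts the
    majority label among its already-labeled neighbours."""
    labels = dict(labels)  # known labels, e.g. from the training set
    for _ in range(rounds):
        updates = {}
        for node, nbrs in neighbors.items():
            if node in labels:
                continue
            votes = Counter(labels[n] for n in nbrs if n in labels)
            if votes:
                updates[node] = votes.most_common(1)[0][0]
        labels.update(updates)
    return labels
```

This deliberately omits the term-similarity relations and explanation generators the paper develops; it only shows the link-structure side of the propagation.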

As a practical outcome, we obtain a logic program that, when seen as a machine learning algorithm, performs close to the state of the art on the node property prediction benchmark.

Abductive Reasoning in Intuitionistic Propositional Logic via Theorem Synthesis

ABSTRACT. With the help of a compact Prolog-based theorem prover for Intuitionistic Propositional Logic, we synthesize minimal assumptions under which a given formula becomes a theorem.

After applying our synthesis algorithm to cover basic abductive reasoning mechanisms, we synthesize conjunctions of literals that mimic rows of truth tables in classical or intermediate logics, and we abduce conditional hypotheses that turn the theorems of classical or intermediate logics into theorems of intuitionistic logic. Going one step further, we generalize our abductive reasoning mechanism to synthesize more expressive sequent premises using a minimal set of canonical formulas, to which arbitrary formulas in the calculus can be reduced while preserving their provability.
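The truth-table flavour of the synthesized hypotheses can be illustrated in the purely classical setting. This sketch (in Python with a hypothetical formula-as-function encoding, rather than the paper's prover-side Prolog) collects the rows, as conjunctions of literals, under which a formula evaluates to true:

```python
from itertools import product

def abduce_rows(formula, varnames):
    """Return the truth-table rows (conjunctions of literals, '~' marking
    negation) under which the classical `formula` evaluates to True."""
    rows = []
    for values in product([False, True], repeat=len(varnames)):
        env = dict(zip(varnames, values))
        if formula(env):
            rows.append([v if env[v] else '~' + v for v in varnames])
    return rows
```

Each returned row is a candidate abduced assumption: assuming its literals makes the formula hold classically.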

Organized as a self-contained literate Prolog program, the paper supports interactive exploration of its content and ensures full replicability of our results.

16:00-17:30 Session 54E: Temporal Reasoning (KR)
Location: Taub 2
On the Expressive Power of Intermediate and Conditional Effects in Temporal Planning
PRESENTER: Nicola Gigante

ABSTRACT. Automated planning is the task of finding a sequence of actions that reaches a desired goal, given a description of the actions' applicability and their effects on the world. In temporal planning, actions have a duration and can overlap in time. In modern temporal planning formalisms, two features have been introduced that are very useful from a modeling perspective but are not yet thoroughly understood: intermediate conditions and effects (ICE) and conditional effects. Their expressive power is especially unclear when time is dense and no minimum separation is required between mutex events. This paper shows that neither ICE nor conditional effects add expressive power with respect to common temporal planning formalisms. In particular, we show how they can be compiled away using a polynomial-size encoding that makes no assumptions on the time domain. This encoding advances our understanding of these features and allows their use with simple temporal planners that lack native support for them. Moreover, it provides a constructive proof that temporal planning with ICE and conditional effects remains PSPACE-complete.

Unique Characterisability and Learnability of Temporal Instance Queries
PRESENTER: Yury Savateev

ABSTRACT. We aim to determine which temporal instance queries can be uniquely characterised by a (polynomial-size) set of positive and negative temporal data examples. We start by considering queries formulated in fragments of propositional linear temporal logic LTL that correspond to conjunctive queries (CQs) or extensions thereof induced by the until operator. Not all of these queries admit polynomial characterisations, but by imposing a further restriction to path-shaped queries we identify natural classes that do. We then investigate how far the obtained characterisations can be lifted to temporal knowledge graphs queried by 2D languages combining LTL with concepts in description logics EL or ELI (i.e., tree-shaped CQs). While temporal operators in the scope of description logic constructors can destroy polynomial characterisability, we obtain general transfer results for the case when description logic constructors are within the scope of temporal operators. We finally apply our characterisations to establish polynomial learnability of temporal instance queries using membership queries in the active learning framework.

A Gödel calculus for Linear Temporal Logic

ABSTRACT. We consider GTL, a variant of linear temporal logic based on Gödel-Dummett propositional logic. In recent work, we have shown this logic to enjoy natural semantics both as a fuzzy logic and as a superintuitionistic logic. Using semantical methods, the logic was shown to be PSPACE-complete. In this paper we provide a deductive calculus for GTL, and show this calculus to be sound and complete for the above-mentioned semantics.

16:00-17:30 Session 54F: Description Logics (KR)
Location: Taub 3
Counting queries over ELHI⊥ ontologies
PRESENTER: Quentin Manière

ABSTRACT. While ontology-mediated query answering most often adopts (unions of) conjunctive queries as the query language, some recent works have explored the use of counting queries coupled with DL-Lite ontologies. The aim of the present paper is to extend the study of counting queries to Horn description logics outside the DL-Lite family. Through a combination of novel techniques, adaptations of existing constructions, and new connections to closed predicates, we achieve a complete picture of the data and combined complexity of answering counting conjunctive queries (CCQs) and cardinality queries (a restricted class of CCQs) in ELHI⊥ and its various sublogics. Notably, we show that CCQ answering is 2EXP-complete in combined complexity for ELHI⊥ and every sublogic that extends EL or DL-Lite-pos-H. Our study not only provides the first results for counting queries beyond DL-Lite, but it also closes some open questions about the combined complexity of CCQ answering in DL-Lite.

Ontology-Mediated Querying on Databases of Bounded Cliquewidth
PRESENTER: Lukas Schulze

ABSTRACT. We study the evaluation of ontology-mediated queries (OMQs) on databases of bounded cliquewidth from the viewpoint of parameterized complexity theory, using as the parameter the size of the OMQ (plus the cliquewidth, in upper complexity bounds, for increased generality). As the ontology language, we use the description logics ALC and ALCI and the guarded two-variable fragment GF2 of first-order logic. Queries are atomic queries (AQs), conjunctive queries (CQs), and unions of CQs. All resulting OMQ problems are fixed-parameter linear (FPL) and we provide a careful analysis of the dependence of the running time on the parameter, exhibiting several interesting effects. For instance, GF2 with AQs requires double exponential running time unless the exponential time hypothesis (ETH) fails, while OMQ evaluation on unrestricted databases is only ExpTime-complete in this setting. For ALCI, in contrast, single exponential running time suffices. Interestingly, this is due to the lower succinctness of ALCI rather than to its lower expressive power.

Finite Entailment of UCRPQs over ALC Ontologies
PRESENTER: Albert Gutowski

ABSTRACT. We investigate the problem of finite entailment of ontology-mediated queries. We consider the expressive query language, unions of conjunctive regular path queries (UCRPQs), extending the well-known class of union of conjunctive queries, with regular expressions over roles. We look at ontologies formulated using the description logic ALC, and show a tight 2ExpTime upper bound for entailment of UCRPQs. At the core of our decision procedure, there is a novel automata-based technique introducing a stratification of interpretations induced by the deterministic finite automaton underlying the input UCRPQ.

16:00-17:30 Session 54G: Type and Category Theory (LICS)

"Type and Category Theory": 6 papers (12 min presentation + 2-3 min Q&A)

Location: Taub 1
Semantics for two-dimensional type theory
PRESENTER: Benedikt Ahrens

ABSTRACT. In this work, we propose a general notion of model for two-dimensional type theory, in the form of comprehension bicategories. Examples of comprehension bicategories are plentiful; they include interpretations of directed type theory previously studied in the literature. From comprehension bicategories, we extract a core syntax, that is, judgement forms and structural inference rules, for a two-dimensional type theory. We prove soundness of the rules by giving an interpretation in any comprehension bicategory. The semantic aspects of our work are fully checked in the Coq proof assistant, based on the UniMath library. This work is the first step towards a theory of syntax and semantics for higher-dimensional directed type theory.

Normalization for Multimodal Type Theory

ABSTRACT. We prove normalization for MTT, a general multimodal dependent type theory capable of expressing modal type theories for guarded recursion, internalized parametricity, and various other prototypical modal situations. We prove that deciding type checking and conversion in MTT can be reduced to deciding the equality of modalities in the underlying modal situation, immediately yielding a type checking algorithm for all instantiations of MTT in the literature. This proof follows from a generalization of synthetic Tait computability—an abstract approach to gluing proofs—to account for modalities. This extension is based on MTT itself, so that this proof also constitutes a significant case study of MTT.

Zigzag normalisation for associative n-categories
PRESENTER: Lukas Heidemann

ABSTRACT. The theory of associative n-categories has recently been proposed as a strictly associative and unital approach to higher category theory. As a foundation for a proof assistant, this is attractive, since it could allow simple formal proofs of complex high-dimensional algebraic phenomena. However, the theory relies on an implicit term normalisation procedure to recognize correct composites, with no recursive method available for computing it.

Here we describe a new approach to term normalisation in associative n-categories, based on the categorical zigzag construction. This radically simplifies the theory, and yields a recursive algorithm for normalisation, which we prove is correct. Our use of categorical lifting properties allows us to give efficient proofs of our results. Our normalisation algorithm forms a core component of a proof assistant, and we illustrate our scheme with worked examples.

Syllepsis in Homotopy Type Theory

ABSTRACT. The Eckmann-Hilton argument shows that any two monoid structures on the same set satisfying the interchange law are in fact the same operation, which is moreover commutative. When the monoids correspond to the vertical and horizontal composition of a sufficiently higher-dimensional category, the Eckmann-Hilton argument itself appears as a higher cell. This cell is often required to satisfy an additional piece of coherence, which is known as the syllepsis. We show that the syllepsis can be constructed from the elimination rule of intensional identity types in Martin-Löf type theory.

Greatest HITs: Higher inductive types in coinductive definitions via induction under clocks

ABSTRACT. We present Clocked Cubical Type Theory, the first type theory combining multi-clocked guarded recursion with the features of Cubical Type Theory. Guarded recursion is an abstract form of step-indexing, which can be used for construction of advanced programming language models. In its multi-clocked version, it can also be used for coinductive programming and reasoning, encoding productivity in types. Combining this with Higher Inductive Types (HITs) extends the approach to coinductive types that are traditionally hard to represent in type theory, such as the type of finitely branching labelled transition systems.

Among our technical contributions is a new principle of induction under clocks, providing computational contents to one of the main axioms required for encoding coinductive types. This principle is verified using a denotational semantics in a presheaf model.

A Type Theory for Strictly Unital Infinity-Categories

ABSTRACT. We use type-theoretic techniques to present an algebraic theory of oo-categories with strict units. Starting with a known type-theoretic presentation of fully weak oo-categories, in which terms denote valid operations, we extend the theory with a non-trivial definitional equality. This forces some operations to coincide strictly in any model, yielding the strict unit behaviour.

We make a detailed investigation of the meta-theoretic properties of this theory. We give a reduction relation that generates definitional equality, and prove that it is confluent and terminating, thus yielding the first decision procedure for equality in a strictly-unital setting. Moreover, we show that our definitional equality relation identifies all terms in a disc context, providing a point of comparison with a previously proposed definition of strictly unital oo-category. We also prove a conservativity result, showing that every operation of the strictly unital theory indeed arises from a valid operation in the fully weak theory. From this, we infer that strict unitality is a property of an oo-category rather than additional structure.

16:00-17:30 Session 54H: MaxSAT (SAT)
MaxSAT-Based Bi-Objective Boolean Optimization
PRESENTER: Christoph Jabs

ABSTRACT. We explore a MaxSAT-based approach to bi-objective optimization. Bi-objective optimization refers to the task of finding so-called Pareto-optimal solutions with respect to two objective functions. Bi-objective optimization problems naturally arise in various real-world settings. For example, in the context of learning interpretable representations, such as decision rules, from data, one wishes to balance between two objectives: the classification error and the size of the representation. Our approach is generally applicable to bi-objective optimization problems that allow for propositional encodings. The approach makes heavy use of incremental SAT and draws inspiration from modern MaxSAT solving approaches. In particular, we describe several variants of the approach which arise from different approaches to MaxSAT solving. In addition to computing a single representative solution for each point of the Pareto front, the approach allows for enumerating all Pareto-optimal solutions. We empirically compare the efficiency of the approach to recent competing approaches, showing the practical benefits of our approach in the contexts of learning interpretable classification rules and bi-objective set covering.
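Pareto optimality, the central notion in the abstract above, is easy to state concretely. The following naive quadratic filter (purely an illustration, nothing like the paper's MaxSAT machinery) returns the non-dominated solutions for two objectives to be minimized:

```python
def pareto_front(solutions, f1, f2):
    """Keep the Pareto-optimal solutions: `b` dominates `a` if it is <= on
    both objectives and strictly < on at least one of them."""
    front = []
    for a in solutions:
        dominated = any(
            f1(b) <= f1(a) and f2(b) <= f2(a)
            and (f1(b) < f1(a) or f2(b) < f2(a))
            for b in solutions
        )
        if not dominated:
            front.append(a)
    return front
```

With classification error and representation size as the two objectives, the front contains one point per attainable trade-off between the two.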

Incremental Maximum Satisfiability
PRESENTER: Andreas Niskanen

ABSTRACT. Boolean satisfiability (SAT) solvers allow for incremental computations, which is key to the efficient iterative employment of SAT solvers for developing complex decision and optimization procedures, including maximum satisfiability (MaxSAT) solvers. However, enabling incremental computations on the level of constraint optimization remains a noticeable challenge. While incremental computations have been identified to have great potential in speeding up MaxSAT-based approaches for solving various real-world optimization problems, enabling incremental computations in MaxSAT remains largely unexplored. In this work, we contribute towards making incremental MaxSAT solving a reality. Firstly, building on the IPASIR interface for incremental SAT solving, we propose the IPAMIR interface for implementing incremental MaxSAT solvers and for developing applications making use of incremental MaxSAT. Secondly, we expand our recent adaptation of the implicit hitting set based MaxHS MaxSAT solver into a fully-fledged incremental MaxSAT solver by implementing the IPAMIR specification in full, and detail in particular how, in addition to weight changes, assumptions are enabled without losing incrementality. Thirdly, we provide further empirical evidence on the benefits of incremental MaxSAT solving under assumptions.
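Solving under assumptions, the key primitive that IPASIR exposes and that IPAMIR lifts to the optimization level, can be illustrated with a toy DPLL procedure. This is purely a sketch of the calling convention, not the IPAMIR API: the clause database persists across calls, while assumption literals are fixed for one call only.

```python
def dpll(clauses, assignment):
    """Minimal DPLL: simplify clauses under the current assignment, then
    branch on an unassigned literal. Returns a model (set of literals)
    or None if unsatisfiable."""
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue  # clause already satisfied
        rest = [lit for lit in clause if -lit not in assignment]
        if not rest:
            return None  # conflict: clause falsified
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    lit = simplified[0][0]
    for choice in (lit, -lit):
        result = dpll(simplified, assignment | {choice})
        if result is not None:
            return result
    return None

def solve(clauses, assumptions=()):
    """IPASIR-style solve-under-assumptions: the clauses stay loaded across
    calls, while the assumptions hold only for this call."""
    return dpll(clauses, set(assumptions))
```

Repeated calls with different assumption sets, without re-adding clauses, are exactly the incremental usage pattern that the interface standardizes.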

Analysis of Core-Guided MaxSAT Using Cores and Correction Sets
PRESENTER: Nina Narodytska

ABSTRACT. Core-guided solvers are among the best-performing algorithms for solving maximum satisfiability problems. These solvers perform a sequence of relaxations of the formula, increasing the lower bound on the optimal solution at each relaxation step. In addition, the relaxations allow generating a large set of minimal cores (MUSes) of the original formula. However, the properties of these cores have not been investigated. In contrast, maximum hitting set based (MaxHS) solvers also extract a set of cores that possess a number of useful properties, e.g. the size of the minimal hitting set of the discovered cores equals the optimum when a MaxHS solver terminates.

In this work, we analyze the minimal cores and minimum correction sets (MinCSes) of the input formula and its sub-formulas that core-guided solvers produce during execution. We demonstrate that the set of MUSes that a core-guided algorithm discovers possesses the same key properties as the cores extracted by MaxHS solvers. For instance, we prove that the size of a minimum hitting set of the discovered cores equals the optimal cost. We also show that core-guided solvers discover all MinCSes of special subformulas of the input formula. We discuss the theoretical and practical implications of our results.

17:30-18:30 Session 55: Logic Lounge (FLoC)
Thinking Fast and Slow in AI

ABSTRACT. Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning. We believe that existing cognitive theories of human decision making, such as the thinking fast and slow theory, can provide insights on how to advance AI systems towards some of these capabilities. In this talk, I will present the work done by IBM and collaborators in this space, including the definition of a general architecture that is based on fast/slow solvers and a metacognitive component. I will then present experimental results on the behavior of an instance of this architecture, for AI systems that make decisions about navigating in a constrained environment. The results will show how combining the fast and slow decision modalities allows the system to evolve over time, gradually passing from slow to fast thinking with enough experience, and that this greatly benefits decision quality, resource consumption, and efficiency.


Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. She is a computer scientist with over 30 years of experience in AI research. Before joining IBM, she was a professor of computer science at the University of Padova, Italy, for 20 years. Her research interests focus on artificial intelligence, specifically constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behavior of AI systems, in particular in decision support systems for group decision making. She is a fellow of both AAAI and EurAI, has been president of IJCAI and Editor in Chief of the Journal of AI Research, and will be the next president of AAAI, starting July 2022. She co-leads the IBM AI Ethics Board and actively participates in many global multi-stakeholder initiatives on AI ethics. She is a member of the board of directors of the Partnership on AI and the industry representative on the steering committee of the Global Partnership on AI.