09:00  PRESENTER: Senne Berden ABSTRACT. Many real-world problems can be effectively solved by means of combinatorial optimization. However, appropriate models to give to a solver are not always available, and sometimes must be learned from historical data. Although some research has been done in this area, the task of learning (weighted partial) MaxSAT models has not received much attention thus far, even though such models can be used in many real-world applications. Furthermore, most existing work is limited to learning models from non-contextual data, where instances are labeled as solutions and non-solutions, but without any specification of the contexts in which those labels apply. A recent approach named HASSLE-SLS has addressed these limitations: it can jointly learn hard constraints and weighted soft constraints from labeled contextual examples. However, it is hindered by long runtimes, as evaluating even a single candidate MaxSAT model requires solving as many models as there are contexts in the training data, which quickly becomes highly expensive when the size of the model increases. In this work, we address these runtime issues. To this end, we make two contributions. First, we propose a faster model evaluation procedure that makes use of knowledge compilation. Second, we propose a genetic algorithm named HASSLE-GEN that decreases the number of evaluations needed to find good models. We experimentally show that both contributions improve on the state of the art by speeding up learning, which in turn allows higher-quality MaxSAT models to be found within a given learning time budget. 
09:30  Selecting SAT Encodings for Pseudo-Boolean and Linear Integer Constraints PRESENTER: Felix Ulrich-Oltean ABSTRACT. Many constraint satisfaction and optimisation problems can be solved effectively by encoding them as instances of the Boolean Satisfiability problem (SAT). However, even the simplest types of constraints have many encodings in the literature with widely varying performance, and the problem of selecting suitable encodings for a given problem instance is not trivial. We explore the problem of selecting encodings for pseudo-Boolean and linear constraints using a supervised machine learning approach. We show that it is possible to select encodings effectively using a standard set of features for constraint problems; however, we obtain better performance with a new set of features specifically designed for pseudo-Boolean and linear constraints. In fact, we achieve good results when selecting encodings for unseen problem classes. Our results compare favourably to AutoFolio when using the same feature set. We discuss the relative importance of instance features to the task of selecting the best encodings, and compare several variations of the machine learning method. 
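The abstract above notes that even the simplest constraint types admit many SAT encodings with very different size/performance trade-offs. As a minimal illustration (not taken from the paper), the at-most-one constraint alone can be encoded pairwise with no auxiliary variables but quadratically many clauses, whereas other encodings (sequential counter, commander, ...) use fewer clauses at the cost of auxiliary variables; this tension is precisely what makes encoding selection worthwhile:

```python
from itertools import combinations

def pairwise_at_most_one(variables):
    """Pairwise ("binomial") encoding of at-most-one over DIMACS-style
    variable numbers: for every pair of variables, at least one must be
    false. Produces O(n^2) binary clauses and no auxiliary variables;
    sequential-counter-style encodings instead use O(n) clauses plus
    O(n) auxiliary variables, hence the selection problem."""
    return [[-a, -b] for a, b in combinations(variables, 2)]
```

For example, `pairwise_at_most_one([1, 2, 3])` yields the three clauses `[[-1, -2], [-1, -3], [-2, -3]]`.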
10:00  Improved Sample Complexity Bounds for Branch-and-Cut PRESENTER: Siddharth Prasad ABSTRACT. The branch-and-cut algorithm for integer programming has a wide variety of tunable parameters that have a huge impact on its performance, but which are challenging to tune by hand. An increasingly popular approach is to use machine learning to configure these parameters based on a training set of integer programs from the application domain. We bound how large the training set should be to ensure that for any configuration, its average performance over the training set is close to its expected future performance. Our guarantees apply to parameters that control the most important aspects of branch-and-cut: node selection, branching constraint selection, and cut selection, and are sharper and more general than those from prior research. 
09:00  CSP Beyond Tractable Constraint Languages PRESENTER: Stefan Szeider ABSTRACT. The constraint satisfaction problem (CSP) is among the most studied computational problems. While NP-hard, many tractable subproblems have been identified (Bulatov 2017, Zhuk 2017). Backdoors, introduced by Williams, Gomes, and Selman (2003), gradually extend such a tractable class to all CSP instances of bounded distance to the class. Backdoor size provides a natural but rather crude distance measure between a CSP instance and a tractable class. Backdoor depth, introduced by Mählmann, Siebertz, and Vigny (2021) for SAT, is a more refined distance measure, which admits the parallel utilization of different backdoor variables. Bounded backdoor size implies bounded backdoor depth, but there are instances of constant backdoor depth and arbitrarily large backdoor size. Dreier, Ordyniak, and Szeider (2022) provided fixed-parameter algorithms for finding backdoors of small depth into Horn and 2CNF. In this paper, we consider backdoor depth for CSP. We consider backdoors w.r.t. tractable subproblems $C_\Gamma$ of the CSP defined by a constraint language $\Gamma$, i.e., where all the constraints use relations from the language $\Gamma$. Building upon Dreier et al.'s game-theoretic approach and their notion of separator obstructions, we show that for any finite, tractable, semi-conservative constraint language $\Gamma$, the CSP is fixed-parameter tractable parameterized by the backdoor depth into $\Gamma$ plus the domain size. With backdoors of low depth, we reach classes of instances that require backdoors of arbitrarily large size. Hence, our results strictly generalize several known results for CSP that are based on backdoor size. 
09:30  ABSTRACT. A constraint language G has non-redundancy f(n) if every instance of CSP(G) with n variables contains at most f(n) non-redundant constraints. If G has maximum arity r then it has non-redundancy O(n^r), but there are notable examples for which this upper bound is far from the best possible. In general, the non-redundancy of constraint languages is poorly understood and little is known beyond the trivial bounds Omega(n) and O(n^r). In this paper, we introduce an elementary algebraic framework dedicated to the analysis of the non-redundancy of constraint languages. This framework relates redundancy-preserving reductions between constraint languages to closure operators known as pattern partial polymorphisms, which can be interpreted as generic mechanisms to generate redundant constraints in CSP instances. We illustrate the power of this framework by deriving a simple characterisation of languages of arity r having non-redundancy Theta(n^r). 
10:00  Fixed-Template Promise Model Checking Problems PRESENTER: Kristina Asimi ABSTRACT. The fixed-template constraint satisfaction problem (CSP) can be seen as the problem of deciding whether a given primitive positive first-order sentence is true in a fixed structure (also called a model). We study a class of problems that generalizes the CSP simultaneously in two directions: we fix a set L of quantifiers and Boolean connectives, and we specify two versions of each constraint, one strong and one weak. Given a sentence which only uses symbols from L, the task is to distinguish whether the sentence is true in the strong sense, or it is false even in the weak sense. We classify the computational complexity of these problems for the existential positive equality-free fragment of first-order logic, i.e., L={∃,∧,∨}, and we prove some upper and lower bounds for the positive equality-free fragment, L={∃,∀,∧,∨}. The partial results are sufficient, e.g., for all extensions of the latter fragment. 
"Temporal and Data Logic, Linear Recurrences and Equation Systems": 6 papers (12 min presentation + 3 min Q&A)
09:00  Temporal Team Semantics Revisited PRESENTER: Jens Oliver Gutsfeld ABSTRACT. In this paper, we study a novel approach to asynchronous hyperproperties by reconsidering the foundations of temporal team semantics. We consider three logics: TeamLTL, TeamCTL and TeamCTL*, which are obtained by adding quantification over so-called time evaluation functions controlling the asynchronous progress of traces. We then relate synchronous TeamLTL to our new logics and show how it can be embedded into them. We show that the model checking problem for existential TeamCTL with Boolean disjunctions is highly undecidable by encoding recurrent computations of nondeterministic 2-counter machines. Finally, we present a translation of TeamCTL* to Alternating Asynchronous Büchi Automata and obtain decidability results for the path checking problem as well as restricted variants of the model checking and satisfiability problems. 
09:15  PRESENTER: Jana Hofmann ABSTRACT. We study satisfiability for HyperLTL with a ∀∗∃∗ quantifier prefix, known to be highly undecidable in general. HyperLTL can express system properties that relate multiple traces (so-called hyperproperties), which are often combined with trace properties that specify functional behavior on single traces. Following this conceptual split, we first define several safety and liveness fragments of ∀∗∃∗ HyperLTL, and characterize the complexity of their (often much easier) satisfiability problem. We then add LTL trace properties as functional specifications. Though (highly) undecidable in many cases, this way of combining "simple" HyperLTL and arbitrary LTL also leads to interesting new decidable fragments. This systematic study of ∀∗∃∗ fragments is complemented by a new algorithm for ∀∃∗ HyperLTL satisfiability, which is incomplete but often successful. 
09:30  Reasoning on Data Words over Numeric Domains PRESENTER: Anthony Widjaja Lin ABSTRACT. We introduce parametric semilinear data logic (pSDL) for reasoning about data words with numeric data. The logic allows parameters, and Presburger guards on the data and on the Parikh image of equivalence classes (i.e., data counting), allowing us to capture data languages like: (1) each data value occurs at most once in the word and is an even number; (2) the subset of the positions containing data values divisible by 4 has the same number of a's and b's; (3) the data value with the highest frequency in the word is divisible by 3; and (4) each data value occurs at most once, and the set of data values forms an interval. We provide decidability and complexity results for the problem of membership and satisfiability checking over these models. In contrast to two-variable logic of data words and data automata (which also permit a form of data counting but no arithmetic over numeric domains, and have incomparable expressiveness), pSDL has elementary complexity of satisfiability checking. We show interesting potential applications of our models in databases and verification. 
09:45  PRESENTER: Sławomir Lasota ABSTRACT. We study orbit-finite systems of linear equations, in the setting of sets with atoms. Our principal contribution is a decision procedure for solvability of such systems. The procedure works for every field (and even commutative ring) under mild effectiveness assumptions, and reduces a given orbit-finite system to a number of finite ones: exponentially many in general, but polynomially many when the atom dimension of input systems is fixed. Towards obtaining the procedure, we push further the theory of vector spaces generated by orbit-finite sets, and show that each such vector space admits an orbit-finite basis. This fundamental property is a key tool in our development, but should also be of wider interest. 
10:00  Computing the Density of the Positivity Set for Linear Recurrence Sequences ABSTRACT. The set of indices that correspond to the positive entries of a sequence of numbers is called its positivity set. In this paper, we study the density of the positivity set of a given linear recurrence sequence, that is, the question of how much more frequent the positive entries are compared to the non-positive ones. We show that one can compute this density to arbitrary precision, as well as decide whether it is equal to zero (or one). If the sequence is diagonalisable, we prove that its positivity set is finite if and only if its density is zero. Lastly, arithmetic properties of densities are treated; in particular, we prove that it is decidable whether the density is a rational number, given that the recurrence sequence has at most one pair of dominant complex roots. 
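To make the notion concrete: the density studied in the abstract is a limit over the whole infinite sequence, but for a sequence with a periodic sign pattern it agrees with the fraction of strictly positive entries in a long finite prefix. A minimal sketch (illustrative names, not the paper's algorithm):

```python
from fractions import Fraction

def empirical_positivity_density(coeffs, init, n_terms):
    """Fraction of strictly positive entries among the first n_terms of
    the LRS u_{n+k} = coeffs[0]*u_{n+k-1} + ... + coeffs[k-1]*u_n.
    Only a finite approximation of the density in the abstract, which
    is defined as a limit over the infinite sequence."""
    u = list(init)
    k = len(coeffs)
    while len(u) < n_terms:
        # Newest term: dot product of coeffs with the last k terms,
        # most recent first.
        u.append(sum(c * x for c, x in zip(coeffs, reversed(u[-k:]))))
    return Fraction(sum(1 for x in u[:n_terms] if x > 0), n_terms)
```

For instance, u_{n+2} = u_{n+1} - u_n with u_0 = 0, u_1 = 1 is periodic with sign pattern (0, 1, 1, 0, -1, -1), so over any multiple of six terms the empirical density is exactly 1/3.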
10:15  PRESENTER: James Worrell ABSTRACT. It is a longstanding open problem whether there is an algorithm to decide the Skolem Problem for linear recurrence sequences (LRS) over the integers, namely whether a given such sequence $\langle u_n\rangle_{n=0}^\infty$ has a zero term (i.e., whether $u_n=0$ for some $n$). A major breakthrough in the early 1980s established decidability for LRS of order four or less, i.e., for LRS in which every new term depends linearly on the previous four (or fewer) terms. The Skolem Problem for LRS of order $5$ or more, in particular, remains a major open challenge to this day. Our main contributions in this paper are as follows: First, we show that the Skolem Problem is decidable for \emph{reversible} LRS of order $7$ or less. (An integer LRS $\langle u_n \rangle_{n=0}^{\infty}$ is reversible if its unique extension to a bi-infinite LRS $\langle u_n \rangle_{n=-\infty}^{\infty}$ also takes exclusively integer values; a typical example is the classical Fibonacci sequence, whose bi-infinite extension is $\langle \ldots, 5, -3, 2, -1, 1, 0, 1, 1, 2, 3, 5, \ldots \rangle$.) Second, assuming the \emph{Skolem Conjecture} (a central hypothesis in Diophantine analysis, also known as the \emph{Exponential Local-Global Principle}), we show that the Skolem Problem for LRS of order $5$ is decidable, and exhibit a concrete procedure for solving it. 
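The reversibility notion in the abstract can be checked concretely for its own example: running the Fibonacci recurrence u_{n+2} = u_{n+1} + u_n backwards via the inverted recurrence u_n = u_{n+2} - u_{n+1} stays within the integers. A small sketch (illustrative only, not from the paper):

```python
def extend_backwards(u, steps):
    """Extend an initial segment of the Fibonacci LRS
    (u_{n+2} = u_{n+1} + u_n) to negative indices using the inverted
    recurrence u_n = u_{n+2} - u_{n+1}. The extension remaining
    integral is exactly what the abstract calls reversibility."""
    u = list(u)
    for _ in range(steps):
        # New front element: the term preceding the current first two.
        u.insert(0, u[1] - u[0])
    return u
```

Starting from the segment [0, 1, 1, 2, 3, 5], five backward steps reproduce the bi-infinite extension quoted in the abstract: [5, -3, 2, -1, 1, 0, 1, 1, 2, 3, 5].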
09:00  PRESENTER: Olaf Beyersdorff ABSTRACT. To date, we know only a few handcrafted quantified Boolean formulas (QBFs) that are hard for central QBF resolution systems such as Q-Res and QU-Res, and only one specific QBF family to separate Q-Res and QU-Res. Here we provide a general method to construct hard formulas for Q-Res and QU-Res. The construction uses simple propositional formulas (e.g. minimally unsatisfiable formulas) in combination with easy QBF gadgets (\Sigma_2^b formulas without constant winning strategies). This leads to a host of new hard formulas, including new classes of hard random QBFs. We further present generic constructions for formulas separating Q-Res and QU-Res, and for separating Q-Res and long-distance Q-Res. 
09:30  Should decisions in QCDCL follow prefix order? PRESENTER: Benjamin Böhm ABSTRACT. Quantified conflict-driven clause learning (QCDCL) is one of the main solving approaches for quantified Boolean formulas (QBF). One of the differences between QCDCL and propositional CDCL is that QCDCL typically follows the prefix order of the QBF for making decisions. We investigate an alternative model for QCDCL solving where decisions can be made in arbitrary order. The resulting system QCDCL-ANY is still sound and terminating, but does not necessarily allow one to always learn asserting clauses or cubes. To address this potential drawback, we additionally introduce two subsystems that guarantee to always learn asserting clauses (QCDCL-UNI-ANY) and asserting cubes (QCDCL-EXI-ANY), respectively. We model all four approaches by formal proof systems and show that QCDCL-UNI-ANY is exponentially better than QCDCL on false formulas, whereas QCDCL-EXI-ANY is exponentially better than QCDCL on true QBFs. Technically, this involves constructing specific QBF families and showing lower and upper bounds in the respective proof systems. We complement our theoretical study with some initial experiments that confirm our theoretical findings. 
10:00  Changing Partitions in Rectangle Decision Lists ABSTRACT. Rectangle decision lists are a form of decision lists that were recently shown to have applications in the proof complexity of certain OBDD-based QBF solvers. We consider a version of rectangle decision lists with changing partitions, which corresponds to QBF solvers that may change the variable order of the OBDDs they produce. We show that even allowing a single partition change generally leads to exponentially more succinct decision lists. More generally, we show that there is a succinctness hierarchy: for every $k\in \mathbb{N}$, when going from $k$ partition changes to $k+1$, there are functions that can be represented exponentially more succinctly. As an application, we show a similar hierarchy for OBDD-based QBF solvers, which shows that such solvers could in principle be far more efficient if changing the variable order of the constructed OBDDs is allowed. 
09:15  Cutting a Proof into Bite-Sized Chunks: Incrementally Proving Termination in Higher-Order Term Rewriting 
09:15  Treewidth-aware Reductions of Normal ASP to SAT: Is Normal ASP Harder than SAT after All? ABSTRACT. Answer Set Programming (ASP) is an often-invoked paradigm and problem modeling and solving toolkit for KR. There are plenty of results dedicated to studying the hardness of (fragments of) ASP. So far, these studies have resulted in characterizations in terms of computational complexity as well as in fine-grained insights presented in the form of dichotomy-style results, lower bounds when translating to other formalisms like Boolean satisfiability (SAT), and even detailed parameterized complexity landscapes. A quite generic and prominent parameter in parameterized complexity originating from graph theory is the so-called treewidth, which in a sense captures the structural density of a program. Recently, there has been an increase in the number of treewidth-based solvers related to SAT. While several translations from (normal) ASP to SAT exist, none of them preserves treewidth or is at least aware of the treewidth increase. This paper deals with a novel reduction from normal ASP to SAT that is aware of the treewidth, and guarantees that a slight increase of treewidth is indeed sufficient. We then also establish that, when considering treewidth, already the fragment of normal ASP is slightly harder than SAT (under reasonable assumptions in computational complexity). This also confirms that our reduction probably cannot be significantly improved and that the slight increase of treewidth is unavoidable. 
09:30  Inference and Learning with Model Uncertainty in Probabilistic Logic Programs PRESENTER: Victor Verreet ABSTRACT. An issue that has received limited attention in probabilistic logic programming (PLP) is the modeling of so-called epistemic uncertainty, the uncertainty about the model itself. Accurately quantifying this uncertainty is paramount to robust inference, learning and ultimately decision making. We introduce BetaProbLog, a PLP language that can model epistemic uncertainty. BetaProbLog has sound semantics, an effective inference algorithm that combines Monte Carlo techniques with knowledge compilation, and a parameter learning algorithm. 
09:45  Tractable Reasoning using Logic Programs with Intensional Concepts PRESENTER: Ricardo Gonçalves ABSTRACT. Recent developments triggered by initiatives such as the Semantic Web, Linked Open Data, the Web of Things, and geographic information systems resulted in the wide and increasing availability of machine-processable data and knowledge in the form of data streams and knowledge bases. Applications building on such knowledge require reasoning with modal and intensional concepts, such as time, space, and obligations, that are defeasible. E.g., in the presence of data streams, conclusions may have to be revised due to newly arriving information. The current literature features a variety of domain-specific formalisms that allow for defeasible reasoning using specific intensional concepts. However, many of these formalisms are computationally intractable and limited to one of the mentioned application domains. In this paper, we define a general method for obtaining defeasible inferences over intensional concepts, and we study conditions under which these inferences are computable in polynomial time. 
10:00  Towards Dynamic Consistency Checking in Goal-directed Predicate Answer Set Programming PRESENTER: Joaquin Arias ABSTRACT. Goal-directed evaluation of Answer Set Programs is gaining traction thanks to its amenability to create AI systems that can, due to the evaluation mechanism used, generate explanations and justifications. s(CASP) is one of these systems and has already been used to write reasoning systems in several fields. It provides enhanced expressiveness w.r.t. other ASP systems due to its ability to use constraints, data structures, and unbound variables natively. However, the performance of existing s(CASP) implementations is not on par with other ASP systems: model consistency is checked once models have been generated, in keeping with the generate-and-test paradigm. In this work, we present a variation of the top-down evaluation strategy, termed Dynamic Consistency Checking, which interleaves model generation and consistency checking. This makes it possible to determine when a literal is not compatible with the denials associated with the global constraints in the program, prune the current execution branch, and choose a different alternative. This strategy is especially (but not exclusively) relevant in problems with a high combinatorial component. We have experimentally observed speedups of up to 90x w.r.t. the standard versions of s(CASP). 
10:15  ApproxASP – A Scalable Approximate Answer Set Counter PRESENTER: Mohimenul Kabir ABSTRACT. Answer Set Programming (ASP) is a framework in artificial intelligence and knowledge representation for declarative modeling and problem solving. Modern ASP solvers focus on the computation or enumeration of answer sets. However, a variety of probabilistic applications in reasoning or logic programming require counting answer sets. While counting can be done by enumeration, simple enumeration becomes immediately infeasible if the number of solutions is high. On the other hand, approaches to exact counting are of high worst-case complexity; in fact, even in propositional model counting, exact counting quickly becomes impractical. In this work, we present a scalable approach to approximate counting for ASP. Our approach is based on systematically adding parity (XOR) constraints to ASP programs, which divide the search space. We prove that adding random XOR constraints partitions the answer sets of an ASP program. In practice, we use a Gaussian-elimination-based approach by lifting ideas from SAT to ASP and integrate it into a state-of-the-art ASP solver, which we call ApproxASP. Finally, our experimental evaluation shows the scalability of our approach over existing ASP systems. 
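The key fact the abstract relies on, that an XOR constraint splits the solution set into two disjoint cells, can be illustrated in miniature with a brute-force propositional model counter. All names here are illustrative; real hashing-based counters like ApproxASP work on answer sets with Gaussian elimination rather than enumeration:

```python
from itertools import product

def models(n_vars, clauses):
    """Brute-force all satisfying assignments of a CNF over n_vars
    variables; clauses use DIMACS-style signed integers."""
    return [bits for bits in product([0, 1], repeat=n_vars)
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                   for c in clauses)]

def xor_split(solutions, xor_vars):
    """Partition solutions by the parity of an XOR constraint over
    xor_vars (0-based indices). Every solution lands in exactly one
    cell, so the two cell counts always sum to the total count."""
    even = [s for s in solutions if sum(s[v] for v in xor_vars) % 2 == 0]
    odd = [s for s in solutions if sum(s[v] for v in xor_vars) % 2 == 1]
    return even, odd
```

For the CNF {x1 or x2, not-x1 or x3} there are four models; an XOR over x1 and x2 splits them into cells of sizes 1 and 3, which sum back to 4. Hashing-based counters repeat such splits with random XORs until cells are small enough to count directly.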
12:10  The Topology of Surprise PRESENTER: David Fernández-Duque ABSTRACT. Knowability has a topological interpretation, where a proposition is knowable iff it is true in a neighbourhood of the evaluation point. In this paper we show how the Cantor derivative and the topological "perfect core", i.e., the largest dense-in-itself subspace of a topological space, provide a more refined topological model of learnability. In particular, we use these notions to elucidate the surprise exam paradox and similar epistemic puzzles. We then provide a complete deductive calculus and prove PSPACE-completeness of the resulting modal logic. 
12:10  PRESENTER: Huifan Yang ABSTRACT. Open relation extraction (ORE) aims to assign semantic relationships among arguments, which is essential to the automatic construction of knowledge graphs (KGs). Previous ORE methods and some benchmark datasets consider a relation between two arguments as definitely existing and in a simple single-span form, neglecting possible non-existent relationships and flexible, expressive multi-span relations. However, detecting non-existent relations is necessary for a pipelined information extraction system (first performing named entity recognition, then relation extraction), and multi-span relationships contribute to the diversity of connections in KGs. To fulfill the practical demands of ORE, we design a novel Query-based Multi-head Open Relation Extractor (QuORE) to extract single/multi-span relations and detect non-existent relationships effectively. Moreover, we reconstruct some public datasets covering English and Chinese to derive augmented and multi-span relation tuples. Extensive experiment results show that our method outperforms the state-of-the-art ORE model LOREM in the extraction of existing single/multi-span relations and in overall performance on four datasets with non-existent relationships. Our code and data are publicly available. 
12:10  Towards a SAT Encoding for Quantum Circuits PRESENTER: Robert Wille ABSTRACT. Satisfiability Testing (SAT) techniques are well-established in classical computing, where they are used to solve a broad variety of problems, e.g., in the design of classical circuits and systems. Analogous to the classical realm, quantum algorithms are usually modelled as circuits and similar design tasks need to be tackled. Thus, it is natural to pose the question whether these design tasks in the quantum realm can also be approached using SAT techniques. To the best of our knowledge, no SAT formulation for arbitrary quantum circuits exists and it is unknown whether such an approach is feasible at all. In this work, we define a propositional SAT encoding that, in principle, can be applied to arbitrary quantum circuits. However, we show that, due to the inherent complexity of representing quantum states, constructing such an encoding is not feasible in general. Therefore, we establish general criteria for determining the feasibility of the proposed encoding and identify classes of quantum circuits fulfilling these criteria. We explicitly demonstrate how the proposed encoding can be applied to the class of Clifford circuits as a representative. Finally, we empirically demonstrate the applicability and efficiency of the proposed encoding for Clifford circuits. With these results, we lay the foundation for continuing the ongoing success of SAT in classical circuit and system design for quantum circuits. 
Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).
14:00  Solving with Provably Correct Results: Beyond Satisfiability, and Towards Constraint Programming PRESENTER: Ciaran McCreesh ABSTRACT. Nowadays, Boolean Satisfiability (SAT) solvers aren't allowed to just assert that an instance is unsatisfiable: they're expected to also produce an independently verifiable proof that backs up this claim. This so-called proof logging has done wonders for solver reliability, and has also helped with the social acceptability of computer-generated mathematical results. What if we could do the same for constraint programming, producing an auditable solving process where people whose lives or livelihoods are affected by a computer's decision would no longer have to resort to hoping that the system is free of bugs? We will start this tutorial by explaining what a proof really is, and what it means for an algorithm to certify the correctness of its answers by using proof logging. We will give a brief overview of the (extended) resolution and DRAT proof systems used in SAT proof logging. Then we will look at what is needed to bring proof logging to a broader range of solving algorithms, starting with some subgraph-finding algorithms, and moving towards a full CP solver with multiple global constraints and even some reformulation. We will show, by example, what you need to do to use this technology in your own algorithms, and how this relates to the underlying proof methods. We'll finish by discussing how proof logging can also support advanced solving techniques such as symmetry breaking. Surprisingly, all of this can be done in a simple proof format known as "cutting planes with strengthening" that does not know anything about non-binary variables, global constraints, graphs, groups, or symmetries.
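The clausal proof systems the tutorial mentions rest on one cheap check: a proof step (a learned clause) is accepted if assuming its negation and running unit propagation over the formula yields a conflict, the "reverse unit propagation" (RUP) criterion underlying DRAT-style checkers. A deliberately naive sketch of that one check, not a real proof checker:

```python
def is_rup(cnf, clause):
    """Reverse unit propagation check: assume every literal of `clause`
    is false, then unit-propagate over `cnf` (DIMACS-style signed
    literals). If some clause becomes fully falsified, a conflict is
    reached and `clause` is a valid RUP proof step."""
    assumed = {-lit for lit in clause}  # negation of the candidate clause
    changed = True
    while changed:
        changed = False
        for c in cnf:
            if any(lit in assumed for lit in c):
                continue  # clause already satisfied under assumptions
            free = [lit for lit in c if -lit not in assumed]
            if not free:
                return True  # conflict: every literal falsified
            if len(free) == 1:
                assumed.add(free[0])  # unit clause forces this literal
                changed = True
    return False  # propagation stabilized without conflict
```

For example, with cnf = [[1], [-1, 2]], the clause [2] is RUP (assuming not-2, the unit 1 propagates and falsifies [-1, 2]), whereas against [[1]] alone it is not.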

14:00  Trajectory Optimization for Safe Navigation in Maritime Traffic Using Historical Data PRESENTER: Chaithanya Basrur ABSTRACT. Increasing maritime trade often results in congestion in busy ports, thereby necessitating planning methods to avoid close-quarter situations among vessels for safer navigation. Rapid digitization and automation of port operations and vessel navigation provide unique opportunities for significantly improving navigation safety. Our key contributions are as follows. \textit{First}, given a set of future candidate trajectories for vessels involved in a traffic hotspot zone, we develop a multi-agent trajectory optimization method to choose trajectories that result in the best overall close-quarter risk reduction. Our novel MILP-based trajectory optimization method is more than an order of magnitude faster than a standard MILP for this problem, and runs in near real-time. \textit{Second}, although automation has improved in maritime traffic, current vessel traffic systems (in our case study of a busy Asian port) predict only a \textit{single} future trajectory of a vessel based on linear extrapolation. Therefore, using historical data, we learn a \textit{generative model} that predicts \textit{multiple} possible future trajectories of each vessel in a given traffic hotspot area, reflecting past movement patterns of vessels, and integrate it with our trajectory optimizer. \textit{Third}, we validate both our trajectory optimization and generative model extensively using a real-world maritime traffic dataset containing 1.7 million trajectories over 1.5 years from one of the busiest ports in the world, demonstrating effective risk reduction. 
14:30  PRESENTER: Raphaël Boudreault ABSTRACT. Ship refit projects require ongoing plan management to adapt to arising work and disruptions. Planners must sequence work activities in the best order possible to complete the project in the shortest time or within a defined period while minimizing overtime costs. Activity scheduling must consider milestones, resource availability constraints, and precedence relations. We propose a constraint programming approach for detailed ship refit planning at two granularity levels, daily and hourly schedules. The problem was modeled using the cumulative global constraint, and the Solution-Based Phase Saving heuristic was used to speed up the search, thus achieving the industrialization goals. Based on the evaluation of seven realistic instances over three objectives, the heuristic strategy found better solutions significantly faster than a baseline search strategy. The method was integrated into Refit Optimizer, a cloud-based prototype solution that can import projects from Primavera P6 and optimize plans. 
15:00  PRESENTER: Timothy Curry ABSTRACT. Software-defined networks (SDNs) define a programmable network fabric that can be reconfigured to respect global network properties. Securing against adversaries who try to exploit the network is an objective that conflicts with providing functionality. This paper proposes a two-stage mixed-integer programming framework. The first stage automates routing decisions for the flows to be carried by the network while maximizing readability and ease of use for network engineers. The second stage is meant to quickly respond to security breaches by automatically deciding on network countermeasures to block the detected adversary. Both stages are computationally challenging, and the security stage leverages large neighborhood search to quickly deliver effective response strategies. The approach is evaluated on synthetic networks of various sizes and shown to be effective for both its functional and security objectives. 
14:00  PRESENTER: Francesco Dagnino ABSTRACT. Logical relations built on top of an operational semantics are one of the most successful proof methods in programming language semantics. In recent years, more and more expressive notions of operationally-based logical relations have been designed and applied to specific families of languages. However, a unifying abstract framework for operationally-based logical relations is still missing. We show how fibrations can provide a uniform treatment of operational logical relations, using as a reference example a $\lambda$-calculus with generic effects endowed with a novel, abstract operational semantics defined on a large class of categories. Moreover, this abstract perspective allows us to give a solid mathematical ground also to differential logical relations (a recently introduced notion of higher-order distance between programs), both pure and effectful, bringing them back to a common picture with traditional ones. 
14:30  PRESENTER: Paolo Pistone ABSTRACT. We explore the possibility of extending Mardare et al.'s quantitative algebras to the structures which naturally emerge from Combinatory Logic and the lambda-calculus. First of all, we show that the framework is indeed applicable to those structures, and give soundness and completeness results. Then, we prove some negative results that clearly delineate to what extent categories of metric spaces can be models of such theories. We conclude by giving several examples of non-trivial higher-order quantitative algebras. 
15:00  PRESENTER: Jonathan Sterling ABSTRACT. We propose a new sheaf semantics for secure information flow over a space of abstract behaviors, based on synthetic domain theory: security classes are open/closed partitions, types are sheaves, and redaction of sensitive information corresponds to restricting a sheaf to a closed subspace. Our security-aware computational model satisfies termination-insensitive noninterference automatically, and therefore constitutes an intrinsic alternative to state-of-the-art extrinsic/relational models of noninterference. Our semantics is the latest application of Sterling and Harper's recent reinterpretation of phase distinctions and noninterference in programming languages in terms of Artin gluing and topos-theoretic open/closed modalities. Prior applications include parametricity for ML modules, the proof of normalization for cubical type theory by Sterling and Angiuli, and the cost-aware logical framework of Niu et al. In this paper we employ the phase distinction perspective twice: first to reconstruct the syntax and semantics of secure information flow as a lattice of phase distinctions between "higher" and "lower" security, and second to verify the computational adequacy of our sheaf semantics with respect to a version of Abadi et al.'s dependency core calculus to which we have added a construct for declassifying termination channels. 
Two Languages, One System: Tightly Connecting XSB Prolog and Python
Abstract: Python, ranked first on the May 2022 Tiobe index, is a hugely popular language, heavily used in machine learning and other applications. Prolog, ranked twenty-first on the May 2022 Tiobe index, is less popular but has important reasoning and knowledge representation capabilities, particularly since modern Prologs support constraint-based reasoning, tabling-based deduction, and probabilistic inference. Despite their differences, Prolog and Python share important commonalities. First, both Prolog and CPython (the standard Python implementation) are written in C with well-developed interfaces to other C programs. In addition, both languages are dynamically typed, with data structures that are recursively generated in just a few ways. In fact, nearly all core data structures of the two languages can be efficiently bi-translated, leading to a tight connection of the two systems. This talk presents the design, experience, and implications of such a connection using XSB Prolog version 5.0. The connection for XSB to call Python has led to XSB orchestrating commercial projects using interfaces to Elasticsearch, dense vector storage, NLP systems, Google Maps, and to a 14.6-billion-triple Wikidata graph. The connection for Python to call XSB allows XSB to be imported like any other Python module, so that XSB can easily be invoked from Jupyter notebooks or other graphical interfaces. On a more speculative level, the talk mentions how this work might be leveraged for research in neurosymbolic learning, natural language processing, and cross-language type inference.
14:00  PRESENTER: Senthil Rajasekaran ABSTRACT. The problems of verification and realizability are two central themes in the analysis of reactive systems. When multi-agent systems are considered, these problems have natural analogues: existence (non-emptiness) of pure-strategy Nash equilibria and verification of pure-strategy Nash equilibria. Recently, this body of work has begun to include finite-horizon temporal goals. With finite-horizon temporal goals, there is a natural hierarchy of goal representations, ranging from deterministic finite automata (DFA), to nondeterministic finite automata (NFA), to alternating finite automata (AFA), with a worst-case exponential gap between each successive representation. Previous works showed that the realizability problem with DFA goals is PSPACE-complete, while the realizability problem with temporal logic goals is in 2EXPTIME. In this work we study both the realizability and the verification problems with respect to various goal representations. We first show that the realizability problem for AFA goals is 2EXPTIME-complete, thus establishing a strict complexity gap between realizability with respect to DFA goals and with respect to AFA goals. We then contrast this complexity gap with the complexity of the verification problem, where we show that verification with respect to DFA goals, NFA goals, and AFA goals is PSPACE-complete in each case. 
14:30  Towards an Enthymeme-Based Communication Framework in Multi-Agent Systems PRESENTER: Alison R. Panisson ABSTRACT. Communication is one of the most important aspects of multi-agent systems. Among the different communication techniques applied to multi-agent systems, argumentation-based approaches have received special interest from the community, because allowing agents to exchange arguments provides a rich form of communication. In contrast to the benefits that argumentation-based techniques provide to multi-agent communication, the additional information exchanged by agents places extra weight on the communication infrastructure, which could restrict the practical use of such techniques. In this work, we propose an argumentation framework whereby agents are able to exchange shorter messages when engaging in dialogues, by omitting information that is common knowledge (e.g., information about a shared multi-agent organisation). In particular, we focus on using enthymemes, shared argumentation schemes (i.e., reasoning patterns from which arguments are instantiated), and common organisational knowledge to build an enthymeme-based communication framework. We show that our approach addresses some of Grice's maxims, in particular that agents can be brief in communication, without any loss in the content of the intended arguments. 
15:00  Automatic Synthesis of Dynamic Norms for Multi-Agent Systems PRESENTER: Giuseppe Perelli ABSTRACT. Norms have been widely proposed to coordinate and regulate multi-agent system (MAS) behaviour. We consider the problem of synthesising and revising the set of norms in a normative MAS to satisfy a design objective expressed in Alternating-Time Temporal Logic (ATL*). ATL* is a well-established language for strategic reasoning, which allows the specification of norms that constrain the strategic behaviour of agents. We focus on dynamic norms, that is, norms corresponding to Mealy machines, which allow us to place different constraints on the agents' behaviour depending on the state of the norm and the state of the underlying MAS. We show that synthesising dynamic norms is in (k+1)-EXPTIME, where k is the alternation depth of quantifiers in the ATL* specification. Note that for typical cases of interest, k is either 1 or 2. We also study the problem of removing existing norms to satisfy a new objective, which we show to be 2EXPTIME-complete. 
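The dynamic norms above are Mealy machines: on each observed event they emit an output (here, the set of currently forbidden actions) and move to a new state. A minimal sketch in Python may make this concrete (the alarm/reset norm is entirely hypothetical, chosen only to show state-dependent constraints):

```python
class MealyNorm:
    """A dynamic norm as a Mealy machine: on each observed MAS event it
    emits the set of actions it currently forbids, then updates its
    internal state."""
    def __init__(self, transitions, start):
        # transitions: (state, event) -> (forbidden_actions, next_state)
        self.transitions = transitions
        self.state = start

    def step(self, event):
        forbidden, self.state = self.transitions[(self.state, event)]
        return forbidden

# Hypothetical norm: after an 'alarm' event, forbid 'enter' until 'reset'.
norm = MealyNorm({
    ("ok", "alarm"):     ({"enter"}, "locked"),
    ("ok", "reset"):     (set(), "ok"),
    ("locked", "alarm"): ({"enter"}, "locked"),
    ("locked", "reset"): (set(), "ok"),
}, start="ok")

print(norm.step("alarm"))  # {'enter'}: the constraint now depends on the norm state
print(norm.step("reset"))  # set(): the prohibition is lifted
```

The point is only that the same event stream yields different constraints depending on the machine's state, which is what distinguishes dynamic norms from static ones.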
"Lambda Calculus, Quantum Programming, Games in Category Theory": 6 papers (12 min presentation + 2-3 min Q&A)
14:00  PRESENTER: Paolo Pistone ABSTRACT. We show that an intuitionistic variation on counting propositional logic corresponds, in the sense of Curry and Howard, to an expressive type system for the probabilistic event lambda-calculus, itself a vehicle calculus into which both call-by-name and call-by-value evaluation of discrete randomized functional programs can be captured. Remarkably, proofs (respectively, types) not only guarantee that validity (respectively, termination) holds, but also reveal the underlying probability. We finally show that by endowing the introduced type system with an intersection operator, one gets a system precisely capturing the probabilistic behavior of lambda-terms. 
14:15  Resource approximation for the lambda-mu-calculus ABSTRACT. The lambda-mu-calculus plays a central role in the theory of programming languages, as it extends the Curry-Howard correspondence to classical logic. A major drawback is that it does not satisfy Böhm's Theorem and lacks the corresponding notion of approximation. On the contrary, we show that Ehrhard and Regnier's Taylor expansion can be easily adapted, thus providing a resource-conscious approximation theory. This produces a sensible lambda-mu-theory with which we prove some advanced properties of the lambda-mu-calculus, such as Stability and the Perpendicular Lines Property, from which the impossibility of parallel computations follows. 
14:30  Graded Monads and Behavioural Equivalence Games PRESENTER: Chase Ford ABSTRACT. The framework of graded semantics uses graded monads to capture behavioural equivalences of varying granularity, for example as found on the linear-time/branching-time spectrum, over general system types. We describe a generic Spoiler-Duplicator game for graded semantics that is extracted from the given graded monad, and may be seen as playing out an equational proof; instances include standard pebble games for simulation and bisimulation as well as games for trace-like equivalences and coalgebraic behavioural equivalence. Considerations on an infinite variant of such games lead to a novel notion of infinite-depth graded semantics. Under reasonable restrictions, the infinite-depth graded semantics associated to a given graded equivalence can be characterized in terms of a determinization construction for coalgebras under the equivalence at hand. 
14:45  The Pebble-Relation Comonad in Finite Model Theory PRESENTER: Yoàv Montacute ABSTRACT. The pebbling comonad, introduced by Abramsky, Dawar and Wang, provides a categorical interpretation for the k-pebble games from finite model theory. The co-Kleisli category of the pebbling comonad specifies equivalences under different fragments and extensions of infinitary k-variable logic. Moreover, the coalgebras over this pebbling comonad characterise treewidth and correspond to tree decompositions. In this paper we introduce the pebble-relation comonad, which characterises pathwidth and whose coalgebras correspond to path decompositions. We further show that the existence of a co-Kleisli morphism in this comonad is equivalent to truth preservation in the restricted conjunction fragment of k-variable infinitary logic. We do this using Dalmau's pebble-relation game and an equivalent all-in-one pebble game. We then provide a similar treatment of the corresponding co-Kleisli isomorphisms via a bijective version of the all-in-one pebble game with a hidden pebble placement. Finally, as a consequence, we show a new Lovász-type theorem relating pathwidth to the restricted conjunction fragment of k-variable infinitary logic with counting quantifiers. 
15:00  Quantum Expectation Transformers for Cost Analysis PRESENTER: Vladimir Zamdzhiev ABSTRACT. We introduce a new kind of expectation transformer for a mixed classical-quantum programming language. Our semantic approach relies on a new notion of cost structure, which we introduce and which can be seen as a specialisation of the Kegelspitzen of Keimel and Plotkin. We show that our weakest precondition analysis is both sound and adequate with respect to the operational semantics of the language. Using the induced expectation transformer, we provide formal analysis methods for the expected cost analysis and expected value analysis of classical-quantum programs. We illustrate the usefulness of our techniques by computing the expected cost of several well-known quantum algorithms and protocols, such as coin tossing, repeat until success, entangled state preparation, and quantum walks. 
15:15  PRESENTER: Junyi Liu ABSTRACT. We study expected runtimes for quantum programs. Inspired by recent work on probabilistic programs, we first define expected runtime as a generalisation of quantum weakest precondition. Then, we show that the expected runtime of a quantum program can be represented as the expectation of an observable (in physics). A method for computing the expected runtimes of quantum programs in finite-dimensional state spaces is developed. Several examples are provided as applications of this method, including computing the expected runtime of the quantum Bernoulli factory, a quantum algorithm for generating random numbers. In particular, using our new method, an open problem of computing the expected runtime of quantum random walks introduced by Ambainis et al. (STOC 2001) is solved. 
14:00  SAT Preprocessors and Symmetry ABSTRACT. Exploitation of symmetries is an indispensable approach to solving certain classes of difficult SAT instances. Numerous techniques for the use of symmetry in SAT have evolved over the past few decades. But no matter how symmetries are used precisely, they have to be detected first. We investigate how to detect more symmetry, faster. The initial idea is to reap the benefits of SAT preprocessing for symmetry detection. As it turns out, applying an off-the-shelf preprocessor before handling symmetry does not work: the preprocessor can haphazardly remove symmetry from formulas, severely impeding symmetry exploitation. Our main contribution is a theoretical framework that captures the relationship between SAT preprocessing techniques and symmetry. Based on this, we create a symmetry-aware preprocessor that can be applied safely before handling symmetry. We then demonstrate that applying the preprocessor not only substantially decreases symmetry detection and breaking times, but also uncovers hidden symmetry not detectable in the original instances. The novelty of our approach lies in the fact that we make use of SAT techniques for symmetry detection, yielding a new application-specific approach to symmetry detection for SAT. 
14:30  PRESENTER: Jakob Bach ABSTRACT. Hard combinatorial problems such as propositional satisfiability are ubiquitous. The holy grail is a solution method that shows good performance on all problem instances. However, new approaches emerge regularly, some of which are complementary to existing solvers in that they run faster only on some instances but not on many others. While portfolios, i.e., sets of solvers, have been touted as useful, putting together such portfolios also needs to be efficient. In particular, it remains an open question how well portfolios can exploit the complementarity of solvers. This paper features a comprehensive analysis of portfolios of recent SAT solvers, namely those from the SAT Competitions 2020 and 2021. We determine optimal portfolios with exact and approximate approaches and study the impact of portfolio size k on performance. We also investigate how effective off-the-shelf prediction models are for instance-specific solver recommendations. One result is that the portfolios found with an approximate approach are as good as the optimal solution in practice. We also observe that marginal returns decrease very quickly with larger k, and our prediction models do not yield better performance beyond very small portfolio sizes. 
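The portfolio value in the abstract above is typically measured against the "virtual best solver" of the portfolio: each instance is charged the runtime of the fastest portfolio member on it. A brute-force sketch in Python of exact size-k portfolio selection (the runtime matrix is made up for illustration; the paper's exact and approximate methods are of course more sophisticated):

```python
from itertools import combinations

def portfolio_cost(runtimes, portfolio):
    """Total runtime when each instance is handled by the fastest
    portfolio member (the portfolio's virtual best solver)."""
    return sum(min(inst[s] for s in portfolio) for inst in runtimes)

def best_portfolio(runtimes, k):
    """Exact selection: try every size-k subset of solvers."""
    solvers = runtimes[0].keys()
    return min(combinations(solvers, k),
               key=lambda p: portfolio_cost(runtimes, p))

# Hypothetical per-instance runtimes (seconds) for three solvers.
runtimes = [
    {"A": 10, "B": 50, "C": 60},
    {"A": 90, "B": 5,  "C": 80},
    {"A": 70, "B": 65, "C": 60},
]
for k in (1, 2, 3):
    p = best_portfolio(runtimes, k)
    print(k, sorted(p), portfolio_cost(runtimes, p))
```

On this toy matrix the costs are 120, 80, 75 for k = 1, 2, 3: most of the gain comes from the first added solver, mirroring the diminishing marginal returns the paper reports at scale.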
15:00  SAT-based Leximax Optimisation Algorithms PRESENTER: Mikolas Janota ABSTRACT. In several real-world problems, the goal is often to optimise several objective functions. However, there is usually not a single optimal objective vector. Instead, there are many optimal objective vectors, known as Pareto-optima. Finding all Pareto-optima is computationally expensive, and the number of Pareto-optima can be too large for a user to analyse. A compromise can be made by defining an optimisation criterion that integrates all objective functions. In this paper we propose several SAT-based algorithms to solve multi-objective optimisation problems using the leximax criterion. The leximax criterion is used to obtain a Pareto-optimal solution with a small trade-off between the objective functions, which is suitable for problems where there is an absence of priorities between the objective functions. Experimental results on the Multi-Objective Package Upgradeability Optimisation problem show that the SAT-based algorithms are able to outperform the Integer Linear Programming (ILP) approach when using non-commercial ILP solvers. Additionally, experimental results on selected instances from the MaxSAT evaluation, adapted to the multi-objective domain, show that our approach outperforms the ILP approach using commercial solvers. 
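The leximax criterion mentioned above compares objective vectors (to be minimised) by their values sorted in non-increasing order: it minimises the worst objective first, then the second worst, and so on. A tiny Python sketch of the comparison itself (the candidate vectors are invented; the paper's contribution is doing this with SAT solvers rather than by enumeration):

```python
def leximax_key(vector):
    """Sort objective values in non-increasing order; comparing these
    keys lexicographically minimises the largest objective first, then
    the second largest, and so on."""
    return sorted(vector, reverse=True)

def leximax_optimum(candidates):
    """Pick the leximax-best vector among explicitly given candidates."""
    return min(candidates, key=leximax_key)

# Hypothetical Pareto-optimal objective vectors (all to be minimised).
pareto = [(9, 1, 1), (5, 5, 1), (4, 4, 4)]
print(leximax_optimum(pareto))  # (4, 4, 4): its worst objective is the smallest
```

Note that (4, 4, 4) wins even though its total is the largest: leximax trades sum for balance, which is exactly the "absence of priorities" setting the abstract describes.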
15:00  PRESENTER: Fabrizio Riguzzi ABSTRACT. In recent years, the Hyperparameter Optimization (HPO) research field has gained more and more attention. Many works have focused on finding the best combination of a network's hyperparameters (HPs) or architecture. The state-of-the-art algorithm for HPO is Bayesian Optimization (BO). This is because it keeps track of past results obtained during the optimization and uses this experience to build a probabilistic model mapping HPs to a probability density of the objective function. BO builds a surrogate probabilistic model of the objective function, finds the HP values that perform best on the surrogate model, and updates it with new results. In this work, a system called Symbolic DNN-Tuner was developed, which logically evaluates the results obtained from the training and validation phases and, by applying symbolic tuning rules, fixes the network architecture and its HPs, thereby improving performance. Symbolic DNN-Tuner improves BO applied to Deep Neural Networks (DNNs) by adding an analysis of the results of the network on the training and validation sets. This analysis is performed by exploiting rule-based programming, and in particular by using Probabilistic Logic Programming (PLP). 
15:15  Graph-based Interpretation of Normal Logic Programs PRESENTER: Gopal Gupta ABSTRACT. In this paper we present a dependency graph-based method for computing the various semantics of normal logic programs. Our method employs conjunction nodes to unambiguously represent the dependency graph of normal logic programs. The dependency graph can be transformed suitably in a semantics-preserving manner and re-translated into an equivalent normal logic program. This transformed normal logic program can be augmented with a few rules written in answer set programming (ASP), and the CLINGO system then used to compute its answer sets. Depending on how these additional rules are coded in ASP, one can compute models of the original normal logic program under the well-founded semantics, the stable model semantics, or the co-stable model semantics. In each case, a justification for each atom in the model is also generated. We report on the implementation of our method as well as its performance evaluation. 
16:00  PRESENTER: Simon de Givry ABSTRACT. While processor frequency has stagnated for two decades, the number of available cores in servers or clusters is still growing, offering the opportunity for significant speedups in combinatorial optimization. Parallelization of exact methods remains a difficult challenge. We revisit the concept of parallel branch-and-bound in the framework of weighted constraint satisfaction problems. We show how to adapt an anytime hybrid best-first search algorithm to a master/worker protocol. The resulting parallel algorithm achieves good load balancing without introducing new parameters to be tuned, as in the embarrassingly parallel search (EPS) approach. It also has a small overhead due to its lightweight communication messages. We performed an experimental evaluation on several benchmarks, comparing our parallel algorithm to its sequential version. We observed linear speedups in some cases. Our approach compared favourably to the EPS approach and also to a state-of-the-art parallel exact integer programming solver. 
16:30  Sequence Variables for Routing Problems PRESENTER: Augustin Delecluse ABSTRACT. Constraint Programming (CP) is one of the most flexible approaches for modeling and solving vehicle routing problems (VRPs). This paper proposes the sequence variable domain, which is inspired by the insertion graph and the subset-bound domain for set variables. This domain representation, which targets VRP applications, allows for an efficient insertion-based search on a partial tour and the implementation of simple yet efficient filtering algorithms for constraints that enforce time windows on the visits and capacities on the vehicles. Experimental results demonstrate the efficiency and flexibility of this CP domain for solving some hard VRPs, including the Dial-A-Ride problem, the Patient Transportation problem, and the asymmetric TSP with time windows. 
17:00  PRESENTER: Anthony Karahalios ABSTRACT. We present an exact algorithm for graph coloring and maximum clique problems based on SAT technology. It relies on four sub-algorithms that alternately compute cliques of larger size and colorings with fewer colors. We show how these techniques can mutually help each other: larger cliques facilitate finding smaller colorings, which in turn can boost finding larger cliques. We evaluate our approach on the DIMACS graph coloring suite, and first show that our algorithm can improve the state-of-the-art MaxSAT-based solver IncMaxCLQ for finding maximum cliques. For the graph coloring problem, we close two open instances, decrease two upper bounds, and increase one lower bound. 
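The clique/coloring interplay above rests on a basic sandwich: any clique size is a lower bound on the chromatic number, and any proper coloring's colour count is an upper bound. A greedy Python sketch of both bounds (the graph is a toy example; the paper's sub-algorithms are exact and SAT-based, not greedy):

```python
def greedy_coloring(adj):
    """Give each vertex the smallest colour unused by its neighbours;
    the number of colours used upper-bounds the chromatic number."""
    colour = {}
    for v in adj:
        taken = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(c for c in range(len(adj)) if c not in taken)
    return colour

def greedy_clique(adj):
    """Grow a clique greedily, highest-degree vertices first; its size
    lower-bounds the chromatic number."""
    clique = []
    for v in sorted(adj, key=lambda v: len(adj[v]), reverse=True):
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return clique

# Toy graph: a 4-cycle plus the chord {0, 2}, so it contains triangle {0, 1, 2}.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
lo = len(greedy_clique(adj))                    # clique lower bound
hi = len(set(greedy_coloring(adj).values()))    # coloring upper bound
print(lo, hi)  # 3 3: the bounds meet, so the chromatic number is exactly 3
```

When the two bounds meet, as here, optimality is proved; when they do not, each side tells the other where to keep searching, which is the mutual help the abstract describes.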
16:00-17:00: Tutorial
17:00-17:30: Machine Learning
16:00  Building a Constraint Programming Solver Guided by Machine Learning: Opportunities and Challenges PRESENTER: Louis-Martin Rousseau ABSTRACT. In recent years, deep reinforcement learning (DRL) has shown its promise for designing good heuristics dedicated to solving NP-hard combinatorial optimization problems. Many recent papers have proposed to use a learned heuristic directly to solve the problem in an end-to-end fashion. However, such an approach has two shortcomings: (1) it only provides an approximate solution, with no systematic way to improve it or to prove optimality; (2) it is often limited to standard academic problems (e.g., the travelling salesman problem) and has difficulty handling the complex combinatorial constraints present in realistic problems. Constraint programming (CP), as we know, is a generic tool for solving combinatorial optimization problems. Given enough time, its complete search procedure will always find the optimal solution. However, a critical design choice that makes CP non-trivial to use in practice is the branching strategy that controls how the space is explored. In this tutorial, we will show how this strategy can be delegated to a DRL agent. First, we will present an initial proof of concept, showing that a CP solver can successfully leverage a learned heuristic. However, the implementation of this prototype suffers from severe inefficiency issues regarding execution time. The main reason is that existing CP solvers were not designed to handle learning components in their core engines. This brings the motivation to build a new CP solver with direct support for learned heuristics. We will then present SeaPearl, a new open-source CP solver in Julia that has been designed for this purpose, together with the design choices we made. We will also highlight the different challenges that we are currently encountering and present preliminary results. 
17:00  Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints PRESENTER: Daphné Lafleur ABSTRACT. While Machine Learning (ML) techniques are good at generating data similar to a dataset, they lack the capacity to enforce constraints. On the other hand, any solution to a Constraint Programming (CP) model satisfies its constraints but has no obligation to imitate a dataset. Yet, we sometimes need both. In this paper we borrow RL-Tuner, a Reinforcement Learning (RL) algorithm introduced to tune neural networks, as our enabling architecture to exploit the respective strengths of ML and CP. RL-Tuner maximizes the sum of a pretrained network's learned probabilities and of manually tuned penalties for each violated constraint. We replace the latter with outputs of a CP model representing the marginal probabilities of each note and the number of constraint violations. As was the case for the original RL-Tuner, we apply our algorithm to music generation, since it is a highly constrained domain for which CP is especially suited. We show that combining ML and CP, as opposed to using them individually, allows the agent to reflect the pretrained network while taking constraints into account, leading to melodic lines that respect both the corpus's style and the music theory constraints. 
16:00  PRESENTER: Christophe Ringeissen ABSTRACT. Matching algorithms are often central subroutines in many areas of automated reasoning. They are used in areas such as functional programming, rule-based programming, automated theorem proving, and the symbolic analysis of security protocols. Matching is related to unification but is a somewhat simpler problem. Thus, in some cases, we can obtain a matching algorithm even if the unification problem is undecidable. In this paper we consider a hierarchical approach to constructing matching algorithms. The hierarchical method has been successful for developing unification algorithms for theories defined over a constructor subtheory. We show how the approach can be extended to matching problems, which allows for the development, in a modular way, of hierarchical matching algorithms. Here we focus on regular theories, where both sides of each equational axiom have the same set of variables. We show that the combination of two hierarchical matching algorithms leads to a hierarchical matching algorithm for the union of regular theories sharing only a common constructor subtheory. 
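Matching, as described above, is unification restricted to one side: only the pattern's variables may be instantiated, and the target term is fixed. A minimal syntactic (empty-theory) matcher in Python, with terms encoded as nested tuples and variables as strings starting with "?" (this encoding is my own convention for illustration; the paper works modulo equational theories, which is much harder):

```python
def match(pattern, term, subst=None):
    """Return a substitution making `pattern` syntactically equal to the
    ground `term`, or None if none exists.  Variables are strings
    starting with '?'; compound terms are tuples (functor, *args)."""
    if subst is None:
        subst = {}
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in subst:                      # variable already bound:
            return subst if subst[pattern] == term else None
        subst[pattern] = term                     # bind it now
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        if len(pattern) != len(term) or pattern[0] != term[0]:
            return None                           # functor/arity clash
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None     # constants must coincide

# f(?x, g(?x)) matches f(a, g(a)) with ?x -> a, but not f(a, g(b)).
print(match(("f", "?x", ("g", "?x")), ("f", "a", ("g", "a"))))  # {'?x': 'a'}
print(match(("f", "?x", ("g", "?x")), ("f", "a", ("g", "b"))))  # None
```

The second call fails because the repeated variable ?x would need two different values; this consistency check is what distinguishes matching from mere structural comparison.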
16:30  PRESENTER: Manfred Schmidt-Schauss ABSTRACT. Anti-unification is the task of generalizing a set of expressions in the most specific way. It was extended to the nominal framework by Baumgartner, Kutsia, Levy and Villaret, who defined an algorithm solving the nominal anti-unification problem, which runs in polynomial time. Unfortunately, the type of the set of solutions in nominal anti-unification (with explicit atoms) is infinitary, since every solution set can be strictly refined. In this paper, we present a more general approach to nominal anti-unification that uses atom-variables instead of explicit atoms, and two variants of freshness constraints: NLA-constraints (with atom-variables), and EQR-constraints based on equivalence relations on atom-variables. The idea of atom-variables is that different atom-variables may be instantiated with identical or different atoms. Albeit simple, this freedom in the formulation increases its application potential: we provide an algorithm that is finitary for NLA-freshness constraints, and for EQR-freshness constraints it computes a unique least general generalization. There is a price to pay in the general case: checking freshness constraints and other related logical questions will require exponential time. The setting of Baumgartner et al. is improved by the atom-only case, which runs in polynomial time and computes a unique least general generalization. 
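For readers new to the topic, plain first-order anti-unification (without the nominal features above) already shows the core idea: keep the structure two terms agree on, and replace each disagreement by a fresh variable, reusing the same variable for repeated disagreement pairs so the result is least general. A Plotkin-style sketch in Python over tuple-encoded terms (this is the classical first-order case only, not the paper's nominal setting with atom-variables):

```python
def lgg(s, t, pairs=None, counter=None):
    """Plotkin-style least general generalisation of two first-order
    terms.  The same disagreement pair always maps to the same fresh
    variable, which keeps the result least general."""
    if pairs is None:
        pairs, counter = {}, [0]
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same functor and arity: generalise argument-wise.
        return (s[0],) + tuple(lgg(a, b, pairs, counter)
                               for a, b in zip(s[1:], t[1:]))
    if s == t:
        return s
    if (s, t) not in pairs:            # one fresh variable per disagreement pair
        pairs[(s, t)] = "?x%d" % counter[0]
        counter[0] += 1
    return pairs[(s, t)]

# lgg(f(a, a), f(b, b)) = f(?x0, ?x0): the repeated pair (a, b) shares a variable.
print(lgg(("f", "a", "a"), ("f", "b", "b")))  # ('f', '?x0', '?x0')
```

Returning f(?x0, ?x1) here would also be a generalisation, but a strictly more general one; the pair-sharing table is what makes the result least general and, in the first-order case, unique up to variable renaming.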
17:00  PRESENTER: Daniele Nantes-Sobrinho ABSTRACT. Implementing unification modulo Associativity and Commutativity (AC) axioms is crucial in rewrite-based programming and theorem provers. We modify Stickel's seminal AC-unification algorithm to avoid mutual recursion and formalise it in the PVS proof assistant. More precisely, we prove the adjusted algorithm's termination, soundness, and completeness. To do this, we adapted Fages' termination proof, providing a unique elaborated measure that guarantees termination of the modified AC-unification algorithm. This development (to the best of our knowledge) provides the first fully formalised AC-unification algorithm. 
16:00  PRESENTER: David Geleßus ABSTRACT. Even though the core of the Prolog programming language has been standardized by ISO since 1995, it remains difficult to write complex Prolog programs that can run unmodified on multiple Prolog implementations. Indeed, implementations sometimes deviate from the ISO standard, and the standard itself fails to cover many features that are essential in practice. Most Prolog applications thus have to rely on non-standard features, often making them dependent on one particular Prolog implementation and incompatible with others. We examine one such Prolog application: ProB, which has been developed for over 20 years in SICStus Prolog. The article describes how we managed to refactor the codebase of ProB to also support SWI-Prolog, with the goal of verifying ProB's results using two independent toolchains. This required a multitude of adjustments, ranging from extending the SICStus emulation in SWI-Prolog to better modularizing the monolithic ProB codebase. We also describe notable compatibility issues and other differences that we encountered in the process, and how we were able to deal with them with few major code changes. 
16:22  Building Information Modeling using Constraint Logic Programming PRESENTER: Joaquin Arias ABSTRACT. Building Information Modeling (BIM) produces three-dimensional object-oriented models of buildings, combining geometrical information with a wide range of properties about materials, products, safety, and so on. BIM is slowly but inevitably revolutionizing the architecture, engineering, and construction (AEC) industry. Buildings need to be compliant with regulations about stability, safety, and environmental impact. Manual compliance checking is tedious and error-prone, and amending flaws discovered only at construction time causes huge additional costs and delays. Several tools can check BIM models for conformance with rules/guidelines. For example, Singapore's CORENET e-Submission System checks fire safety. But since the current BIM exchange format only contains basic information about building objects, a separate, ad-hoc model preprocessing is required to determine, e.g., evacuation routes. Moreover, such tools face difficulties in adapting existing built-in rules and/or adding new ones (to cater for building regulations, which can vary not only among countries but also among parts of the same city), if that is possible at all. We propose the use of logic-based executable formalisms (CLP and Constraint ASP) to couple BIM models with advanced knowledge representation and reasoning capabilities. Previous experience shows that such formalisms can be used to uniformly capture and reason with knowledge (including ambiguity) in a large variety of domains. Additionally, incorporating checking within design tools makes it possible to ensure that models are rule-compliant at every step. This also prevents erroneous designs from having to be (partially) redone, which is also costly and burdensome. To validate our proposal, we implemented a preliminary reasoner under CLP(Q/R) and ASP with constraints, and evaluated it with several BIM models. 
16:44  ABSTRACT. Graph Neural Networks share with Logic Programming several key relational inference mechanisms. The datasets on which they are trained and evaluated can be seen as database facts containing ground terms. This makes it possible to model their inference mechanisms with equivalent logic programs, to better understand not just how they propagate information between the entities involved in the machine learning process, but also to infer limits on what can be learned from a given dataset and how well that might generalize to unseen test data. This leads us to the key idea of this paper: modeling, with the help of a logic program, the information flows involved in learning to infer properties of new nodes from the link structure of a graph and the information content of its nodes, given their known connections to nodes with possibly similar properties. The problem is known as graph node property prediction, and our approach consists in emulating, with the help of a Prolog program, the key information propagation steps of a Graph Neural Network's training and inference stages. We test our approach on the ogbn-arxiv node property inference benchmark. To infer class labels for nodes representing papers in a citation network, we distill the dependency trees of the text associated with each node into directed acyclic graphs that we encode as ground Prolog terms. Together with the set of their references to other papers, they become facts in a database on which we reason with the help of a Prolog program that mimics the information propagation in graph neural networks predicting node properties. In the process, we invent ground term similarity relations that help infer labels in the test set by propagating node properties from similar nodes in the training set, and we evaluate their effectiveness in comparison with that of the graph's link structure. Finally, we implement explanation generators that unveil performance upper bounds inherent to the dataset. 
As a practical outcome, we obtain a logic program that, when seen as a machine learning algorithm, performs close to the state of the art on the node property prediction benchmark. 
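The propagation idea this abstract describes can be sketched in miniature (a plain Python stand-in for the paper's Prolog program; node names and class labels are invented): a test node takes the majority label of its already-labelled neighbours in the citation graph, and labels spread over a few rounds.

```python
from collections import Counter

def propagate_labels(edges, known_labels, rounds=2):
    """Majority-vote label propagation over an undirected citation graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    labels = dict(known_labels)
    for _ in range(rounds):
        updates = {}
        for node in adj:
            if node in known_labels:
                continue  # training labels stay fixed
            votes = Counter(labels[n] for n in adj[node] if n in labels)
            if votes:
                updates[node] = votes.most_common(1)[0][0]
        labels.update(updates)
    return labels

# Toy citation network: p3 is unlabelled, its neighbours vote on its class.
edges = [("p1", "p2"), ("p2", "p3"), ("p3", "p4"), ("p1", "p3")]
train = {"p1": "cs.LO", "p2": "cs.LO", "p4": "cs.AI"}
print(propagate_labels(edges, train)["p3"])  # → cs.LO (two votes to one)
```

The paper goes further by also propagating over invented ground-term similarity relations, not just citation links, but the round-by-round flow of node properties is the same shape as above.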
17:06  ABSTRACT. With the help of a compact Prolog-based theorem prover for Intuitionistic Propositional Logic, we synthesize minimal assumptions under which a given formula becomes a theorem. After applying our synthesis algorithm to cover basic abductive reasoning mechanisms, we synthesize conjunctions of literals that mimic rows of truth tables in classical or intermediate logics, and we abduce conditional hypotheses that turn the theorems of classical or intermediate logics into theorems in intuitionistic logic. One step further, we generalize our abductive reasoning mechanism to synthesize more expressive sequent premises using a minimal set of canonical formulas, to which arbitrary formulas in the calculus can be reduced while preserving their provability. Organized as a self-contained literate Prolog program, the paper supports interactive exploration of its content and ensures full replicability of our results. 
16:00  On the Expressive Power of Intermediate and Conditional Effects in Temporal Planning PRESENTER: Nicola Gigante ABSTRACT. Automated planning is the task of finding a sequence of actions that reaches a desired goal, given a description of the actions' applicability and their effects on the world. In temporal planning, actions have a duration and can overlap in time. In modern temporal planning formalisms, two features have been introduced which are very useful from a modeling perspective but not yet thoroughly understood: intermediate conditions and effects (ICE) and conditional effects. The expressive power of these constructs is not yet well understood, especially when time is dense and no minimum separation is required between mutex events. This paper reveals that neither ICE nor conditional effects add expressive power with respect to common temporal planning formalisms. In particular, we show how they can be compiled away using a polynomial-size encoding that makes no assumptions on the time domain. This encoding advances our understanding of these features and allows their use with simple temporal planners that lack support for them. Moreover, it provides a constructive proof that temporal planning with ICE and conditional effects remains PSPACE-complete. 
16:30  Unique Characterisability and Learnability of Temporal Instance Queries PRESENTER: Yury Savateev ABSTRACT. We aim to determine which temporal instance queries can be uniquely characterised by a (polynomial-size) set of positive and negative temporal data examples. We start by considering queries formulated in fragments of propositional linear temporal logic LTL that correspond to conjunctive queries (CQs) or extensions thereof induced by the until operator. Not all of these queries admit polynomial characterisations, but by imposing a further restriction to path-shaped queries we identify natural classes that do. We then investigate how far the obtained characterisations can be lifted to temporal knowledge graphs queried by 2D languages combining LTL with concepts in the description logics EL or ELI (i.e., tree-shaped CQs). While temporal operators in the scope of description logic constructors can destroy polynomial characterisability, we obtain general transfer results for the case when description logic constructors are within the scope of temporal operators. We finally apply our characterisations to establish polynomial learnability of temporal instance queries using membership queries in the active learning framework. 
17:00  A Gödel calculus for Linear Temporal Logic PRESENTER: David Fernández-Duque ABSTRACT. We consider GTL, a variant of linear temporal logic based on Gödel-Dummett propositional logic. In recent work, we have shown this logic to enjoy natural semantics both as a fuzzy logic and as a superintuitionistic logic. Using semantical methods, the logic was shown to be PSPACE-complete. In this paper we provide a deductive calculus for GTL, and show this calculus to be sound and complete for the above-mentioned semantics. 
16:00  PRESENTER: Quentin Manière ABSTRACT. While ontology-mediated query answering most often adopts (unions of) conjunctive queries as the query language, some recent works have explored the use of counting queries coupled with DL-Lite ontologies. The aim of the present paper is to extend the study of counting queries to Horn description logics outside the DL-Lite family. Through a combination of novel techniques, adaptations of existing constructions, and new connections to closed predicates, we achieve a complete picture of the data and combined complexity of answering counting conjunctive queries (CCQs) and cardinality queries (a restricted class of CCQs) in ELHI⊥ and its various sublogics. Notably, we show that CCQ answering is 2EXP-complete in combined complexity for ELHI⊥ and every sublogic that extends EL or DL-Lite_pos^H. Our study not only provides the first results for counting queries beyond DL-Lite, but it also closes some open questions about the combined complexity of CCQ answering in DL-Lite. 
16:30  PRESENTER: Lukas Schulze ABSTRACT. We study the evaluation of ontology-mediated queries (OMQs) on databases of bounded clique-width from the viewpoint of parameterized complexity theory, using as the parameter the size of the OMQ (plus the clique-width, in upper complexity bounds, for increased generality). As the ontology language, we use the description logics ALC and ALCI and the guarded two-variable fragment GF2 of first-order logic. Queries are atomic queries (AQs), conjunctive queries (CQs), and unions of CQs. All resulting OMQ problems are fixed-parameter linear (FPL), and we provide a careful analysis of the dependence of the running time on the parameter, exhibiting several interesting effects. For instance, GF2 with AQs requires double exponential running time unless the exponential time hypothesis (ETH) fails, while OMQ evaluation on unrestricted databases is only ExpTime-complete in this setting. For ALCI, in contrast, single exponential running time suffices. Interestingly, this is due to the lower succinctness of ALCI rather than to its lower expressive power. 
17:00  Finite Entailment of UCRPQs over ALC Ontologies PRESENTER: Albert Gutowski ABSTRACT. We investigate the problem of finite entailment of ontology-mediated queries. We consider an expressive query language, unions of conjunctive regular path queries (UCRPQs), which extends the well-known class of unions of conjunctive queries with regular expressions over roles. We look at ontologies formulated using the description logic ALC, and show a tight 2ExpTime upper bound for entailment of UCRPQs. At the core of our decision procedure is a novel automata-based technique introducing a stratification of interpretations induced by the deterministic finite automaton underlying the input UCRPQ. 
"Type and Category Theory": 6 papers (12 min presentation + 2-3 min Q&A)
16:00  Semantics for two-dimensional type theory PRESENTER: Benedikt Ahrens ABSTRACT. In this work, we propose a general notion of model for two-dimensional type theory, in the form of comprehension bicategories. Examples of comprehension bicategories are plentiful; they include interpretations of directed type theory previously studied in the literature. From comprehension bicategories, we extract a core syntax, that is, judgement forms and structural inference rules, for a two-dimensional type theory. We prove soundness of the rules by giving an interpretation in any comprehension bicategory. The semantic aspects of our work are fully checked in the Coq proof assistant, based on the UniMath library. This work is the first step towards a theory of syntax and semantics for higher-dimensional directed type theory. 
16:15  ABSTRACT. We prove normalization for MTT, a general multimodal dependent type theory capable of expressing modal type theories for guarded recursion, internalized parametricity, and various other prototypical modal situations. We prove that deciding type checking and conversion in MTT can be reduced to deciding the equality of modalities in the underlying modal situation, immediately yielding a type checking algorithm for all instantiations of MTT in the literature. This proof follows from a generalization of synthetic Tait computability—an abstract approach to gluing proofs—to account for modalities. This extension is based on MTT itself, so that this proof also constitutes a significant case study of MTT. 
16:30  Zigzag normalisation for associative n-categories PRESENTER: Lukas Heidemann ABSTRACT. The theory of associative n-categories has recently been proposed as a strictly associative and unital approach to higher category theory. As a foundation for a proof assistant, this is attractive, since it has the potential to allow simple formal proofs of complex high-dimensional algebraic phenomena. However, the theory relies on an implicit term normalisation procedure to recognize correct composites, with no recursive method available for computing it. Here we describe a new approach to term normalisation in associative n-categories, based on the categorical zigzag construction. This radically simplifies the theory and yields a recursive algorithm for normalisation, which we prove correct. Our use of categorical lifting properties allows us to give efficient proofs of our results. Our normalisation algorithm forms a core component of a proof assistant, and we illustrate our scheme with worked examples. 
16:45  Syllepsis in Homotopy Type Theory PRESENTER: G. A. Kavvos ABSTRACT. The Eckmann-Hilton argument shows that any two monoid structures on the same set satisfying the interchange law are in fact the same operation, which is moreover commutative. When the monoids correspond to the vertical and horizontal composition of a sufficiently higher-dimensional category, the Eckmann-Hilton argument itself appears as a higher cell. This cell is often required to satisfy an additional piece of coherence, which is known as the syllepsis. We show that the syllepsis can be constructed from the elimination rule of intensional identity types in Martin-Löf type theory. 
17:00  Greatest HITs: Higher inductive types in coinductive definitions via induction under clocks PRESENTER: Rasmus Ejlers Møgelberg ABSTRACT. We present Clocked Cubical Type Theory, the first type theory combining multi-clocked guarded recursion with the features of Cubical Type Theory. Guarded recursion is an abstract form of step-indexing, which can be used for the construction of advanced programming language models. In its multi-clocked version, it can also be used for coinductive programming and reasoning, encoding productivity in types. Combined with Higher Inductive Types (HITs), this extends to coinductive types that are traditionally hard to represent in type theory, such as the type of finitely branching labelled transition systems. Among our technical contributions is a new principle of induction under clocks, providing computational content to one of the main axioms required for encoding coinductive types. This principle is verified using a denotational semantics in a presheaf model. 
17:15  PRESENTER: Alex Rice ABSTRACT. We use type-theoretic techniques to present an algebraic theory of ∞-categories with strict units. Starting with a known type-theoretic presentation of fully weak ∞-categories, in which terms denote valid operations, we extend the theory with a non-trivial definitional equality. This forces some operations to coincide strictly in any model, yielding the strict unit behaviour. We make a detailed investigation of the metatheoretic properties of this theory. We give a reduction relation that generates definitional equality, and prove that it is confluent and terminating, thus yielding the first decision procedure for equality in a strictly unital setting. Moreover, we show that our definitional equality relation identifies all terms in a disc context, providing a point of comparison with a previously proposed definition of strictly unital ∞-category. We also prove a conservativity result, showing that every operation of the strictly unital theory indeed arises from a valid operation in the fully weak theory. From this, we infer that strict unitality is a property of an ∞-category rather than additional structure. 
16:00  PRESENTER: Christoph Jabs ABSTRACT. We explore a MaxSAT-based approach to bi-objective optimization. Bi-objective optimization refers to the task of finding so-called Pareto-optimal solutions in terms of two objective functions. Bi-objective optimization problems naturally arise in various real-world settings. For example, in the context of learning interpretable representations, such as decision rules, from data, one wishes to balance between two objectives, the classification error and the size of the representation. Our approach is generally applicable to bi-objective optimization problems that allow for propositional encodings. The approach makes heavy use of incremental SAT and draws inspiration from modern MaxSAT solving approaches. In particular, we describe several variants of the approach which arise from different approaches to MaxSAT solving. In addition to computing a single representative solution for each point of the Pareto front, the approach allows for enumerating all Pareto-optimal solutions. We empirically compare the efficiency of the approach to recent competing approaches, showing the practical benefits of our approach in the contexts of learning interpretable classification rules and bi-objective set covering. 
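For readers unfamiliar with the bi-objective setting, the notion of Pareto-optimality this abstract relies on can be sketched with a naive Python filter over made-up objective pairs (not the paper's MaxSAT-based procedure): a solution is Pareto-optimal if no other solution is at least as good on both objectives and strictly better on one.

```python
def pareto_front(solutions):
    """solutions: list of (obj1, obj2) pairs to minimise; returns the front."""
    front = []
    for s in solutions:
        dominated = any(
            o[0] <= s[0] and o[1] <= s[1] and o != s for o in solutions
        )
        if not dominated:
            front.append(s)
    return sorted(set(front))

# Invented data, e.g. trading classification error against rule-set size.
candidates = [(3, 10), (5, 4), (4, 7), (6, 4), (3, 12), (7, 2)]
print(pareto_front(candidates))  # → [(3, 10), (4, 7), (5, 4), (7, 2)]
```

A MaxSAT-based approach replaces this explicit enumeration with propositional encodings of the two objectives, so that each point of the front is found by incremental SAT calls rather than by listing all candidate solutions.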
16:30  PRESENTER: Andreas Niskanen ABSTRACT. Boolean satisfiability (SAT) solvers allow for incremental computations, which is key to the efficient iterative employment of SAT solvers in developing complex decision and optimization procedures, including maximum satisfiability (MaxSAT) solvers. However, enabling incremental computations at the level of constraint optimization remains a noticeable challenge. While incremental computations have been identified to have great potential in speeding up MaxSAT-based approaches for solving various real-world optimization problems, enabling incremental computations in MaxSAT remains largely unexplored. In this work, we contribute towards making incremental MaxSAT solving a reality. Firstly, building on the IPASIR interface for incremental SAT solving, we propose the IPAMIR interface for implementing incremental MaxSAT solvers and for developing applications making use of incremental MaxSAT. Secondly, we expand our recent adaptation of the implicit-hitting-set-based MaxHS MaxSAT solver to a fully fledged incremental MaxSAT solver by implementing the IPAMIR specification in full, and detail in particular how, in addition to weight changes, assumptions are enabled without losing incrementality. Thirdly, we provide further empirical evidence on the benefits of incremental MaxSAT solving under assumptions. 
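To illustrate the workflow an incremental MaxSAT interface enables (persistent hard clauses and weighted soft literals, plus per-call assumptions), here is a deliberately naive brute-force stand-in. It mirrors the spirit of such an interface but is not the IPAMIR API; class and method names are invented for this sketch.

```python
from itertools import product

class ToyIncrementalMaxSAT:
    """Brute-force toy: state persists across solve() calls, as in
    incremental solving; assumptions hold for a single call only."""

    def __init__(self):
        self.hard, self.soft, self.vars = [], {}, set()

    def add_hard(self, clause):           # clause: list of DIMACS-style ints
        self.hard.append(clause)
        self.vars |= {abs(l) for l in clause}

    def add_soft_lit(self, lit, weight):  # pay `weight` if `lit` is falsified
        self.soft[lit] = weight
        self.vars.add(abs(lit))

    def solve(self, assumptions=()):
        best = None
        for bits in product([False, True], repeat=len(self.vars)):
            model = dict(zip(sorted(self.vars), bits))
            sat = lambda l: model[abs(l)] == (l > 0)
            if all(any(sat(l) for l in c) for c in self.hard) and \
               all(sat(l) for l in assumptions):
                cost = sum(w for l, w in self.soft.items() if not sat(l))
                if best is None or cost < best:
                    best = cost
        return best  # None if unsatisfiable under the assumptions

s = ToyIncrementalMaxSAT()
s.add_hard([1, 2])        # x1 or x2
s.add_soft_lit(-1, 3)     # prefer x1 false (cost 3 otherwise)
s.add_soft_lit(-2, 1)     # prefer x2 false (cost 1 otherwise)
print(s.solve())                  # → 1 (set x2 true, x1 false)
print(s.solve(assumptions=[1]))   # → 3 (x1 forced true for this call only)
```

A real incremental solver keeps learned clauses and internal state between calls instead of re-solving from scratch, which is exactly what makes assumption handling without losing incrementality a non-trivial contribution.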
17:00  Analysis of Core-Guided MaxSAT Using Cores and Correction Sets PRESENTER: Nina Narodytska ABSTRACT. Core-guided solvers are among the best-performing algorithms for solving maximum satisfiability problems. These solvers perform a sequence of relaxations of the formula to increase the lower bound on the optimal solution at each relaxation step. In addition, the relaxations allow generating a large set of minimal cores (MUSes) of the original formula. However, the properties of these cores have not been investigated. In contrast, maximum hitting set based (MaxHS) solvers also extract a set of cores that possess a set of properties, e.g., the size of the minimal hitting set of the discovered cores equals the optimum when a MaxHS solver terminates. In this work, we analyze minimal cores and minimum correction sets (MinCSes) of the input formula and its subformulas that core-guided solvers produce during execution. We demonstrate that the set of MUSes that a core-guided algorithm discovers possesses the same key properties as the cores extracted by MaxHS solvers. For instance, we prove that the size of a minimum hitting set of the discovered cores equals the optimal cost. We also show that a core-guided algorithm discovers all MinCSes of special subformulas of the input formula. We discuss theoretical and practical implications of our results. 
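The hitting-set property discussed in this abstract can be made concrete with a tiny sketch (brute-force Python over invented cores, not a MaxSAT solver): a hitting set must intersect every discovered core, and in implicit-hitting-set solving the cost of a minimum such set matches the optimum at termination.

```python
from itertools import combinations

def minimum_hitting_set(cores):
    """Smallest set of elements intersecting every core (brute force)."""
    universe = sorted(set().union(*cores))
    for size in range(len(universe) + 1):
        for candidate in combinations(universe, size):
            if all(set(candidate) & core for core in cores):
                return set(candidate)
    return set(universe)

# Invented cores over soft-clause indices; {2, 4} hits all three of them.
cores = [{1, 2}, {2, 3}, {4}]
print(sorted(minimum_hitting_set(cores)))  # → [2, 4]
```

With unit weights, a minimum hitting set of size 2 here would certify an optimal cost of 2, which is the kind of guarantee the paper proves also holds for the cores that core-guided solvers discover.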