09:00  PRESENTER: Laurent Michel ABSTRACT. Multivalued decision diagrams (MDDs) have recently emerged as an effective technology for solving combinatorial optimization problems. In this tutorial, we focus on the use of decision diagrams in the context of constraint programming. It consists of two parts: 1) an overview of MDD-based constraint programming, and 2) a description and demonstration of the Haddock system to specify and compile MDD propagators within a constraint programming solver. MDD-based constraint propagation was first introduced by Andersen et al. in 2007. They introduced the concept of relaxed decision diagrams as a mechanism to enhance the propagation of information between constraints: instead of using a domain store, they proposed using an ‘MDD store’ to represent and communicate more structural relationships between variables. As a consequence, the search tree size can be dramatically reduced. Since then, many papers have further studied the use of decision diagrams in constraint programming, as well as in other related areas such as mathematical programming. The first part of this tutorial will describe the core concepts of MDD-based constraint propagation and provide examples on global constraints such as AllDifferent, Among, and Disjunctive. In the second part, we will describe the Haddock system. It provides a specification language and implementation architecture for automatic decision diagram compilation. Haddock provides the rules for refining (splitting) and filtering (propagating) MDD abstractions. In addition, the language allows the user to specify heuristics to guide the compilation process. Haddock allows the user to declare multiple MDDs within a CP model, each associated with a suitable set of constraints, and automatically integrates the MDD propagators into the constraint propagation fixpoint computation.
We will give an overview of the Haddock system, provide examples on constraints such as Among, Gcc, and (weighted) Sum, and describe example applications. Participants are invited to download the Haddock code to work on hands-on examples during the tutorial.
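As a concrete illustration of the MDD ideas the tutorial covers, here is a minimal Python sketch (not Haddock, and simplified far beyond any real propagator) that builds an exact layered MDD for an Among constraint, merging nodes with equal state so each layer stays small:

```python
# Minimal sketch (not Haddock itself): an exact layered MDD for an Among
# constraint -- "between lb and ub of the variables take a value in S".
# Each layer-k node is labelled by the count of S-values seen so far;
# merging nodes with equal counts keeps the diagram polynomial in size.

def among_mdd(domains, S, lb, ub):
    """Return one {state: {value: next_state}} transition map per variable."""
    n = len(domains)
    layers = []
    states = {0}                      # root: no S-values consumed yet
    for k, dom in enumerate(domains):
        remaining = n - k - 1
        layer, nxt = {}, set()
        for s in states:
            for v in dom:
                s2 = s + (1 if v in S else 0)
                # prune arcs that can no longer reach a count in [lb, ub]
                if s2 > ub or s2 + remaining < lb:
                    continue
                layer.setdefault(s, {})[v] = s2
                nxt.add(s2)
        layers.append(layer)
        states = nxt
    return layers

# Three 0/1 variables, at least 1 and at most 2 of them equal to 1:
layers = among_mdd([[0, 1]] * 3, S={1}, lb=1, ub=2)
print(sorted(layers[1].keys()))   # [0, 1]
```

A relaxed MDD would additionally cap the width of each layer by merging states even when they differ; deciding when and how to split such merged nodes again is exactly the refinement process that Haddock's specification language controls.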

09:00  On Quantitative Testing of Samplers PRESENTER: Mate Soos ABSTRACT. The problem of uniform sampling is, given a formula F, to sample solutions of F uniformly at random from the solution space of F. Uniform sampling is a fundamental problem with widespread applications, including configuration testing, bug synthesis, function synthesis, and many more. State-of-the-art approaches for uniform sampling face a trade-off between scalability and theoretical guarantees. Many state-of-the-art uniform samplers do not provide any theoretical guarantees on the distribution of the samples generated; empirically, however, they have shown promising results. In such cases, the main challenge is to test whether the distribution according to which samples are generated is indeed uniform or not. Recently, Chakraborty and Meel (2019) designed the first scalable sampling tester, Barbarik, based on a grey-box sampling technique, for testing whether the distribution according to which a given sampler samples is close to uniform or far from it. They were able to show that many off-the-shelf samplers are far from uniform. The availability of Barbarik increased the test-driven development of samplers. More recently, Golia, Soos, Chakraborty and Meel (2021) designed a uniform-like sampler, CMSGen, which was shown to be accepted by Barbarik on all the instances. However, CMSGen does not provide any theoretical analysis of its sampling quality. CMSGen leads us to observe the need for a tester that provides a quantitative answer to determine the quality of the underlying sampler, instead of merely a qualitative answer of Accept or Reject. Towards this goal, we design a computational-hardness-based tester, ScalBarbarik, that provides a more nuanced analysis of the quality of a sampler. ScalBarbarik allows more expressive measurement of the quality of the underlying samplers. We empirically show that the state-of-the-art sampler CMSGen is not accepted as a uniform-like sampler by ScalBarbarik.
Furthermore, we show that ScalBarbarik can be used to design a sampler that achieves a balance between scalability and uniformity.
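For intuition only, the question such testers address can be illustrated with a naive chi-square-style check over an explicitly enumerated solution space. Barbarik and ScalBarbarik are far more sophisticated (grey-box, sample-efficient, with formal guarantees); this sketch, with an invented toy solution space, just shows what "close to uniform" means operationally:

```python
# Illustrative sketch only -- a naive chi-square statistic comparing a
# sampler's draws against the uniform distribution over a small, explicitly
# enumerated solution space. Real sampler testers avoid full enumeration.
import random
from collections import Counter

def chi2_stat(samples, solutions):
    """Pearson chi-square statistic of observed draws vs. uniform."""
    counts = Counter(samples)
    expected = len(samples) / len(solutions)
    return sum((counts.get(s, 0) - expected) ** 2 / expected
               for s in solutions)

solutions = list(range(8))            # pretend: 8 satisfying assignments
random.seed(0)
uniform_draws = [random.choice(solutions) for _ in range(8000)]
# a skewed sampler that heavily over-weights solution 0:
skewed_draws = [random.choice(solutions + [0] * 8) for _ in range(8000)]

print(chi2_stat(uniform_draws, solutions)
      < chi2_stat(skewed_draws, solutions))
```

The uniform sampler's statistic stays near the chi-square expectation, while the skewed sampler's explodes; the hard part, which these papers tackle, is reaching such a verdict without enumerating the solution space.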
10:00  On the Enumeration of Frequent High Utility Itemsets: A Symbolic AI Approach PRESENTER: Said Jabbour ABSTRACT. Mining interesting patterns from data is a core part of the data mining world. High utility mining, an active research topic in data mining, aims to discover valuable itemsets with high profit (e.g., cost, risk). However, the interestingness measure of an itemset must reflect not only the importance of items in terms of profit, but also their occurrence in the data, in order to support more crucial decisions. Some proposals have been introduced to deal with the problem of computing high utility itemsets that meet a minimum support threshold. However, in these existing proposals, all transactions in which the itemset appears are taken into account, including those in which the itemset has a low profit; no additional information about the overall utility of the itemset is considered. This paper addresses this issue by introducing a SAT-based model to efficiently find the set of all frequent high utility itemsets, with a minimum utility threshold applied to each transaction in which the itemset appears. More specifically, we reduce the problem of mining frequent high utility itemsets to that of enumerating the models of a propositional logic formula, and then use state-of-the-art SAT solvers to solve it. Afterwards, to make our approach more efficient, we provide a decomposition technique that is particularly suitable for deriving smaller, independent subproblems that are easy to solve. Finally, an extensive experimental evaluation on various popular real-world datasets shows that our method is fast and scales well compared to state-of-the-art algorithms.
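To make the mining task concrete, the following brute-force Python sketch (with an invented toy database; the paper instead reduces the problem to SAT model enumeration) finds itemsets supported by enough transactions, counting only transactions where the itemset itself is profitable enough:

```python
# Brute-force sketch of the mining task from the abstract: find itemsets that
# appear in >= min_supp transactions, counting only transactions in which the
# itemset's utility is >= min_util. The paper solves this via SAT enumeration.
from itertools import combinations

# toy database: each transaction maps item -> utility (e.g. profit * quantity)
db = [
    {"a": 5, "b": 1, "c": 2},
    {"a": 4, "b": 1},
    {"b": 2, "c": 6},
    {"a": 6, "c": 1},
]

def frequent_high_utility(db, min_supp, min_util):
    items = sorted({i for t in db for i in t})
    result = []
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            # support restricted to transactions where the itemset is
            # both present and profitable enough
            supp = sum(
                1 for t in db
                if all(i in t for i in itemset)
                and sum(t[i] for i in itemset) >= min_util
            )
            if supp >= min_supp:
                result.append(set(itemset))
    return result

print(frequent_high_utility(db, min_supp=2, min_util=4))
```

Note how {"b", "c"} is rejected even though it occurs twice: only one of its occurrences reaches the per-transaction utility threshold, which is exactly the refinement over plain frequent-itemset mining that the abstract describes.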
10:00  From Crossing-Free Resolution to MaxSAT Resolution PRESENTER: Mohamed Sami Cherif ABSTRACT. Adapting a resolution proof for SAT into a MaxSAT resolution proof without considerably increasing the size of the proof is an open problem. Read-once resolution, where each clause is used at most once in the proof, represents the only fragment of resolution for which an adaptation using exclusively MaxSAT resolution is known, and it is trivial. Proofs containing non-read-once clauses are difficult to adapt because the MaxSAT resolution rule replaces the premises by the conclusions. This paper contributes to this open problem by defining, for the first time since the introduction of MaxSAT resolution, a new fragment of resolution whose proofs can be adapted to MaxSAT resolution proofs without substantially increasing their size. In this fragment, called crossing-free resolution, non-read-once clauses are used independently to infer new information, making it possible to carry each non-read-once clause along while unfolding the proof until a substitute is required.
11:00  PRESENTER: Rebecca Gentzel ABSTRACT. Haddock, introduced in [11], is a declarative language and architecture for the specification and implementation of multivalued decision diagrams. It relies on a labeled transition system to specify and compose individual constraints into a propagator with filtering capabilities that automatically delivers the expected level of filtering. Yet, the operational potency of the filtering algorithms strongly correlates with the heuristics used to carry out refinements of the diagrams. This paper considers how to empower Haddock users with the ability to unobtrusively specify various such heuristics and derive the computational benefits of exerting fine-grained control over the refinement process.
11:30  CNF Encodings of Binary Constraint Trees PRESENTER: Ruiwei Wang ABSTRACT. Ordered Multivalued Decision Diagrams (MDDs) have been shown to be useful for representing finite-domain functions and relations. For example, various constraints can be modelled with MDD constraints. Recently, a new representation called Binary Constraint Tree (BCT), a (special) tree-structured binary Constraint Satisfaction Problem, has been proposed to encode MDDs and has been shown to outperform existing MDD constraint propagators in Constraint Programming solvers. BCT is a compact representation, and it can be exponentially smaller than an MDD for representing some constraints. Here, we also show that BCT is compact for representing non-deterministic finite-state automaton (NFA) constraints. In this paper, we investigate how to encode BCT into CNF form, making it suitable for SAT solvers. We present and investigate five BCT CNF encodings. We compare the propagation strength of the BCT CNF encodings and experimentally evaluate the encodings on a range of existing benchmarks. We also compare with seven existing CNF encodings of MDD constraints. Experimental results show that the CNF encodings of BCT constraints can outperform those of MDD constraints on various benchmarks.
12:00  PRESENTER: Nicolas Schmidt ABSTRACT. In order to reduce the size of compiled forms in knowledge compilation, we propose a new approach based on splitting the main representation into a nucleus representation and satellite representations. The nucleus representation is the projection of the original representation onto the "main" variables, and the satellite representations define the other variables according to the nucleus. We propose a language and a method, aimed at OBDD/OMDD representations, to compile into this split form. Our experimental study shows major size reductions on configuration- and diagnosis-oriented benchmarks.
11:00  Learning Constraint Programming Models from Data using Generate-and-Aggregate PRESENTER: Samuel Kolb ABSTRACT. Constraint programming (CP) is widely used for solving real-world problems. However, designing these models requires substantial expertise. In this paper, we tackle this problem by synthesising models automatically from past solutions. We introduce COUNT-CP, which uses simple grammars and a generate-and-aggregate approach to learn expressive first-order constraints typically used in CP, as well as their parameters, from data. The learned constraints generalise across instances of different sizes and can be used to solve unseen instances, e.g., learning constraints from a 4x4 Sudoku to solve a 9x9 Sudoku, or learning nurse staffing requirements across hospitals. COUNT-CP is implemented using the CPMpy constraint programming and modelling environment to produce constraints with nested mathematical expressions. The method is empirically evaluated on a set of suitable benchmark problems and is shown to learn accurate and compact models quickly.
11:30  Constraint Acquisition Based on Solution Counting PRESENTER: Christopher Coulombe ABSTRACT. We propose CABSC, a system that performs Constraint Acquisition Based on Solution Counting. In order to learn a Constraint Satisfaction Problem (CSP), the user provides positive examples and a Meta-CSP, i.e. a model of a combinatorial problem whose solution is a CSP. This Meta-CSP allows listing the potential constraints that can be part of the CSP the user wants to learn. It also allows stating the parameters of the constraints, such as the coefficients of a linear equation, and imposing constraints over these parameters. CABSC reads the Meta-CSP using an augmented version of the MiniZinc language and returns the CSP that accepts the fewest solutions among the CSPs accepting all positive examples. This is done using a branch-and-bound algorithm whose bounding mechanism makes use of a model counter. Experiments show that CABSC is successful at learning constraints and their parameters from positive examples.
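The selection principle at the heart of this abstract, picking the candidate constraint with the fewest solutions among those accepting all positive examples, can be sketched on a toy scale (the real system uses MiniZinc models, a dedicated model counter, and branch and bound; here everything, including the candidate pool, is invented for illustration):

```python
# Toy sketch of the "fewest solutions" selection principle (not CABSC itself):
# among candidate constraints accepting all positive examples, prefer the one
# whose CSP admits the fewest solutions, i.e. the tightest fit to the data.
from itertools import product

domain = range(1, 5)                      # two variables x, y in 1..4
candidates = {
    "x <= y":  lambda x, y: x <= y,
    "x < y":   lambda x, y: x < y,
    "x != y":  lambda x, y: x != y,
}
positives = [(1, 2), (2, 4)]              # example solutions given by the user

def count_models(check):
    """Brute-force model counting over the tiny joint domain."""
    return sum(1 for x, y in product(domain, repeat=2) if check(x, y))

# keep only candidates consistent with every positive example
accepting = {name: check for name, check in candidates.items()
             if all(check(*e) for e in positives)}
# pick the accepting candidate with the fewest solutions
best = min(accepting, key=lambda n: count_models(accepting[n]))
print(best)   # x < y
```

All three candidates accept both examples, but "x < y" admits only 6 of the 16 assignments, versus 10 and 12 for the others, so it is the tightest hypothesis; in CABSC the same comparison is made by a model counter inside branch and bound rather than by enumeration.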
12:00  PRESENTER: Jovial Cheukam-Ngouonou ABSTRACT. To automate the discovery of conjectures on combinatorial objects, we introduce the concept of a map of sharp bounds on characteristics of combinatorial objects, which provides a set of interrelated sharp bounds for these combinatorial objects. We then describe the Bound Seeker, a CP-based system that gradually acquires maps of conjectures. The system was tested by searching for conjectures on bounds on characteristics of digraphs: it constructed sixteen maps involving 431 conjectures on sharp lower and upper bounds on eight digraph characteristics.
Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).
14:00-15:00: Tutorial
15:00-15:30: Benchmarking
14:00  Formal Explainable AI ABSTRACT. Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges being faced by the area of AI these days. Although the majority of approaches to XAI are of a heuristic nature, recent work proposed the use of abductive reasoning for computing provably correct explanations for machine learning (ML) predictions. The proposed rigorous approach was shown to be useful not only for computing trustable explanations but also for reasoning about explanations computed heuristically. It was also applied to uncover a close relationship between XAI and the verification of ML models. This talk will overview the advances of the formal approach to XAI, as well as the use of reasoning in devising interpretable rule-based ML models, including decision trees, decision sets, and decision lists.
15:00  PRESENTER: Nguyen Dang ABSTRACT. Benchmarking is an important tool for assessing the relative performance of alternative solving approaches. However, the utility of benchmarking is limited by the quantity and quality of the available problem instances. Modern constraint programming languages typically allow the specification of a class-level model that is parameterised over instance data. This separation presents an opportunity for automated approaches to generate instance data defining instances that are graded (solvable at a certain difficulty level for a solver) or that can discriminate between two solving approaches. In this paper, we introduce a framework that combines these two properties to generate a large number of benchmark instances, purposely generated for effective and informative benchmarking. We use five problem classes from the MiniZinc competition to demonstrate the usage of our framework. In addition to producing a ranking among solvers, our framework gives a broader understanding of the behaviour of each solver over the whole instance space; for example, by finding subsets of instances where the solver's performance varies significantly from its average.
14:00-15:00: Planning
15:00-15:30: DCOP
14:00  Isomorphisms between STRIPS problems and subproblems PRESENTER: Arnaud Lequen ABSTRACT. Determining whether two STRIPS planning instances are isomorphic is the simplest form of comparison between planning instances. It is also a particular case of the problem of finding an isomorphism between a planning instance P and a subinstance of another instance P'. One application of such an isomorphism is to efficiently produce a compiled form containing all solutions to P from a compiled form containing all solutions to P'. In this paper, we study the complexity of both problems. We show that the former is GI-complete and can thus be solved, in theory, in quasi-polynomial time. While we prove the latter to be NP-complete, we propose an algorithm to build an isomorphism, when possible. We report extensive experimental trials on benchmark problems which demonstrate conclusively that applying constraint propagation in preprocessing can greatly improve the efficiency of a SAT solver.
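The graph-isomorphism question underlying the GI-completeness result can be stated in a few lines. This is a naive permutation-search sketch on invented toy digraphs, nothing like the algorithms the paper studies, which exist precisely because brute force does not scale:

```python
# Brute-force sketch of the underlying digraph-isomorphism question (the
# paper shows STRIPS-instance isomorphism is GI-complete; practical solvers
# do far better than trying all vertex permutations).
from itertools import permutations

def isomorphic(edges1, edges2, n):
    """Is there a relabelling of vertices 0..n-1 mapping edges1 onto edges2?"""
    e1, e2 = set(edges1), set(edges2)
    if len(e1) != len(e2):
        return False
    return any(
        {(p[u], p[v]) for u, v in e1} == e2
        for p in permutations(range(n))
    )

# a directed 3-cycle is isomorphic to any relabelled 3-cycle...
print(isomorphic([(0, 1), (1, 2), (2, 0)], [(1, 0), (0, 2), (2, 1)], 3))  # True
# ...but not to a 2-edge path
print(isomorphic([(0, 1), (1, 2), (2, 0)], [(0, 1), (1, 2)], 3))  # False
```

The subinstance variant from the abstract replaces equality with containment of the relabelled edge set, which is what pushes the problem from GI up to NP-complete.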
14:30  PRESENTER: Joan Espasa Arxer ABSTRACT. In this work we focus on a planning problem based on Plotting, a tile-matching puzzle video game published by Taito in 1989. The objective of the game is to remove at least a certain number of coloured blocks from a grid by sequentially shooting blocks into the same grid. The interest and difficulty of Plotting are due to the complex transitions after every shot: various blocks are affected directly, while others can be indirectly affected by gravity. We highlight the difficulties and inefficiencies of modelling and solving Plotting using PDDL, the de facto standard language for AI planners. We also provide two constraint models that are able to capture the inherent complexities of the problem. In addition, we provide a set of benchmark instances, an instance generator, and an extensive experimental comparison demonstrating solving performance with SAT, CP, MIP and a state-of-the-art AI planner.
15:00  Completeness Matters: Towards Efficient Caching in Tree-based Synchronous Backtracking Search for DCOPs PRESENTER: Jie Wang ABSTRACT. Tree-based backtracking search is an important technique for solving Distributed Constraint Optimization Problems (DCOPs), where agents cooperatively exhaust the search space by branching on each variable to divide subproblems and reporting the results to their parent after solving each subproblem. Effectively reusing historical search results can therefore avoid unnecessary re-solving and substantially reduce the overall overhead. However, the existing caching schemes for asynchronous algorithms cannot be applied directly to synchronous ones, since a child agent reports lower and upper bounds rather than the precise cost of its exploration. In addition, the existing caching scheme for synchronous algorithms suffers from incompleteness and low cache utilization. Therefore, we propose a new caching scheme for tree-based synchronous backtracking search, named the Retention Scheme (RS). It utilizes the upper bounds of subproblems, which avoids the reuse of suboptimal solutions and thus ensures completeness, and it deploys a fine-grained cache information unit targeted at each child agent to improve the cache-hit rate. Furthermore, we introduce two new cache replacement schemes to further improve performance when memory is limited. Finally, we theoretically prove the completeness of our method and empirically show its superiority.