FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
CP ON THURSDAY, AUGUST 4TH


08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 63A: Tutorial
Location: Taub 7
09:00
MDD-Based Constraint Programming in Haddock
PRESENTER: Laurent Michel

ABSTRACT. Multi-valued decision diagrams (MDDs) have recently emerged as an effective technology for solving combinatorial optimization problems. In this tutorial, we focus on the use of decision diagrams in the context of constraint programming. The tutorial consists of two parts: 1) an overview of MDD-based constraint programming, and 2) a description and demonstration of the Haddock system for specifying and compiling MDD propagators within a constraint programming solver.

MDD-based constraint propagation was first introduced by Andersen et al. in 2007. They introduced the concept of relaxed decision diagrams as a mechanism to enhance the propagation of information between constraints: instead of using a domain store, they proposed using an ‘MDD store’ to represent and communicate more structural relationships between variables. As a consequence, the search tree size can be dramatically reduced. Since then, many papers have further studied the use of decision diagrams in constraint programming, as well as in related areas such as mathematical programming. The first part of this tutorial will describe the core concepts of MDD-based constraint propagation and provide examples of global constraints such as AllDifferent, Among, and Disjunctive.
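
To make the idea of a relaxed MDD concrete, the following minimal Python sketch (ours, not the tutorial's code) builds a width-limited relaxed MDD for a single Among constraint and uses it to filter variable domains; the choice of node states as count intervals, the interval-hull merging rule, and the example are illustrative assumptions only.

    # Width-limited relaxed MDD for Among(X, S, lo, hi): at least lo and at most hi
    # of the variables take a value from the set S. Each node stores an interval
    # (min_cnt, max_cnt) of S-values seen so far; merging nodes takes the interval
    # hull, which is what makes the diagram a relaxation rather than an exact MDD.
    def among_filter(domains, S, lo, hi, max_width=4):
        layers = [[(0, 0)]]                        # root layer: no S-value taken yet
        arcs_per_layer = []                        # arcs are (src_index, value, dst_index)
        for dom in domains:
            nxt, arcs = [], []
            for src, (a, b) in enumerate(layers[-1]):
                for v in dom:
                    d = 1 if v in S else 0
                    st = (a + d, b + d)
                    if st[0] > hi:                 # too many S-values already
                        continue
                    if st not in nxt:
                        nxt.append(st)
                    arcs.append((src, v, nxt.index(st)))
            while len(nxt) > max_width:            # relaxation: merge the first two nodes
                (a1, b1), (a2, b2) = nxt[0], nxt[1]
                nxt = [(min(a1, a2), max(b1, b2))] + nxt[2:]
                arcs = [(s, v, 0 if d <= 1 else d - 1) for (s, v, d) in arcs]
            layers.append(nxt)
            arcs_per_layer.append(arcs)

        # Backward pass: keep only nodes from which an accepting terminal is reachable.
        n = len(domains)
        alive = [set() for _ in range(n + 1)]
        alive[n] = {j for j, (a, b) in enumerate(layers[n]) if a <= hi and b >= lo}
        for i in range(n - 1, -1, -1):
            alive[i] = {s for (s, v, d) in arcs_per_layer[i] if d in alive[i + 1]}

        # A value survives in a domain iff it labels an arc on some surviving path.
        return [sorted({v for (s, v, d) in arcs_per_layer[i] if d in alive[i + 1]})
                for i in range(n)]

    # Example: 4 variables over {0,1,2}; all 4 must take a value in {1,2},
    # so the value 0 is filtered from every domain.
    print(among_filter([[0, 1, 2]] * 4, {1, 2}, 4, 4))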

In the second part, we will describe the Haddock system. It provides a specification language and implementation architecture for automatic decision diagram compilation. Haddock provides the rules for refining (splitting) and filtering (propagating) MDD abstractions. In addition, the language allows users to specify heuristics to guide the compilation process. Haddock allows the user to declare multiple MDDs within a CP model, each associated with a suitable set of constraints, and automatically integrates the MDD propagators into the constraint propagation fixpoint computation. We will give an overview of the Haddock system, provide examples of constraints such as Among, Gcc, and (weighted) Sum, and describe example applications. Participants are invited to download the Haddock code to work on hands-on examples during the tutorial.


09:00-10:30 Session 63B: SAT
Location: Taub 4
09:00
On Quantitative Testing of Samplers
PRESENTER: Mate Soos

ABSTRACT. The problem of uniform sampling is, given a formula F, to sample solutions of F uniformly at random from its solution space. Uniform sampling is a fundamental problem with widespread applications, including configuration testing, bug synthesis, function synthesis, and many more. State-of-the-art approaches for uniform sampling exhibit a trade-off between scalability and theoretical guarantees. Many state-of-the-art uniform samplers do not provide any theoretical guarantees on the distribution of the generated samples; empirically, however, they have shown promising results. In such cases, the main challenge is to test whether the distribution according to which samples are generated is indeed uniform.

Recently, Chakraborty and Meel (2019) designed the first scalable sampling tester, Barbarik, based on a grey-box sampling technique, to test whether the distribution according to which a given sampler generates samples is close to or far from uniform. They showed that many off-the-shelf samplers are far from uniform. The availability of Barbarik encouraged test-driven development of samplers. More recently, Golia, Soos, Chakraborty and Meel (2021) designed a uniform-like sampler, CMSGen, which was shown to be accepted by Barbarik on all instances. However, CMSGen does not come with any theoretical analysis of its sampling quality.

CMSGen highlights the need for a tester that provides a quantitative measure of the quality of the underlying sampler instead of merely a qualitative answer of Accept or Reject. Towards this goal, we design a computational hardness-based tester, ScalBarbarik, that provides a more nuanced analysis of the quality of a sampler. ScalBarbarik allows a more expressive measurement of the quality of the underlying samplers. We empirically show that the state-of-the-art sampler CMSGen is not accepted as a uniform-like sampler by ScalBarbarik. Furthermore, we show that ScalBarbarik can be used to design a sampler that achieves a balance between scalability and uniformity.
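
For intuition only, the naive baseline that such testers improve on is a goodness-of-fit test over an enumerated solution space; the Python sketch below (with a placeholder sampler and solution set of our own) shows why that baseline does not scale: it needs the full solution space and many samples, whereas Barbarik-style testers work from few samples with formal guarantees.

    # Naive chi-square uniformity check over an enumerated solution space.
    # `sampler` and `solutions` are placeholders for the sampler under test and
    # the enumerated solutions of the formula; real testers avoid enumeration.
    from collections import Counter
    import random

    def chi_square_statistic(sampler, solutions, n_samples=10000):
        counts = Counter(sampler() for _ in range(n_samples))
        expected = n_samples / len(solutions)
        stat = sum((counts.get(s, 0) - expected) ** 2 / expected for s in solutions)
        return stat   # compare to a chi-square critical value, len(solutions)-1 degrees of freedom

    # Example: a sampler biased towards one solution yields a large statistic.
    sols = [(0, 0), (0, 1), (1, 0)]
    biased = lambda: random.choices(sols, weights=[8, 1, 1])[0]
    print(chi_square_statistic(biased, sols))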

09:30
On the Enumeration of Frequent High Utility Itemsets: A Symbolic AI Approach
PRESENTER: Said Jabbour

ABSTRACT. Mining interesting patterns from data is a core part of the data mining field. High utility mining, an active research topic in data mining, aims to discover valuable itemsets with high profit (e.g., cost, risk). However, the measure of interest of an itemset must reflect not only the importance of its items in terms of profit, but also their occurrence in the data, in order to make more crucial decisions. Several proposals have been introduced to deal with the problem of computing high utility itemsets that meet a minimum support threshold. However, in these existing proposals, all transactions in which the itemset appears are taken into account, including those in which the itemset has a low profit, so no additional information about the overall utility of the itemset is taken into account. This paper addresses this issue by introducing a SAT-based model to efficiently find the set of all frequent high utility itemsets, with a minimum utility threshold applied to each transaction in which the itemset appears. More specifically, we reduce the problem of mining frequent high utility itemsets to that of enumerating the models of a propositional formula, and then use state-of-the-art SAT solvers to solve it. To make our approach more efficient, we provide a decomposition technique that is particularly suitable for deriving smaller, independent sub-problems that are easy to solve. Finally, an extensive experimental evaluation on various popular real-world datasets shows that our method is fast and scales well compared to state-of-the-art algorithms.
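
As an illustration of the mining criterion only (a brute-force Python sketch of our own, not the paper's SAT encoding), the following assumes each transaction maps items to their utility in that transaction:

    from itertools import combinations

    # An itemset is reported iff it occurs in at least min_sup transactions and
    # its utility is at least min_util in every transaction that contains it.
    def frequent_high_utility_itemsets(transactions, min_sup, min_util):
        items = sorted({i for t in transactions for i in t})
        result = []
        for k in range(1, len(items) + 1):
            for itemset in combinations(items, k):
                covers = [t for t in transactions if all(i in t for i in itemset)]
                if len(covers) >= min_sup and all(
                        sum(t[i] for i in itemset) >= min_util for t in covers):
                    result.append(itemset)
        return result

    # Example database: utility (profit) of each item per transaction.
    db = [{'a': 5, 'b': 1}, {'a': 6, 'c': 2}, {'a': 4, 'b': 3, 'c': 1}]
    print(frequent_high_utility_itemsets(db, min_sup=2, min_util=5))  # [('a','b'), ('a','c')]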

10:00
From Crossing-Free Resolution to Max-SAT Resolution

ABSTRACT. Adapting a resolution proof for SAT into a Max-SAT resolution proof without considerably increasing the size of the proof is an open problem. Read-once resolution, where each clause is used at most once in the proof, is the only fragment of resolution for which an adaptation using exclusively Max-SAT resolution is known, and that adaptation is trivial. Proofs containing non-read-once clauses are difficult to adapt because the Max-SAT resolution rule replaces the premises by the conclusions. This paper contributes to this open problem by defining, for the first time since the introduction of Max-SAT resolution, a new fragment of resolution whose proofs can be adapted to Max-SAT resolution proofs without substantially increasing their size. In this fragment, called crossing-free resolution, non-read-once clauses are used independently to infer new information, which makes it possible to carry each non-read-once clause along while unfolding the proof, until a substitute is required.
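
For reference, the standard Max-SAT resolution rule replaces the two premise clauses (sharing the variable x) by the resolvent together with compensation clauses; the notation below is ours:

    \[
    \frac{x \lor a_1 \lor \dots \lor a_s \qquad
          \bar{x} \lor b_1 \lor \dots \lor b_t}
         {\begin{array}{l}
            a_1 \lor \dots \lor a_s \lor b_1 \lor \dots \lor b_t \\
            x \lor a_1 \lor \dots \lor a_s \lor \bar{b}_1 \\
            x \lor a_1 \lor \dots \lor a_s \lor b_1 \lor \bar{b}_2 \\
            \;\;\vdots \\
            x \lor a_1 \lor \dots \lor a_s \lor b_1 \lor \dots \lor b_{t-1} \lor \bar{b}_t \\
            \bar{x} \lor b_1 \lor \dots \lor b_t \lor \bar{a}_1 \\
            \;\;\vdots \\
            \bar{x} \lor b_1 \lor \dots \lor b_t \lor a_1 \lor \dots \lor a_{s-1} \lor \bar{a}_s
          \end{array}}
    \]

Because both premises are consumed, a clause needed again later in the proof must be re-derived, which is exactly what makes non-read-once clauses hard to handle.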

10:30-11:00 Coffee Break
11:00-12:30 Session 65A: MDD
Location: Taub 7
11:00
Heuristics for MDD Propagation in Haddock
PRESENTER: Rebecca Gentzel

ABSTRACT. Haddock, introduced in [11], is a declarative language and architecture for the specification and implementation of multi-valued decision diagrams (MDDs). It relies on a labeled transition system to specify and compose individual constraints into a propagator whose filtering capabilities automatically deliver the expected level of filtering. Yet, the operational potency of the filtering algorithms strongly correlates with the heuristics used to carry out refinements of the diagrams. This paper considers how to empower Haddock users with the ability to unintrusively specify various such heuristics and derive the computational benefits of exerting fine-grained control over the refinement process.

11:30
CNF Encodings of Binary Constraint Trees
PRESENTER: Ruiwei Wang

ABSTRACT. Ordered Multi-valued Decision Diagrams (MDDs) have been shown to be useful for representing finite-domain functions and relations; for example, various constraints can be modelled with MDD constraints. Recently, a new representation called the Binary Constraint Tree (BCT), a (special) tree-structured binary Constraint Satisfaction Problem, has been proposed to encode MDDs and shown to outperform existing MDD constraint propagators in Constraint Programming solvers. BCT is a compact representation and can be exponentially smaller than an MDD for some constraints. We also show that BCT is compact for representing non-deterministic finite state automaton (NFA) constraints. In this paper, we investigate how to encode BCT into CNF, making it suitable for SAT solvers. We present and investigate five BCT CNF encodings, compare their propagation strength, and experimentally evaluate them on a range of existing benchmarks, also comparing against seven existing CNF encodings of MDD constraints. Experimental results show that the CNF encodings of BCT constraints can outperform those of MDD constraints on various benchmarks.
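
For orientation only, a simple path-based CNF encoding of an MDD constraint is sketched below in Python; it illustrates the kind of encoding the paper compares against but is not one of its five BCT encodings, and all names and assumptions (a layered MDD with arcs to the false terminal removed, DIMACS-style integer literals) are ours.

    # Boolean variables: one per MDD node ("the path goes through this node") and
    # one per arc ("the path takes this arc"). `arcs` is a list of triples
    # (source node, label literal, destination node), where the label literal is
    # the DIMACS literal asserting that the layer's CP variable takes the arc's value.
    def mdd_to_cnf(nodes, arcs, root, true_terminal, first_free_var):
        node_var = {n: first_free_var + i for i, n in enumerate(nodes)}
        arc_var = [first_free_var + len(nodes) + i for i in range(len(arcs))]

        cnf = [[node_var[root]]]                          # the path starts at the root
        for n in nodes:
            if n == true_terminal:
                continue
            out = [i for i, (s, _, _) in enumerate(arcs) if s == n]
            # a node on the path must take one of its outgoing arcs
            cnf.append([-node_var[n]] + [arc_var[i] for i in out])
        for i, (src, lab, dst) in enumerate(arcs):
            cnf.append([-arc_var[i], lab])                # arc taken => its label holds
            cnf.append([-arc_var[i], node_var[dst]])      # arc taken => its child is on the path
        return cnf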

12:00
Nucleus-Satellites Systems of OMDDs for Reducing the Size of Compiled Forms
PRESENTER: Nicolas Schmidt

ABSTRACT. In order to reduce the size of compiled forms in knowledge compilation, we propose a new approach based on splitting the main representation into a nucleus representation and satellite representations. The nucleus representation is the projection of the original representation onto the ``main'' variables, and the satellite representations define the other variables according to the nucleus. We propose a language and a method, aimed at OBDD/OMDD representations, to compile into this split form. Our experimental study shows major size reductions on configuration- and diagnosis-oriented benchmarks.

11:00-12:30 Session 65B: Constraint and Conjecture Acquisition
Location: Taub 4
11:00
Learning Constraint Programming Models from Data using Generate-and-Aggregate
PRESENTER: Samuel Kolb

ABSTRACT. Constraint programming (CP) is widely used for solving real-world problems. However, designing these models requires substantial expertise. In this paper, we tackle this problem by synthesising models automatically from past solutions. We introduce COUNT-CP, which uses simple grammars and a generate-and-aggregate approach to learn expressive first-order constraints typically used in CP, as well as their parameters, from data. The learned constraints generalise across instances of different sizes and can be used to solve unseen instances -- e.g., learning constraints from a 4x4 Sudoku to solve a 9x9 Sudoku, or learning nurse staffing requirements across hospitals. COUNT-CP is implemented using the CPMpy constraint programming and modelling environment to produce constraints with nested mathematical expressions. The method is empirically evaluated on a set of suitable benchmark problems and is shown to learn accurate and compact models quickly.
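
As a rough illustration of the target models (a CPMpy sketch of our own, not COUNT-CP output, assuming CPMpy's intvar/AllDifferent/Model interface), the same size-generic Sudoku model covers both the 4x4 and the 9x9 case:

    import math
    import cpmpy as cp

    # One model parameterised by the grid size n; the constraints
    # (AllDifferent on rows, columns and blocks) transfer across instance sizes.
    def sudoku_model(n=9):
        b = math.isqrt(n)                               # block size: 2 for 4x4, 3 for 9x9
        grid = cp.intvar(1, n, shape=(n, n), name="cell")
        model = cp.Model()
        for i in range(n):
            model += cp.AllDifferent(grid[i, :])        # rows
            model += cp.AllDifferent(grid[:, i])        # columns
        for br in range(0, n, b):
            for bc in range(0, n, b):
                model += cp.AllDifferent(grid[br:br + b, bc:bc + b].flatten())
        return model, grid

    model, grid = sudoku_model(4)
    if model.solve():
        print(grid.value())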

11:30
Constraint Acquisition Based on Solution Counting

ABSTRACT. We propose CABSC, a system that performs Constraint Acquisition Based on Solution Counting. In order to learn a Constraint Satisfaction Problem (CSP), the user provides positive examples and a Meta-CSP, i.e., a model of a combinatorial problem whose solution is a CSP. This Meta-CSP allows listing the potential constraints that can be part of the CSP the user wants to learn. It also allows stating the parameters of the constraints, such as the coefficients of a linear equation, and imposing constraints over these parameters. CABSC reads the Meta-CSP, written in an augmented version of the MiniZinc language, and returns the CSP that accepts the fewest solutions among those accepting all positive examples. This is done using a branch-and-bound algorithm whose bounding mechanism makes use of a model counter. Experiments show that CABSC is successful at learning constraints and their parameters from positive examples.
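
A brute-force Python sketch of the objective CABSC optimises (not its branch-and-bound search) is given below; `count_models` stands in for the model counter used by the bounding mechanism, and `candidates` is a hypothetical set of candidate constraints given as predicates:

    from itertools import combinations

    # Return a subset of candidate constraints that accepts all positive examples
    # and whose CSP has the fewest solutions (as reported by the model counter).
    def learn_csp(candidates, positives, count_models):
        best, best_count = None, float("inf")
        for k in range(len(candidates) + 1):
            for subset in combinations(candidates, k):
                if all(all(c(e) for c in subset) for e in positives):
                    m = count_models(subset)
                    if m < best_count:
                        best, best_count = subset, m
        return best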

12:00
Acquiring Maps of Interrelated Conjectures on Sharp Bounds

ABSTRACT. To automate the discovery of conjectures on combinatorial objects, we introduce the concept of a map of sharp bounds on characteristics of combinatorial objects, which provides a set of interrelated sharp bounds for these objects. We then describe the Bound Seeker, a CP-based system that gradually acquires such maps of conjectures. The system was tested on the search for conjectures on bounds of digraph characteristics: it constructs sixteen maps involving 431 conjectures on sharp lower and upper bounds on eight digraph characteristics.

12:30-14:00 Lunch Break

Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).

14:00-15:30 Session 67A: Tutorial + Benchmarking

14:00-15:00: Tutorial

15:00-15:30: Benchmarking

Location: Taub 7
14:00
Formal Explainable AI

ABSTRACT. Explainable artificial intelligence (XAI) arguably represents one of the most crucial challenges faced by AI today. Although the majority of approaches to XAI are heuristic in nature, recent work proposed the use of abductive reasoning for computing provably correct explanations of machine learning (ML) predictions. This rigorous approach was shown to be useful not only for computing trustable explanations but also for reasoning about explanations computed heuristically. It was also applied to uncover a close relationship between XAI and the verification of ML models. This talk will overview advances in the formal approach to XAI, as well as the use of reasoning for devising interpretable rule-based ML models, including decision trees, decision sets, and decision lists.
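
As a minimal sketch of one formal-XAI primitive (the deletion-based computation of a single abductive explanation; the function names and toy oracle below are ours), the entailment check is a placeholder for the oracle query made against the ML model in practice:

    # An abductive explanation is a subset-minimal set of feature assignments
    # that still entails the model's prediction on the given instance.
    def one_abductive_explanation(instance, entails):
        explanation = dict(instance)
        for feature in list(instance):
            candidate = {f: v for f, v in explanation.items() if f != feature}
            if entails(candidate):             # prediction still entailed? drop the feature
                explanation = candidate
        return explanation

    # Toy usage: the "model" predicts 1 iff x1 = x2 = 1; x3 is irrelevant.
    inst = {"x1": 1, "x2": 1, "x3": 0}
    entails = lambda sub: sub.get("x1") == 1 and sub.get("x2") == 1
    print(one_abductive_explanation(inst, entails))   # {'x1': 1, 'x2': 1}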

15:00
A framework for generating informative benchmark instances
PRESENTER: Nguyen Dang

ABSTRACT. Benchmarking is an important tool for assessing the relative performance of alternative solving approaches. However, the utility of benchmarking is limited by the quantity and quality of the available problem instances. Modern constraint programming languages typically allow the specification of a class-level model that is parameterised over instance data. This separation presents an opportunity for automated approaches to generate instances that are graded (solvable at a certain difficulty level for a solver) or that discriminate between two solving approaches. In this paper, we introduce a framework that combines these two properties to generate a large number of benchmark instances, purposely generated for effective and informative benchmarking. We use five problem classes from the MiniZinc competition to demonstrate the usage of our framework. In addition to producing a ranking among solvers, our framework gives a broader understanding of the behaviour of each solver over the whole instance space, for example by finding subsets of instances where a solver's performance deviates significantly from its average.

14:00-15:30 Session 67B: Planning + DCOP

14:00-15:00: Planning

15:00-15:30: DCOP

Location: Taub 4
14:00
Isomorphisms between STRIPS problems and sub-problems
PRESENTER: Arnaud Lequen

ABSTRACT. Determining whether two STRIPS planning instances are isomorphic is the simplest form of comparison between planning instances. It is also a particular case of the problem concerned with finding an isomorphism between a planning instance P and a sub-instance of another instance P'. One application of such an isomorphism is to efficiently produce a compiled form containing all solutions to P from a compiled form containing all solutions to P'. In this paper, we study the complexity of both problems. We show that the former is GI-complete, and can thus be solved, in theory, in quasi-polynomial time. While we prove the latter to be NP-complete, we propose an algorithm to build an isomorphism, when possible. We report extensive experimental trials on benchmark problems which demonstrate conclusively that applying constraint propagation in preprocessing can greatly improve the efficiency of a SAT solver.

14:30
Plotting: A Planning Problem With Complex Transitions

ABSTRACT. In this work we focus on a planning problem based on Plotting, a tile-matching puzzle video game published by Taito in 1989. The objective of the game is to remove at least a certain number of coloured blocks from a grid by sequentially shooting blocks into the same grid. The interest and the difficulty of Plotting is due to the complex transitions after every shot: various blocks are affected directly, while others can be indirectly affected by gravity. We highlight the difficulties and inefficiencies of modelling and solving Plotting using PDDL, the de-facto standard language for AI planners. We also provide two constraint models that are able to capture the inherent complexities of the problem. In addition, we provide a set of benchmark instances, an instance generator and an extensive experimental comparison demonstrating solving performance with SAT, CP, MIP and a state-of-the-art AI planner.

15:00
Completeness Matters: Towards Efficient Caching in Tree-based Synchronous Backtracking Search for DCOPs
PRESENTER: Jie Wang

ABSTRACT. Tree-based backtracking search is an important technique for solving Distributed Constraint Optimization Problems (DCOPs), where agents cooperatively exhaust the search space by branching on each variable to divide subproblems and reporting the results to their parent after solving each subproblem. Effectively reusing historical search results can therefore avoid unnecessary re-solving and substantially reduce the overall overhead. However, the existing caching schemes for asynchronous algorithms cannot be applied directly to synchronous ones, since a child agent reports lower and upper bounds rather than the precise cost of exploration. In addition, the existing caching scheme for synchronous algorithms suffers from incompleteness and low cache utilization. Therefore, we propose a new caching scheme for tree-based synchronous backtracking search, named the Retention Scheme (RS). It utilizes the upper bounds of subproblems to avoid reusing suboptimal solutions and thus ensure completeness, and it deploys a fine-grained cache information unit targeted at each child agent to improve the cache-hit rate. Furthermore, we introduce two new cache replacement schemes to further improve performance when memory is limited. Finally, we theoretically prove the completeness of our method and empirically show its superiority.

15:30-16:00 Coffee Break
16:00-17:00 Session 70: Plenary
16:00
Complexity Measures for Reactive Systems

ABSTRACT. A reactive system maintains an on-going interaction with its environment. At each moment in time, the system receives from the environment an assignment to the input signals and responds with an assignment to the output signals. The system is correct with respect to a given specification if, for all environments, the infinite computation generated by the interaction between the system and the environment satisfies the specification. Reactive systems are modeled by transducers: finite state machines whose transitions are labeled by assignments to the input signals and whose states are labeled by assignments to the output signals. In the synthesis problem, we automatically transform a given specification into a correct transducer.
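
For concreteness, a minimal Python transducer sketch in this spirit (the class, the Moore-style convention, and the request/grant example are ours, purely illustrative):

    # States are labeled with output assignments, transitions with input assignments.
    class Transducer:
        def __init__(self, init, output, delta):
            self.state, self.output, self.delta = init, output, delta

        def react(self, inputs):
            for i in inputs:                     # on-going interaction, step by step
                yield self.output[self.state]
                self.state = self.delta[(self.state, i)]

    # Example: emit 'grant' one step after every 'request'.
    t = Transducer(init="idle",
                   output={"idle": "nogrant", "serve": "grant"},
                   delta={("idle", "request"): "serve", ("idle", "none"): "idle",
                          ("serve", "request"): "serve", ("serve", "none"): "idle"})
    print(list(t.react(["request", "none", "request", "none"])))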

While complexity measures receive much attention in the design of algorithms, they are less explored in the context of synthesis. This is a serious drawback: just as we cannot imagine an algorithm designer who accepts a correct algorithm without checking its complexity, we cannot expect designers of reactive systems to accept synthesized systems that are only guaranteed to be correct. It is not clear, however, how to define the complexity of a transducer. Unlike the case of algorithms (or Turing machines in general), there is no "size of input" to relate to, and measures like time and space complexity are irrelevant. Indeed, we care about on-going interactions, along which the transducer reacts instantly according to its transition function. One immediate complexity measure for a transducer is its size, but many more complexity measures are of interest. The talk discusses such measures and describes how the search for optimal reactive systems affects the synthesis problem.