Constraint modelling and solving: Learning from observing people
ABSTRACT. Research in constraint programming typically focuses on problem-solving efficiency, while the way users conceptualise problems and communicate with constraint programming tools is often sidelined. How humans think about constraint problems can be important for the development of efficient tools and representations that are useful to a broader audience. Although many visual programming languages exist as alternatives to textual representations for procedural languages, visual encoding of problem specifications has received little attention; visualization could thus provide an alternative way to model and understand such problems. We present an initial step towards a better understanding of the human side of the constraint modelling and solving process. We conducted a study that catalogues how people approach three parts of constraint programming: (1) problem representation, (2) problem solving, and (3) programming a solver. To our knowledge, this is the first human-centred study addressing how people approach constraint modelling and solving. We studied three groups with different expertise: non-computer scientists, computer scientists, and constraint programmers, and analysed their marks on paper (e.g., arrows), gestures (e.g., pointing), their mappings to problem concepts (e.g., containers, sets), and any strategies and explanations they provided. We discuss the results and the future research this study will hopefully inspire.
Solving XCSP3 constraint problems using tools from software verification
ABSTRACT. Since 2012, the Software Verification Competition (SV-COMP) has evaluated the performance of a broad range of software verification tools employing a variety of techniques and representations. These include SAT/SMT solving, abstract interpretation, Constrained Horn Clause solving, and finite-state automata.
The XCSP3 constraint format serves as an intermediate language for constraint problems. When trying to solve a particular problem, one often wishes to trial a range of techniques to see which one is initially most promising. Unfortunately, there might not always be an implementation of the desired technique for one's formulation.
We propose to address this problem by encoding XCSP3 problems as C program software verification problems. This grants the ability to trial all techniques implemented in SV-COMP entries. We should not expect the encoding to be efficient, but it may be sufficient to identify the most promising techniques.
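To give a feel for the flavour of such an encoding, here is a minimal sketch that emits a C verification harness for a toy constraint (x + y = 5 over 0..9). This is an illustrative assumption, not the paper's actual translation; the `emit_c_harness` function is hypothetical, while `__VERIFIER_nondet_int` and `reach_error` follow the usual SV-COMP conventions for nondeterministic inputs and error locations.

```python
# Sketch: translate a toy CSP (x + y == 5, with x, y in 0..9) into a
# C verification task in the SV-COMP style. A counterexample trace
# that reaches reach_error() corresponds to a solution of the CSP.

def emit_c_harness(domain_max, constraint_c):
    """Emit a C program whose error location is reachable iff the
    constraint is satisfiable (hypothetical encoding sketch)."""
    return f"""\
extern int __VERIFIER_nondet_int(void);
extern void reach_error(void);

int main(void) {{
    int x = __VERIFIER_nondet_int();
    int y = __VERIFIER_nondet_int();
    if (x < 0 || x > {domain_max} || y < 0 || y > {domain_max}) return 0;
    if ({constraint_c}) reach_error();  /* a solution of the CSP */
    return 0;
}}
"""

program = emit_c_harness(9, "x + y == 5")
print(program)
```

A verifier that reports the error location reachable also returns a witness trace, and the nondeterministic values along that trace are exactly a solution of the original constraints.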
ABSTRACT. This paper describes a constraint-based part-of-speech (POS) tagger, named CPOST, which treats POS tagging as a constraint satisfaction problem (CSP). CPOST treats each word as a variable, uses a lexicon to determine the domains of variables, employs context constraints to reduce ambiguity, and utilizes statistical models to label variables with values. This paper shows that, with a small number of context constraints that encode some of the basic linguistic knowledge, CPOST significantly enhances the precision at identifying base-form verbs, and mitigates the burden on syntax parsing.
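The core idea of treating tagging as constraint propagation can be sketched as follows. This is a toy illustration under assumed data, not CPOST's actual lexicon or rules: the lexicon, the `ALLOWED` tag-pair constraint, and the sentence are all hypothetical.

```python
# Sketch of POS tagging as constraint propagation: each word is a
# variable, its lexicon entry is the domain, and binary context
# constraints on adjacent words prune inconsistent tags.

LEXICON = {                 # hypothetical toy lexicon
    "the": {"DET"},
    "can": {"MODAL", "NOUN", "VERB"},
    "rusts": {"NOUN", "VERB"},
}

# Context constraint: tag pairs permitted on adjacent words (toy rules).
ALLOWED = {("DET", "NOUN"), ("DET", "ADJ"), ("NOUN", "VERB"),
           ("MODAL", "VERB"), ("VERB", "NOUN")}

def propagate(sentence):
    domains = [set(LEXICON[w]) for w in sentence]
    changed = True
    while changed:          # arc consistency over adjacent word pairs
        changed = False
        for i in range(len(domains) - 1):
            left = {a for a in domains[i]
                    if any((a, b) in ALLOWED for b in domains[i + 1])}
            right = {b for b in domains[i + 1]
                     if any((a, b) in ALLOWED for a in domains[i])}
            if left != domains[i] or right != domains[i + 1]:
                domains[i], domains[i + 1] = left, right
                changed = True
    return domains

tags = propagate(["the", "can", "rusts"])
print(tags)  # ambiguity fully resolved: [{'DET'}, {'NOUN'}, {'VERB'}]
```

Here propagation alone disambiguates "can" to NOUN and "rusts" to VERB; in general, remaining ambiguity would be resolved by the statistical labelling step the abstract mentions.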
A portfolio-based analysis method for competition results
ABSTRACT. Competitions such as the MiniZinc Challenges or the SAT Competitions have been very useful for comparing the performance of different solving approaches and for advancing the state of the art in these fields. Traditional competition settings typically focus on producing a ranking of solvers based on their average performance across a wide range of benchmark problems and instances. While this is a sensible way to assess relative performance, such a ranking does not necessarily reflect the full potential of a solver, especially when we want to use a portfolio of solvers instead of a single one to solve a new problem. In this paper, I describe a portfolio-based analysis method that gives complementary insights into the performance of the solvers participating in a competition. The method is demonstrated on the results of the MiniZinc Challenges, and new insights gained from the portfolio viewpoint are presented.
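The contrast between a ranking view and a portfolio view can be sketched with two standard notions: the virtual best solver (VBS), which takes the best runtime per instance, and a solver's marginal contribution, i.e., how much the VBS degrades when that solver is removed. The runtimes below are made-up toy numbers, not competition data, and the scoring is a plain penalised-runtime sum rather than any official competition metric.

```python
# Sketch of a portfolio view of competition results (hypothetical
# runtimes in seconds; None = timeout, penalised at TIMEOUT).

TIMEOUT = 1200.0
runtimes = {                      # solver -> runtime on each instance
    "A": [10.0, None, 300.0, 5.0],
    "B": [200.0, 50.0, None, 8.0],
    "C": [15.0, 60.0, 250.0, None],
}

def vbs_score(solvers):
    """Sum over instances of the best (penalised) runtime in the set."""
    n = len(next(iter(runtimes.values())))
    total = 0.0
    for i in range(n):
        total += min(runtimes[s][i] if runtimes[s][i] is not None
                     else TIMEOUT for s in solvers)
    return total

full = vbs_score(runtimes)        # VBS of the whole portfolio
for s in runtimes:
    others = [t for t in runtimes if t != s]
    print(s, "marginal contribution:", vbs_score(others) - full)
```

On these toy numbers, solver C has the worst individual score of the three, yet by far the largest marginal contribution to the portfolio, which is precisely the kind of insight an average-performance ranking hides.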
ABSTRACT. We build on a recently proposed method for step-wise explaining solutions of constraint satisfaction problems.
An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used, and where the sequence explains all logical consequences of the problem.
The explanation-generation algorithms presented in that work rely heavily on calls for Minimal Unsatisfiable Subsets (MUSs) of a derived unsatisfiable formula, exploiting a one-to-one correspondence between so-called non-redundant explanations and MUSs.
The generation of an explanation sequence for a logic grid puzzle ranges from a few minutes to a few hours, which is considerably longer than a human would take.
Therefore, we build on these formal foundations and tackle the main points of improvement, namely how to efficiently generate explanations that are provably optimal (with respect to the given cost metric).
To address this issue, we develop
(1) an algorithm based on the hitting set duality for finding optimal constrained unsatisfiable subsets, where `constrainedness' reduces the multiple calls for (optimal) unsatisfiable subsets to a single call;
(2) a method for re-using relevant information over multiple calls to these algorithms; and
(3) methods exploiting domain-specific information to speed-up explanation sequence generation.
We experimentally validate our algorithms on a large number of CSP instances.
These experiments show that our algorithms outperform the MUS approach both in the quality of the explanations and in the time needed to generate them.
We show that this approach can be used to effectively find sequences of optimal explanation steps for a large set of constraint satisfaction problems.
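The hitting-set duality mentioned above can be illustrated in miniature: minimal unsatisfiable subsets are exactly the minimal hitting sets of the correction sets, so an optimal (smallest) unsatisfiable subset can be found by alternating a minimum-hitting-set step with a satisfiability check. The sketch below is a brute-force toy version under an assumed four-constraint CSP, not the paper's algorithm or cost metric.

```python
# Sketch of the hitting-set duality behind optimal unsatisfiable
# subsets: repeatedly take a minimum hitting set of the correction
# sets collected so far; if it is UNSAT, it is a smallest MUS,
# otherwise grow it to a maximal satisfiable subset and record the
# complement as a new correction set.
from itertools import combinations, product

DOMAIN = range(3)                       # x, y in {0, 1, 2}
CONSTRAINTS = [                         # toy CSP with conflicts
    lambda x, y: x < y,
    lambda x, y: y < x,
    lambda x, y: x == 0,
    lambda x, y: y == 0,
]

def sat(subset):
    """Brute-force satisfiability of a set of constraint indices."""
    return any(all(CONSTRAINTS[i](x, y) for i in subset)
               for x, y in product(DOMAIN, DOMAIN))

def min_hitting_set(sets):
    for k in range(len(CONSTRAINTS) + 1):
        for cand in combinations(range(len(CONSTRAINTS)), k):
            if all(set(cand) & s for s in sets):
                return set(cand)

def smallest_mus():
    correction_sets = []
    while True:
        h = min_hitting_set(correction_sets)
        if not sat(h):
            return h                    # smallest unsatisfiable subset
        grown = set(h)                  # grow to a maximal SAT subset
        for i in range(len(CONSTRAINTS)):
            if i not in grown and sat(grown | {i}):
                grown.add(i)
        correction_sets.append(set(range(len(CONSTRAINTS))) - grown)

print(smallest_mus())  # prints {0, 1}: constraints x<y and y<x conflict
```

The "constrainedness" idea in the abstract goes further than this sketch, folding the repeated unsatisfiable-subset calls into a single optimisation call; the loop above only shows the underlying duality.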
ABSTRACT. Dominance relations describe relations among solutions of Constraint Optimization Problems (COPs), where some solutions are known to be dominated by others with respect to satisfiability of the constraints and/or optimality of the objective. Dominance breaking is an effective technique to speed up Branch-and-Bound search for optimal solutions by exploiting dominance relations to prune away parts of the search space that contain only dominated solutions. In this paper, we review recent progress on dominance breaking and introduce a new line of work on automatic dominance breaking, a framework that automates the identification and exploitation of dominance relations in COPs, thereby generating effective dominance breaking nogoods. The main idea is to generate (a) only nogoods, as the building blocks for modelling arbitrary dominance breaking constraints, and (b) only those nogoods that can be identified efficiently and conveniently by constraint satisfaction. We also discuss possible directions for future work.
12:30-14:00 Lunch Break
Lunches will be held in Taub hall and in The Grand Water Research Institute.
A Constraint-Based Tool for Generating Benchmark Instances
ABSTRACT. Benchmarking is fundamental for assessing the relative performance of alternative solving approaches. For an informative benchmark we often need a sufficient quantity of instances with different levels of difficulty, and the ability to explore subsets of the instance space to detect performance discrimination among solvers. In this talk, I will present AutoIG, an automated tool for generating benchmark instances for constraint solvers. AutoIG supports generating two types of instances: graded instances (i.e., solvable at a certain difficulty level by a solver) and discriminating instances (i.e., favouring one solver over another). The usefulness of the tool in benchmarking is demonstrated via applications to five problems taken from the MiniZinc Challenges. Our experiments show that the large number of instances found by AutoIG can provide more detailed insights into the performance of the solvers than just a ranking. Cases where a solver is weak or even faulty can be detected, providing valuable information to solver developers. Moreover, discriminating instances can reveal parts of the instance space where a generally weak solver actually performs well relative to others, and could therefore be useful as part of an algorithm portfolio.