CP 2023: 29TH INTERNATIONAL CONFERENCE ON PRINCIPLES AND PRACTICE OF CONSTRAINT PROGRAMMING
PROGRAM FOR TUESDAY, AUGUST 29TH

09:00-10:00 Session 9

Invited Talk 2, Great Hall (#1022) (Chair: Christophe Lecoutre)

  • Thomas Schiex (Université Fédérale de Toulouse, ANITI, INRAE), Coupling CP with Deep Learning for Molecular Design and SARS-CoV-2 variants exploration
10:00-10:30 Coffee Break

Great Hall Lower Gallery (#1022K)

10:30-11:20 Session 10A

Modelling 2, Great Hall (#1022) (Chair: Steve Prestwich)

10:30
Fast Matrix Multiplication Without Tears: A Constraint Programming Approach

ABSTRACT. It is known that the multiplication of an N by M matrix with an M by P matrix can be performed using fewer multiplications than what the naive NMP approach suggests. The most famous instance of this is Strassen's algorithm for multiplying two 2 by 2 matrices in 7 instead of 8 multiplications. This gives rise to the constraint satisfaction problem of fast matrix multiplication, where a set of R < NMP multiplication terms must be chosen and combined such that they satisfy correctness constraints on the output matrix. Despite its highly combinatorial nature, this problem has not been exhaustively examined from that perspective, as evidenced for example by the recent deep reinforcement learning approach of AlphaTensor. In this work, we propose a simple yet novel Constraint Programming approach to find algorithms for fast matrix multiplication or provide proof of infeasibility otherwise. We propose a set of symmetry-breaking constraints and valid inequalities that are particularly helpful in proving infeasibility. On the feasible side, we find that exploiting solver performance variability in conjunction with a sparsity-based problem decomposition enables finding solutions for larger (feasible) instances of fast matrix multiplication. Our experimental results using CP Optimizer demonstrate that we can find fast matrix multiplication algorithms for matrices up to 3 by 3 in a short amount of time. Proving infeasibility is more difficult but benefits tremendously from symmetry breaking.
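
For readers unfamiliar with the scheme the abstract refers to, the following short Python check (ours, not part of the paper or its CP model) verifies that Strassen's seven multiplication terms reproduce the 2 by 2 matrix product; the CP approach described above searches for such sets of R < NMP terms automatically.

```python
# Illustrative check (not the paper's CP model): Strassen's 7-multiplication
# scheme for the 2x2 case, verified against the naive 8-multiplication product.
import random

def strassen_2x2(a, b):
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    # The seven multiplication terms (R = 7 < NMP = 8).
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombination satisfying the correctness constraints on the output matrix.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

def naive_2x2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for _ in range(1000):
    a = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    b = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    assert strassen_2x2(a, b) == naive_2x2(a, b)
print("Strassen's 7-term scheme matches the naive product.")
```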

11:00
Constraint programming models for depth-optimal qubit assignment and SWAP-based routing

ABSTRACT. Due to the limited connectivity of gate model quantum devices, logical quantum circuits must be compiled to target hardware before they can be executed. This process often involves inserting SWAP gates into the logical circuit, which usually increases the depth of the circuit, and is carried out by solving a so-called qubit assignment and routing problem. Recently, a number of integer linear programming (ILP) models have been proposed for solving the qubit assignment and routing problem to proven optimality. These models encode the objective function and constraints of the problem, and leverage automated solver technology to find hardware-compliant quantum circuits. In this work, we propose constraint programming (CP) models for this problem and compare their performance against ILP for circuit depth minimization on both linear and two-dimensional grid lattice device topologies, using a set of randomly generated instances. Our empirical analysis indicates that the proposed CP approaches outperform the ILP models in terms of both solution quality and runtime.
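
As a toy illustration of the routing task described above (not the paper's CP or ILP models), the sketch below greedily inserts SWAPs on a linear topology so that every two-qubit gate acts on adjacent physical qubits; the gate list and qubit count are made up for the example, and no attempt is made at depth optimality.

```python
# Toy SWAP-based routing on a linear topology q0 - q1 - ... - q(n-1).
# Each two-qubit gate must act on adjacent physical qubits; when it does not,
# SWAPs are greedily inserted to move the logical qubits together. This only
# illustrates the problem, not the depth-optimal models of the paper.

def route_linear(num_qubits, gates):
    phys = list(range(num_qubits))           # phys[p] = logical qubit at position p
    pos = {q: q for q in range(num_qubits)}  # pos[q]  = position of logical qubit q
    routed = []
    for (u, v) in gates:
        while abs(pos[u] - pos[v]) > 1:
            # Move v one step towards u and record the SWAP on physical wires.
            step = -1 if pos[v] > pos[u] else 1
            p, p2 = pos[v], pos[v] + step
            routed.append(("SWAP", p, p2))
            phys[p], phys[p2] = phys[p2], phys[p]
            pos[phys[p]], pos[phys[p2]] = p, p2
        routed.append(("CX", pos[u], pos[v]))
    return routed

# Example: CX(0, 2) on a 3-qubit line needs one SWAP before it becomes local.
for op in route_linear(3, [(0, 2), (0, 1)]):
    print(op)
```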

10:30-11:30 Session 10B

Multi-Objective Optimization, East Common Room (#1034) (Chair: Javier Larrosa)

10:30
Assembly Line Preliminary Design Optimization for an Aircraft Airframe

ABSTRACT. In the aeronautics industry, each aircraft family has a dedicated manufacturing system. This system is classically designed once the aircraft design is completely finished, which might lead to poor performance. To mitigate this issue, a strategy is to take the production system into account as early as possible in the aircraft design process. In this work, we define the Assembly Line Preliminary Design Problem, which consists in defining, for a given aircraft design, the best assembly line layout and the type and number of machines equipping each workstation. We propose a Constraint Programming encoding for this problem, along with an algorithm based on the epsilon-constraint method for exploring the set of Pareto solutions. We present experiments run on a set of real industrial data. The results show that the approach is promising and helps experts compare aircraft designs with each other.
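
The epsilon-constraint scheme mentioned in the abstract can be illustrated on a toy bi-objective problem; the sketch below (with made-up designs and values, not the paper's CP encoding) repeatedly minimises one objective while tightening a bound on the other, collecting the Pareto front.

```python
# Toy epsilon-constraint loop for a bi-objective minimisation problem.
# Here "solve" is brute force over a small finite design space; in the paper
# this role is played by a CP model of the assembly line. Data are made up.

designs = {  # design -> (cost, cycle_time), hypothetical values
    "A": (10, 9), "B": (12, 7), "C": (15, 5), "D": (15, 8), "E": (20, 4),
}

def solve(max_cycle_time):
    """Minimise cost subject to cycle_time <= max_cycle_time (brute force)."""
    feasible = [(c, t, d) for d, (c, t) in designs.items() if t <= max_cycle_time]
    return min(feasible) if feasible else None

def epsilon_constraint():
    pareto = []
    eps = max(t for _, t in designs.values())    # start with the loosest bound
    while True:
        best = solve(eps)
        if best is None:
            return pareto
        cost, cycle_time, design = best
        pareto.append((design, cost, cycle_time))
        eps = cycle_time - 1                      # tighten the second objective

print(epsilon_constraint())  # [('A', 10, 9), ('B', 12, 7), ('C', 15, 5), ('E', 20, 4)]
```

Design D never appears in the output because it is dominated by B, which is exactly the behaviour expected when enumerating Pareto solutions this way.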

11:00
Preprocessing in SAT-Based Multi-Objective Combinatorial Optimization
PRESENTER: Christoph Jabs

ABSTRACT. Building on Boolean satisfiability (SAT) and maximum satisfiability (MaxSAT) solving algorithms, several approaches to computing Pareto-optimal MaxSAT solutions under multiple objectives have recently been proposed. However, preprocessing in (Max)SAT-based multi-objective optimization has so far remained unexplored. Generalizing clause redundancy to the multi-objective setting, we establish provably correct liftings of MaxSAT preprocessing techniques for multi-objective MaxSAT in terms of computing Pareto-optimal solutions. We also establish preservation of Pareto-MCSes---the multi-objective lifting of minimal correction sets, which are tightly connected to optimal MaxSAT solutions---as a distinguishing feature between different redundancy notions in the multi-objective setting. Furthermore, we provide a first empirical evaluation of the impact of preprocessing on instance sizes and multi-objective MaxSAT solvers.

11:30-12:20 Session 11A

Applications 2, Great Hall (#1022) (Chair: Michael Trick)

11:30
Constraint Model for the Satellite Image Mosaic Selection Problem

ABSTRACT. Satellite imagery solutions are widely used to study and monitor different regions of the Earth. However, a single satellite image can cover only a limited area. In cases where a larger area of interest is studied, several images must be stitched together to create a single larger image, called a mosaic, that can cover the area. Today, with the increasing number of satellite images available for commercial use, selecting the images to build the mosaic is challenging, especially when the user wants to optimize one or more parameters, such as the total cost and the cloud coverage percentage of the mosaic. More precisely, the input of this problem is an area of interest, several satellite images intersecting the area, a list of requirements on the images and the mosaic, such as cloud coverage percentage and image resolution, and a list of objectives to optimize. We contribute a constraint formulation of this new problem, which we call the satellite image mosaic combination problem and which is a multi-objective extension of the polygon cover problem. We propose a dataset of realistic and challenging instances, where the images were captured by the satellite constellations SPOT, Pléiades and Pléiades Neo. We evaluate our model and show that it is efficient for large instances, with up to 200 images.

11:50
Optimization of Short-Term Underground Mine Planning using Constraint Programming
PRESENTER: Younes Aalian

ABSTRACT. Short-term underground mine planning problems are often difficult to solve due to the large number of activities and diverse machine types to be scheduled, as well as multiple operational constraints. This paper presents a Constraint Programming (CP) model to optimize short-term scheduling for an underground gold mine in Nunavut, Canada, taking into consideration operational constraints and the daily development and production targets of the mine plan. To evaluate the efficacy of the developed CP short-term planning model, we compare schedules generated by the CP model with the ones created manually by the mine planner for two real data sets. Results demonstrate that the CP model outperforms the manual approach by generating more efficient schedules with lower makespans.

11:30-12:20 Session 11B

Counting, East Common Room (#1034) (Chair: Kuldeep Meel)

11:30
Probabilistic Inference by Projected Weighted Model Counting on Horn Clauses

ABSTRACT. Weighted model counting, that is, counting the weighted number of satisfying assignments of a propositional formula, is an important tool in probabilistic reasoning. Recently, the use of projected weighted model counting (PWMC) has been proposed as an approach to formulate and answer probabilistic queries. In this work, we propose a new simplified modeling language based on PWMC in which probabilistic inference tasks are modeled using a conjunction of Horn clauses and a particular weighting scheme for the variables. We show that the major problems of inference for Bayesian Networks, network reachability and probabilistic logic programming can be modeled in this language. Subsequently, we propose a new, relatively simple solver that is specifically optimized to solve the PWMC problem for such formulas. Our experiments show that our new solver is competitive with state-of-the-art solvers on the major problems studied.
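
To make the counting task concrete, here is a small brute-force illustration (ours, independent of the paper's Horn-clause modeling language and specialized solver): weighted model counting sums the product of literal weights over satisfying assignments, and the projected variant sums the weights of assignments to the projection variables that can be extended to a model. The formula and weights are made up.

```python
# Brute-force (projected) weighted model counting on a tiny formula.
# Illustration only; the paper's Horn-clause language and solver are not shown.
from itertools import product

VARS = ["a", "b"]
WEIGHT = {("a", True): 0.3, ("a", False): 0.7,
          ("b", True): 0.6, ("b", False): 0.4}

def formula(m):               # F = a OR b
    return m["a"] or m["b"]

def assignments(variables):
    for values in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, values))

def wmc():
    total = 0.0
    for m in assignments(VARS):
        if formula(m):
            w = 1.0
            for v in VARS:
                w *= WEIGHT[(v, m[v])]
            total += w
    return total

def pwmc(projected):          # weights of projected assignments extendable to a model
    rest_vars = [v for v in VARS if v not in projected]
    total = 0.0
    for pm in assignments(projected):
        if any(formula({**pm, **rest}) for rest in assignments(rest_vars)):
            w = 1.0
            for v in projected:
                w *= WEIGHT[(v, pm[v])]
            total += w
    return total

print(wmc())        # 0.3*0.6 + 0.3*0.4 + 0.7*0.6 = 0.72
print(pwmc(["a"]))  # both a=True and a=False extend to a model: 0.3 + 0.7 = 1.0
```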

12:00
Enumerative Level-2 Solution Counting for Quantified Boolean Formulas

ABSTRACT. We lift the problem of enumeratively counting solutions to quantified Boolean formulas (QBFs) at the second level. In contrast to the well-explored model counting problem for SAT (#SAT), where models are simply assignments to the Boolean variables of a formula, we are now dealing with tree (counter-)models reflecting the dependencies between the variables of the first and the second quantifier block. It turns out that enumerative counting on the second level only counts disjoint models. We present the—to the best of our knowledge—first approach of counting tree (counter-)models together with a counting tool that exploits state-of-the-art QBF technology. We provide several kinds of benchmarks for testing our implementation and illustrate in several case studies that solution counting provides valuable insights into QBF encodings.

12:20-13:50 Lunch Break

Great Hall Lower Gallery (#1022K)

13:50-14:50 Session 12A

Tutorial 2, Great Hall (#1022) (Chair: Maria Garcia de la Banda)

13:50-14:50 Session 12B

Solvers, East Common Room (#1034) (Chair: Laurent Perron)

13:50
Improving Conflict Analysis in MIP Solvers by Pseudo-Boolean Reasoning

ABSTRACT. Conflict analysis has been successfully generalized from Boolean satisfiability (SAT) solving to mixed integer programming (MIP) solvers, but although MIP solvers operate with general linear inequalities, conflict analysis in MIP has been limited to reasoning with the more restricted class of clausal constraints. This is in contrast to how conflict analysis is performed in so-called pseudo-Boolean solving, where solvers can reason directly with 0-1 integer linear inequalities rather than with clausal constraints extracted from such inequalities. In this work, we investigate how pseudo-Boolean conflict analysis can be integrated in MIP solving, focusing on 0-1 integer linear programs (0-1 ILPs). Phrased in MIP terminology, conflict analysis can be understood as a sequence of linear combinations and cuts. We leverage this perspective to design a new conflict analysis algorithm based on mixed integer rounding (MIR) cuts, which theoretically dominates the state-of-the-art division-based method in pseudo-Boolean solving. We also report results from a first proof-of-concept implementation of different pseudo-Boolean conflict analysis methods in the open-source MIP solver SCIP. When evaluated on a large and diverse set of 0-1 ILP instances from MIPLIB 2017, our new MIR-based conflict analysis outperforms both previous pseudo-Boolean methods and the clause-based method used in MIP. Our conclusion is that pseudo-Boolean conflict analysis in MIP is a promising research direction that merits further study, and that it might also make sense to investigate the use of such conflict analysis to generate stronger no-goods in constraint programming.
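
The division-based pseudo-Boolean reasoning that the MIR-based analysis is compared against can be illustrated on a small 0-1 inequality (example ours, not from the paper): dividing a normalized constraint by a positive constant and rounding coefficients and degree up preserves all 0-1 solutions.

```latex
% Division rule on a 0-1 linear inequality: divide by c = 2 and round up.
% Every 0-1 solution of the premise also satisfies the conclusion.
\[
  2x + 2y + z \ge 3
  \quad\xrightarrow{\;\div 2\;}\quad
  \Big\lceil \tfrac{2}{2} \Big\rceil x
  + \Big\lceil \tfrac{2}{2} \Big\rceil y
  + \Big\lceil \tfrac{1}{2} \Big\rceil z
  \ge \Big\lceil \tfrac{3}{2} \Big\rceil,
  \quad\text{i.e.,}\quad x + y + z \ge 2 .
\]
```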

14:20
Exploiting Configurations of MaxSAT Solvers

ABSTRACT. In this paper, we describe how to effectively exploit alternative parameter configurations of a MaxSAT solver. We describe how these configurations can be computed in the context of MaxSAT. In particular, we show experimentally how to easily combine configurations of a non-competitive solver to obtain a better solving approach.

14:50-15:30 Session 13A

Modelling 3, Great Hall (#1022) (Chair: Michael Trick)

14:50
A New Approach to Generating 2 x n Partially Spatially Balanced Latin Rectangles

ABSTRACT. Partially spatially balanced Latin rectangles are combinatorial structures that are important for experimental design. However, it is computationally challenging to find even small optimally balanced rectangles; previous work has not been able to prove optimality for any rectangle with a dimension larger than 11. Here we introduce a graph-based encoding for the 2 x n case based on finding the minimum-cost clique of size n. This encoding inspires a new mixed-integer programming (MIP) formulation, which finds exact solutions for the 2 x 12 and 2 x 13 cases and provides improved bounds up to n = 20. Compared to three other methods, the new formulation establishes the best lower bound in all cases and the best upper bound in five out of seven cases.

15:10
Proven optimally-balanced Latin rectangles with SAT
PRESENTER: Stefan Szeider

ABSTRACT. Motivated by applications from agronomic field experiments, Díaz, Le Bras, and Gomes [CPAIOR 2015] introduced Partially Balanced Latin Rectangles as a generalization of Spatially Balanced Latin Squares. They observed that the generation of Latin rectangles that are optimally balanced is a highly challenging computational problem. They computed, utilizing CSP and MIP encodings, Latin rectangles up to 12x12, some optimally balanced, some suboptimally balanced.

In this paper, we develop a SAT encoding for generating balanced Latin rectangles and experimentally compare encoding variants. Our results indicate that SAT encodings perform competitively with the MIP encoding, and in some cases better. This finding is significant, as many arithmetic constraints are involved. The SAT approach offers the advantage that we can certify that Latin rectangles are optimally balanced through DRAT proofs, which can be verified independently.

14:50-15:20 Session 13B

Boolean Networks, East Common Room (#1034) (Chair: Roland Yap)

14:50
Efficient enumeration of fixed points in complex Boolean networks using answer set programming

ABSTRACT. Boolean Networks (BNs) are an efficient modeling formalism with applications in various research fields such as mathematics, computer science, and, more recently, systems biology. One central problem in BN research is to enumerate all fixed points, which has been proven crucial for the analysis and control of biological systems. Indeed, in that field, BNs originated from the pioneering work of R. Thomas on gene regulation and from the start were characterized by their asymptotic behavior: complex attractors and fixed points. Since the former are notably more difficult to compute exactly, and specific to certain biological systems, the computation of stable states (fixed points) has been the standard way to analyze those BNs for years. However, with the increase in model size and complexity of Boolean update functions, the existing methods for this problem show their limitations. To our knowledge, the most efficient state-of-the-art methods for the fixed point enumeration problem rely on Answer Set Programming (ASP). Motivated by these facts, in this work we propose two new efficient ASP-based methods to solve this problem. We evaluate them on both real-world and pseudo-random models, showing that they vastly outperform four state-of-the-art methods and can handle very large and complex models.
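
For readers outside the BN community, the following brute-force sketch (illustrative only; the network below is made up, and the paper's ASP-based methods scale far beyond this) shows what is being enumerated: a fixed point is a state that every node's Boolean update function maps back to itself.

```python
# Brute-force enumeration of the fixed points of a tiny Boolean network.
# The 3-node network is invented for illustration; real models are far larger.
from itertools import product

# Update functions f_v(state) for each node v (state is a dict node -> 0/1).
UPDATE = {
    "x": lambda s: s["x"] and not s["z"],
    "y": lambda s: s["x"] or s["y"],
    "z": lambda s: s["y"] and s["z"],
}

def fixed_points(update):
    nodes = sorted(update)
    for values in product([0, 1], repeat=len(nodes)):
        state = dict(zip(nodes, values))
        # A fixed point satisfies f_v(state) == state[v] for every node v.
        if all(int(bool(f(state))) == state[v] for v, f in update.items()):
            yield state

for fp in fixed_points(UPDATE):
    print(fp)
```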

15:30-16:00 Coffee Break

Great Hall Lower Gallery (#1022K)

16:00-16:50 Session 14A

Machine Learning 1, Great Hall (#1022) (Chair: Arnaud Lallouet)

16:00
Guided Bottom-Up Interactive Constraint Acquisition

ABSTRACT. Constraint Acquisition (CA) systems can be used to assist in the modeling of constraint satisfaction problems. In (inter)active CA, the system is given a set of candidate constraints and posts queries to the user with the goal of finding the right constraints among the candidates. Current interactive CA algorithms suffer from at least two major bottlenecks. First, in order to converge, they require a large number of queries to be asked to the user. Second, they cannot handle large sets of candidate constraints, since these lead to large waiting times for the user. For this reason, the user must have fairly precise knowledge about what constraints the system should consider. In this paper, we alleviate these bottlenecks by presenting two novel methods that improve the efficiency of CA. First, we introduce a bottom-up approach named GrowAcq, that reduces the maximum waiting time for the user and allows the system to handle much larger sets of candidate constraints. It also reduces the total number of queries for problems in which the target constraint network is not sparse. Second, we propose a probability-based method to guide query generation and show that it can significantly reduce the number of queries required to converge. We also propose a new technique that allows the use of openly accessible CP solvers in query generation, removing the dependency of existing methods on less well-maintained custom solvers that are not publicly available. Experimental results show that our proposed methods outperform state-of-the-art CA methods, reducing the number of queries by up to 60%. Our methods work well even in cases where the set of candidate constraints is 50 times larger than the ones commonly used in the literature.

16:30
Predict-then-Optimise Strategies for Water Flow Control

ABSTRACT. A pressure sewer system is a network of pump stations used to collect and manage sewage from individual properties that cannot be directly connected to the gravity-driven sewer network due to the topography of the terrain. We consider a common scenario for a pressure sewer system, where individual sites collect sewage in a local tank and then pump it into the gravity-fed sewage network. Standard control systems simply wait until the local tank reaches (near) capacity and then begin pumping out. Unfortunately, such simple control usually leads to peaks in sewage flow in the morning and evening, corresponding to peak water usage in the properties. High peak flows require either large buffer pools or overflow systems, or larger-capacity sewage treatment plants. In this paper we investigate combining prediction and optimisation to better manage peak sewage flows. We use simple prediction methods to generate realistic possible future scenarios, and then develop optimisation models to generate pumping plans that try to smooth out flows into the network. The solutions of these models yield a pumping policy specialized to individual properties which, overall, is able to substantially reduce peak flows.

16:00-16:50 Session 14B

Applications 3, East Common Room (#1034) (Chair: Mats Carlsson)

16:00
Optimization models for pickup and delivery problems with reconfigurable capacities

ABSTRACT. When a transportation service accommodates both people and goods, operators sometimes opt for vehicles that can be dynamically reconfigured for different demands. Motivated by air service in remote communities in Canada's north, we define a pickup-and-delivery problem in which aircraft can add or remove seats during a multi-stop trip to accommodate varying demands. Given the demand for people and cargo as well as a seat inventory at each location, the problem consists in finding a tour that picks up and delivers all demand while potentially reconfiguring the vehicle capacity at each location by adding or removing seats. We develop three different modeling approaches using constraint programming, mixed integer programming, and domain-independent dynamic programming. Our numerical experiments indicate that domain-independent dynamic programming is able to substantially outperform the other technologies on both solution quality and run-time on a set of randomly generated instances.

16:30
Constraint Programming to Improve Hub Utilization in Autonomous Transfer Hub Networks

ABSTRACT. The Autonomous Transfer Hub Network (ATHN) is one of the most promising ways to adapt self-driving trucks for the freight industry. These networks use autonomous trucks for the middle mile, while human drivers perform the first and last miles. This paper extends previous work on optimizing ATHN operations by including transfer hub capacities, which are crucial for labor planning and policy design. It presents a Constraint Programming (CP) model that shifts an initial schedule produced by a Mixed Integer Program to minimize the hub capacities. The scalability of the CP model is demonstrated on a case study at the scale of the United States, based on data provided by Ryder System, Inc. The CP model efficiently finds optimal solutions and lowers the necessary total hub capacity by 42%, saving $15.2M in annual labor costs. The results also show that the reduced capacity is close to a theoretical (optimistic) lower bound.

20:00-23:00

CP 2023 Senior Program Committee Dinner