
08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 44A: ML
Location: Taub 7
Learning MAX-SAT Models from Examples using Genetic Algorithms and Knowledge Compilation
PRESENTER: Senne Berden

ABSTRACT. Many real-world problems can be effectively solved by means of combinatorial optimization. However, appropriate models to give to a solver are not always available, and sometimes must be learned from historical data. Although some research has been done in this area, the task of learning (weighted partial) MAX-SAT models has not received much attention thus far, even though such models can be used in many real-world applications. Furthermore, most existing work is limited to learning models from non-contextual data, where instances are labeled as solutions and non-solutions, but without any specification of the contexts in which those labels apply. A recent approach named HASSLE-SLS has addressed these limitations: it can jointly learn hard constraints and weighted soft constraints from labeled contextual examples. However, it is hindered by long runtimes, as evaluating even a single candidate MAX-SAT model requires solving as many models as there are contexts in the training data, which quickly becomes highly expensive when the size of the model increases. In this work, we address these runtime issues. To this end, we make two contributions. First, we propose a faster model evaluation procedure that makes use of knowledge compilation. Second, we propose a genetic algorithm named HASSLE-GEN that decreases the number of evaluations needed to find good models. We experimentally show that both contributions improve on the state of the art by speeding up learning, which in turn allows higher-quality MAX-SAT models to be found within a given learning time budget.
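
To make the learning target concrete, here is a minimal sketch (not HASSLE-GEN itself) of what evaluating a candidate weighted partial MAX-SAT model against a labeled contextual example involves. The clause representation and the brute-force optimum computation are illustrative assumptions; the abstract's point is precisely that this per-context optimization is what makes naive evaluation expensive.

```python
from itertools import product

def satisfies(clause, assignment):
    # clause: literals as ints (+v means var v true, -v false), vars 1-based
    return any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)

def soft_value(model, assignment):
    # total weight of satisfied soft constraints
    return sum(w for w, cl in model["soft"] if satisfies(cl, assignment))

def is_solution(model, n_vars, assignment, context):
    """An example is a solution in a context if it satisfies all hard
    constraints plus the context, and is optimal w.r.t. the soft weights
    among all assignments feasible in that context (brute force here)."""
    if not all(satisfies(c, assignment) for c in model["hard"] + context):
        return False
    best = max(
        (soft_value(model, a)
         for a in product([False, True], repeat=n_vars)
         if all(satisfies(c, a) for c in model["hard"] + context)),
        default=None,
    )
    return soft_value(model, assignment) == best
```

Note how the same assignment can be a non-solution in one context yet a solution in another, which is what "contextual examples" refers to.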

Selecting SAT Encodings for Pseudo-Boolean and Linear Integer Constraints

ABSTRACT. Many constraint satisfaction and optimisation problems can be solved effectively by encoding them as instances of the Boolean Satisfiability problem (SAT). However, even the simplest types of constraints have many encodings in the literature with widely varying performance, and the problem of selecting suitable encodings for a given problem instance is not trivial. We explore the problem of selecting encodings for pseudo-Boolean and linear constraints using a supervised machine learning approach. We show that it is possible to select encodings effectively using a standard set of features for constraint problems; however, we obtain better performance with a new set of features specifically designed for pseudo-Boolean and linear constraints. In fact, we achieve good results when selecting encodings for unseen problem classes. Our results compare favourably to AutoFolio when using the same feature set. We discuss the relative importance of instance features to the task of selecting the best encodings, and compare several variations of the machine learning method.
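
As a toy illustration of the selection task (not the paper's actual learned models, feature set, or encoding names, all of which are invented below), a nearest-neighbour sketch that maps instance features to an encoding label:

```python
import math

# Hypothetical training data: (instance features, best-performing encoding).
# Feature semantics and encoding labels are invented for illustration.
TRAIN = [
    ((0.9, 12.0), "sequential"),   # e.g. density, max coefficient
    ((0.2, 800.0), "totalizer"),
    ((0.1, 5000.0), "bdd"),
]

def select_encoding(features, train=TRAIN):
    """Pick the encoding that performed best on the nearest training
    instance (Euclidean distance in feature space) -- a 1-nearest-neighbour
    stand-in for the supervised models used in the paper."""
    return min(train, key=lambda fx: math.dist(features, fx[0]))[1]
```

In practice features would be normalised and the model trained on measured solver runtimes per encoding, but the overall shape of the pipeline is the same: featurise the instance, then predict an encoding.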

Improved Sample Complexity Bounds for Branch-and-Cut
PRESENTER: Siddharth Prasad

ABSTRACT. The branch-and-cut algorithm for integer programming has a wide variety of tunable parameters that have a huge impact on its performance, but which are challenging to tune by hand. An increasingly popular approach is to use machine learning to configure these parameters based on a training set of integer programs from the application domain. We bound how large the training set should be to ensure that for any configuration, its average performance over the training set is close to its expected future performance. Our guarantees apply to parameters that control the most important aspects of branch-and-cut: node selection, branching constraint selection, and cut selection, and are sharper and more general than those from prior research.

09:00-10:30 Session 44B: Theory
Location: Taub 4
CSP Beyond Tractable Constraint Languages
PRESENTER: Stefan Szeider

ABSTRACT. The constraint satisfaction problem (CSP) is among the most studied computational problems. While NP-hard, many tractable subproblems have been identified (Bulatov 2017, Zhuk 2017). Backdoors, introduced by Williams, Gomes, and Selman (2003), gradually extend such a tractable class to all CSP instances of bounded distance to the class. Backdoor size provides a natural but rather crude distance measure between a CSP instance and a tractable class. Backdoor depth, introduced by Mählmann, Siebertz, and Vigny (2021) for SAT, is a more refined distance measure, which admits the parallel utilization of different backdoor variables. Bounded backdoor size implies bounded backdoor depth, but there are instances of constant backdoor depth and arbitrarily large backdoor size. Dreier, Ordyniak, and Szeider (2022) provided fixed-parameter algorithms for finding backdoors of small depth into Horn and 2CNF.

In this paper, we consider backdoor depth for CSP. We consider backdoors w.r.t. tractable subproblems $C_\Gamma$ of the CSP defined by a constraint language $\Gamma$, i.e., where all the constraints use relations from the language $\Gamma$. Building upon Dreier et al.’s game-theoretic approach and their notion of separator obstructions, we show that for any finite, tractable, semi-conservative constraint language $\Gamma$, the CSP is fixed-parameter tractable parameterized by the backdoor depth into $\Gamma$ plus the domain size.

With backdoors of low depth, we reach classes of instances that require backdoors of arbitrarily large size. Hence, our results strictly generalize several known results for CSP that are based on backdoor size.

On Redundancy in Constraint Satisfaction Problems

ABSTRACT. A constraint language G has non-redundancy f(n) if every instance of CSP(G) with n variables contains at most f(n) non-redundant constraints. If G has maximum arity r then it has non-redundancy O(n^r), but there are notable examples for which this upper bound is far from the best possible. In general, the non-redundancy of constraint languages is poorly understood and little is known beyond the trivial bounds Omega(n) and O(n^r).

In this paper, we introduce an elementary algebraic framework dedicated to the analysis of the non-redundancy of constraint languages. This framework relates redundancy-preserving reductions between constraint languages to closure operators known as pattern partial polymorphisms, which can be interpreted as generic mechanisms to generate redundant constraints in CSP instances. We illustrate the power of this framework by deriving a simple characterisation of languages of arity r having non-redundancy Theta(n^r).
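
The underlying notion of a redundant constraint can be checked directly on small instances. The brute-force sketch below (illustrative only, not the paper's algebraic framework) marks a constraint redundant when dropping it leaves the solution set unchanged:

```python
from itertools import product

def solutions(constraints, n_vars, domain):
    # constraint: (scope = tuple of variable indices, relation = set of allowed tuples)
    return [a for a in product(domain, repeat=n_vars)
            if all(tuple(a[v] for v in scope) in rel
                   for scope, rel in constraints)]

def redundant(constraints, i, n_vars, domain):
    """Constraint i is redundant iff removing it does not change the
    solution set -- so non-redundancy f(n) bounds how many constraints
    an instance can have with none of them removable."""
    rest = constraints[:i] + constraints[i + 1:]
    return solutions(rest, n_vars, domain) == solutions(constraints, n_vars, domain)
```

For example, with three equality constraints x0=x1, x1=x2, x0=x2, any one of them is implied by the other two, so the instance has redundant constraints.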

Fixed-Template Promise Model Checking Problems
PRESENTER: Kristina Asimi

ABSTRACT. The fixed-template constraint satisfaction problem (CSP) can be seen as the problem of deciding whether a given primitive positive first-order sentence is true in a fixed structure (also called model). We study a class of problems that generalizes the CSP simultaneously in two directions: we fix a set L of quantifiers and Boolean connectives, and we specify two versions of each constraint, one strong and one weak. Given a sentence which only uses symbols from L, the task is to distinguish whether the sentence is true in the strong sense, or it is false even in the weak sense. We classify the computational complexity of these problems for the existential positive equality-free fragment of first-order logic, i.e., L={∃,∧,∨}, and we prove some upper and lower bounds for the positive equality-free fragment, L={∃,∀,∧,∨}. The partial results are sufficient to cover, e.g., all extensions of the latter fragment.

10:30-11:00 Coffee Break
11:10-12:10 Session 48: Keynote
Information Structures for Privacy and Fairness

ABSTRACT. The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy and fairness. In this talk, I will discuss some frameworks to understand and mitigate these issues, focusing on iterative methods coming from information theory and statistics. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool to recover utility is the Iterative Bayesian Update (IBU), an instance of the famous Expectation-Maximization method from statistics. I will show that the IBU, combined with the metric version of DP, outperforms the state of the art, which is based on algebraic methods combined with the Randomized Response mechanism, widely adopted by the Big Tech industry (Google, Apple, Amazon, ...). Furthermore, I will discuss a surprising duality between the IBU and one of the methods used to enhance metric DP, namely the Blahut-Arimoto algorithm from Rate-Distortion Theory. Finally, I will discuss the issue of biased decisions in machine learning, and will show that the IBU can be applied also in this domain to ensure a fairer treatment of disadvantaged groups.
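
The Iterative Bayesian Update mentioned in the talk admits a compact sketch. The channel representation and iteration count below are illustrative assumptions (and the sketch assumes the denominators stay positive, which holds when the prior is strictly positive):

```python
def iterative_bayesian_update(noisy_counts, channel, iters=200):
    """IBU: estimate the original distribution theta over secrets from
    observations obfuscated by a known noise channel C[x][y] = P(y|x).
    Standard EM-style fixed-point iteration:
        theta'_x = theta_x * sum_y C[x][y] * q_y / sum_z theta_z * C[z][y]
    where q is the empirical distribution of the noisy observations."""
    n_x, n_y = len(channel), len(channel[0])
    total = sum(noisy_counts)
    q = [noisy_counts[y] / total for y in range(n_y)]
    theta = [1.0 / n_x] * n_x  # uniform prior
    for _ in range(iters):
        theta = [
            theta[x] * sum(
                channel[x][y] * q[y] /
                sum(theta[z] * channel[z][y] for z in range(n_x))
                for y in range(n_y))
            for x in range(n_x)]
    return theta
```

With a noiseless (identity) channel the estimate reduces to the observed frequencies; with a lossy channel the iteration gradually "inverts" the noise.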


Brief Bio: Catuscia Palamidessi is Director of Research at INRIA Saclay (since 2002), where she leads the team COMETE. She was Full Professor at the University of Genova, Italy (1994-1997) and Penn State University, USA (1998-2002). Palamidessi's research interests include Privacy, Machine Learning, Fairness, Secure Information Flow, Formal Methods, and Concurrency. In 2019 she obtained an ERC Advanced Grant to conduct research on Privacy and Machine Learning. She has been PC chair of various conferences, including LICS and ICALP, and a PC member of more than 120 international conferences. She is on the editorial boards of several journals, including IEEE Transactions on Dependable and Secure Computing, Mathematical Structures in Computer Science, TheoretiCS, the Journal of Logical and Algebraic Methods in Programming, and Acta Informatica. She serves on the executive committees of ACM SIGLOG, CONCUR, and CSL.


12:30-14:00 Lunch Break

Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).

14:00-15:30 Session 50A: Tutorial
Location: Taub 7
Solving with Provably Correct Results: Beyond Satisfiability, and Towards Constraint Programming
PRESENTER: Ciaran McCreesh

ABSTRACT. Nowadays, Boolean Satisfiability (SAT) solvers aren't allowed to just assert that an instance is unsatisfiable: they're expected to also produce an independently verifiable proof that backs up this claim. This so-called proof logging has done wonders for solver reliability, and has also helped with social acceptability of computer-generated mathematical results. What if we could do the same for constraint programming, producing an auditable solving process where people whose lives or livelihoods are affected by a computer's decision would no longer have to resort to hoping that the system is free of bugs?

We will start this tutorial by explaining what a proof really is, and what it means for an algorithm to certify the correctness of its answers by using proof logging. We will give a brief overview of the (extended) resolution and DRAT proof systems used in SAT proof logging. Then we will look at what is needed to bring proof logging to a broader range of solving algorithms, starting with some subgraph-finding algorithms, and moving towards a full CP solver with multiple global constraints and even some reformulation. We will show, by example, what you need to do to use this technology in your own algorithms, and how this relates to the underlying proof methods. We'll finish by discussing how proof logging can also support advanced solving techniques such as symmetry breaking. Surprisingly, all of this can be done in a simple proof format known as "cutting planes with strengthening" that does not know anything about non-binary variables, global constraints, graphs, groups, or symmetries.
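
To give a flavour of what checking a proof step involves, the sketch below implements a reverse-unit-propagation (RUP) check, the basic test behind DRAT-style verification mentioned above (a simplified stand-in, not the cutting-planes-with-strengthening format the tutorial builds up to):

```python
def unit_propagate(clauses):
    """Exhaustive unit propagation. Returns None on conflict, else the
    set of literals forced true. Literals are nonzero ints, -l = not l."""
    assigned = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            # drop literals already falsified
            unassigned = [l for l in clause if -l not in assigned]
            if any(l in assigned for l in unassigned):
                continue            # clause already satisfied
            if not unassigned:
                return None         # every literal falsified: conflict
            if len(unassigned) == 1 and unassigned[0] not in assigned:
                assigned.add(unassigned[0])  # unit clause: force the literal
                changed = True
    return assigned

def rup_check(formula, learned_clause):
    """A learned clause is a RUP consequence of the formula if asserting
    its negation (each literal flipped, as units) makes unit propagation
    reach a conflict."""
    negation = [[-lit] for lit in learned_clause]
    return unit_propagate(formula + negation) is None
```

A verifier replays each proof line with such a check, so trusting the answer only requires trusting this small checker rather than the whole solver.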


14:00-15:30 Session 50B: Applications
Location: Taub 4
Trajectory Optimization for Safe Navigation in Maritime Traffic Using Historical Data

ABSTRACT. Increasing maritime trade often results in congestion in busy ports, thereby necessitating planning methods to avoid close quarter situations among vessels for safer navigation. Rapid digitization and automation of port operations and vessel navigation provide unique opportunities for significantly improving navigation safety. Our key contributions are as follows. First, given a set of future candidate trajectories for vessels involved in a traffic hotspot zone, we develop a multiagent trajectory optimization method to choose trajectories that result in the best overall close quarter risk reduction. Our novel MILP-based trajectory optimization method is more than an order of magnitude faster than a standard MILP for this problem, and runs in near real-time. Second, although automation has improved in maritime traffic, current vessel traffic systems (in our case study of a busy Asian port) predict only a single future trajectory of a vessel, based on linear extrapolation. Therefore, using historical data, we learn a generative model that predicts multiple possible future trajectories of each vessel in a given traffic hotspot area, reflecting past movement patterns of vessels, and integrate it with our trajectory optimizer. Third, we validate both our trajectory optimization and generative model extensively using a real-world maritime traffic dataset containing 1.7 million trajectories over 1.5 years from one of the busiest ports in the world, demonstrating effective risk reduction.
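
The core selection problem can be stated very simply: pick one candidate trajectory per vessel so that the summed pairwise risk is minimal. The brute-force sketch below is a stand-in for the paper's MILP (vessel names, candidate labels, and the risk function are invented for illustration):

```python
from itertools import product

def choose_trajectories(candidates, risk):
    """candidates: {vessel: [candidate trajectories]};
    risk(a, b): close-quarter risk between two chosen trajectories.
    Enumerates all combinations -- the MILP in the paper solves the same
    selection at scale without enumeration."""
    vessels = list(candidates)
    best, best_choice = float("inf"), None
    for choice in product(*(candidates[v] for v in vessels)):
        total = sum(risk(a, b) for i, a in enumerate(choice)
                    for b in choice[i + 1:])
        if total < best:
            best, best_choice = total, dict(zip(vessels, choice))
    return best_choice
```

The generative model's role is to supply the `candidates` lists; the optimizer then picks the jointly safest combination.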

A Constraint Programming Approach to Ship Refit Project Scheduling

ABSTRACT. Ship refit projects require ongoing plan management to adapt to arising work and disruptions. Planners must sequence work activities in the best possible order to complete the project in the shortest time, or within a defined period while minimizing overtime costs. Activity scheduling must consider milestones, resource availability constraints, and precedence relations. We propose a constraint programming approach for detailed ship refit planning at two levels of granularity: daily and hourly schedules. The problem was modeled using the cumulative global constraint, and the Solution-Based Phase Saving heuristic was used to speed up the search, thus achieving the industrialization goals. Based on the evaluation of seven realistic instances over three objectives, the heuristic strategy found better solutions significantly faster than a baseline search strategy. The method was integrated into Refit Optimizer, a cloud-based prototype solution that can import projects from Primavera P6 and optimize plans.
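
The cumulative global constraint at the heart of the model has a simple semantics: at every time point, the summed resource demand of running activities must stay within capacity. A minimal feasibility check (a sketch of the constraint's meaning, not of the solver's filtering algorithm):

```python
def cumulative_ok(activities, capacity):
    """activities: list of (start, duration, demand) tuples.
    The resource profile only changes at activity start/end times,
    so checking those event points suffices."""
    events = sorted({t for s, d, _ in activities for t in (s, s + d)})
    for t in events:
        load = sum(dem for s, d, dem in activities if s <= t < s + d)
        if load > capacity:
            return False
    return True
```

A CP solver does not merely check this condition; its filtering algorithms prune start-time domains so that violating schedules are never enumerated.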

DUELMIPs: Jointly Optimizing Software Defined Network Functionality and Security
PRESENTER: Timothy Curry

ABSTRACT. Software defined networks (SDNs) define a programmable network fabric that can be reconfigured to respect global network properties. Securing against adversaries who try to exploit the network is an objective that conflicts with providing functionality. This paper proposes a two-stage mixed-integer programming framework. The first stage automates routing decisions for the flows to be carried by the network while maximizing readability and ease of use for network engineers. The second stage is meant to quickly respond to security breaches by automatically deciding on network counter-measures to block the detected adversary. Both stages are computationally challenging, and the security stage leverages large neighborhood search to quickly deliver effective response strategies. The approach is evaluated on synthetic networks of various sizes and shown to be effective for both its functional and security objectives.

15:30-16:00 Coffee Break
16:00-17:30 Session 54A: OR
Location: Taub 7
Parallel Hybrid Best-First Search
PRESENTER: Simon de Givry

ABSTRACT. While processor frequency has stagnated for two decades, the number of available cores in servers and clusters is still growing, offering the opportunity for significant speed-ups in combinatorial optimization. Parallelization of exact methods remains a difficult challenge. We revisit the concept of parallel branch-and-bound in the framework of weighted constraint satisfaction problems. We show how to adapt an anytime hybrid best-first search algorithm to a master/worker protocol. The resulting parallel algorithm achieves good load-balancing without introducing new parameters to tune, unlike the embarrassingly parallel search (EPS) approach. It also has a small overhead due to its light communication messages. We performed an experimental evaluation on several benchmarks, comparing our parallel algorithm to its sequential version. We observed linear speed-ups in some cases. Our approach compared favourably to the EPS approach and also to a state-of-the-art parallel exact integer programming solver.
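
A toy master/worker branch-and-bound loop, in the spirit of (but much simpler than) the paper's protocol: the master keeps the pool of open nodes and the incumbent, and workers evaluate and branch nodes in parallel. The `branch`, `lower_bound`, and `evaluate` callbacks are problem-specific stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_branch_and_bound(root, branch, lower_bound, evaluate, workers=4):
    """Minimisation sketch: workers evaluate nodes and produce children;
    the master updates the incumbent and prunes children whose lower
    bound cannot improve on it."""
    best = float("inf")
    open_nodes = [root]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while open_nodes:
            batch, open_nodes = open_nodes, []
            results = pool.map(lambda n: (evaluate(n), branch(n)), batch)
            for node, (value, children) in zip(batch, results):
                best = min(best, value)
                open_nodes += [c for c in children if lower_bound(c) < best]
    return best
```

Real implementations (including the paper's) distribute nodes asynchronously and communicate bounds incrementally; the batch-synchronous loop above only illustrates the division of labour.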

Sequence Variables for Routing Problems

ABSTRACT. Constraint Programming (CP) is one of the most flexible approaches for modeling and solving vehicle routing problems (VRPs). This paper proposes the sequence variable domain, which is inspired by the insertion graph and the subset-bound domain for set variables. This domain representation, which targets VRP applications, allows for an efficient insertion-based search on a partial tour and the implementation of simple yet efficient filtering algorithms for constraints that enforce time windows on the visits and capacities on the vehicles. Experimental results demonstrate the efficiency and flexibility of this CP domain for solving some hard VRP variants, including the Dial-a-Ride Problem, the Patient Transportation Problem, and the asymmetric TSP with time windows.
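
The basic move behind insertion-based search on a partial tour is cheapest insertion: place the next visit at the position that minimises the added travel distance. A minimal sketch (the distance-matrix representation is an illustrative assumption, and real sequence-variable search interleaves such moves with filtering):

```python
def cheapest_insertion(tour, visit, dist):
    """Insert `visit` into the cyclic partial tour at the position with
    the smallest detour dist[a][visit] + dist[visit][b] - dist[a][b]."""
    best_pos, best_cost = None, float("inf")
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]
        delta = dist[a][visit] + dist[visit][b] - dist[a][b]
        if delta < best_cost:
            best_pos, best_cost = i + 1, delta
    return tour[:best_pos] + [visit] + tour[best_pos:], best_cost
```

The sequence variable domain makes such insertions first-class: filtering algorithms can then remove infeasible insertion points (e.g. those violating time windows) before the search tries them.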

From Cliques to Colorings and Back Again

ABSTRACT. We present an exact algorithm for graph coloring and maximum clique problems based on SAT technology. It relies on four sub-algorithms that alternatingly compute cliques of larger size and colorings with fewer colors. We show how these techniques can mutually help each other: larger cliques facilitate finding smaller colorings, which in turn can boost finding larger cliques. We evaluate our approach on the DIMACS graph coloring suite, and first show that our algorithm can improve the state-of-the-art MaxSAT-based solver IncMaxCLQ for finding maximum cliques. For the graph coloring problem, we close two open instances, decrease two upper bounds, and increase one lower bound.
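
The mutual help the abstract describes rests on a classical fact: every clique needs pairwise-distinct colours, so the largest clique size lower-bounds the chromatic number, while any valid colouring upper-bounds it. A greedy sketch of both bounds (illustrative only; the paper's algorithm is SAT-based, not greedy):

```python
def greedy_coloring(adj):
    """Greedy colouring in vertex order: yields a valid colouring, hence
    an upper bound on the chromatic number."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color

def greedy_clique(adj):
    """Greedily grow a clique from high-degree vertices: its size is a
    lower bound on the chromatic number."""
    clique = []
    for v in sorted(adj, key=lambda v: len(adj[v]), reverse=True):
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique
```

When the two bounds meet, the chromatic number is proven optimal; the paper's contribution is making each side's progress tighten the other's search.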

16:00-17:30 Session 54B: Tutorial + ML

16:00-17:00: Tutorial

17:00-17:30: Machine Learning

Location: Taub 4
Building a Constraint Programming Solver Guided by Machine Learning: Opportunities and Challenges

ABSTRACT. In recent years, deep reinforcement learning (DRL) has shown its promise for designing good heuristics dedicated to solving NP-hard combinatorial optimization problems. Many recent papers propose to use a learned heuristic directly to solve the problem in an end-to-end fashion. However, such an approach has two shortcomings: (1) it only provides an approximate solution, with no systematic way to improve it or to prove optimality; (2) it is often limited to standard academic problems (e.g., the travelling salesman problem) and has difficulty handling the complex combinatorial constraints present in realistic problems.

Constraint programming (CP), as is well known, is a generic tool for solving combinatorial optimization problems. Given enough time, its complete search procedure will always find the optimal solution. However, a critical design choice that makes CP non-trivial to use in practice is the branching strategy, which controls how the search space is explored. In this tutorial, we will show how this strategy can be delegated to a DRL agent.

First, we will present an initial proof of concept showing that a CP solver can successfully leverage a learned heuristic. However, the implementation of this prototype suffers from severe inefficiency in execution time. The main reason is that existing CP solvers were not designed to handle learning components in their core engines. This motivates building a new CP solver with direct support for learned heuristics.

We will then present SeaPearl, a new open-source CP solver in Julia that has been designed for this purpose, together with the design choices we made. We will also highlight the different challenges we are currently encountering, along with preliminary results.
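
The key architectural idea, delegating the branching strategy to a pluggable component, can be sketched in a few lines. The toy backtracking solver below (a generate-and-test sketch in Python, not SeaPearl, which is in Julia and performs real propagation) takes the variable-selection heuristic as a parameter, which is exactly the hook a DRL agent would fill:

```python
def solve(domains, constraints, choose_var):
    """domains: {var: set of values}; constraints: predicates over a full
    assignment; choose_var: the pluggable branching heuristic."""
    if all(len(d) == 1 for d in domains.values()):
        assignment = {v: next(iter(d)) for v, d in domains.items()}
        return assignment if all(c(assignment) for c in constraints) else None
    var = choose_var(domains)          # <- delegated decision
    for value in sorted(domains[var]):
        trial = dict(domains, **{var: {value}})
        result = solve(trial, constraints, choose_var)
        if result is not None:
            return result
    return None
```

Swapping `choose_var` from a hand-written rule (e.g. smallest domain first) to a learned policy changes only this one call site, which is the design point the tutorial argues a solver must support natively to be efficient.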

Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints
PRESENTER: Daphné Lafleur

ABSTRACT. While Machine Learning (ML) techniques are good at generating data similar to a dataset, they lack the capacity to enforce constraints. On the other hand, any solution to a Constraint Programming (CP) model satisfies its constraints but has no obligation to imitate a dataset. Yet, we sometimes need both. In this paper we borrow RL-Tuner, a Reinforcement Learning (RL) algorithm introduced to tune neural networks, as our enabling architecture to exploit the respective strengths of ML and CP. RL-Tuner maximizes the sum of a pretrained network's learned probabilities and of manually-tuned penalties for each violated constraint. We replace the latter with outputs of a CP model representing the marginal probabilities of each note and the number of constraint violations. As was the case for the original RL-Tuner, we apply our algorithm to music generation since it is a highly-constrained domain for which CP is especially suited. We show that combining ML and CP, as opposed to using them individually, allows the agent to reflect the pretrained network while taking into account constraints, leading to melodic lines that respect both the corpus' style and the music theory constraints.
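
The combination the abstract describes boils down to a reward that adds the pretrained network's preference to a penalty from the constraint model. A minimal sketch (the function names and the flat per-violation penalty are illustrative assumptions, not the paper's exact reward shaping):

```python
import math

def combined_reward(note, history, net_prob, cp_violations, penalty=1.0):
    """RL reward for appending `note` to `history`: the pretrained
    network's log-probability of the note, minus a penalty for each
    constraint violation reported by the CP component."""
    return math.log(net_prob(note, history)) - penalty * cp_violations(note, history)
```

An agent maximising this reward is pulled toward notes the corpus-trained network finds likely, while violations of music-theory constraints make alternatives more attractive, which is the ML/CP trade-off the paper formalises.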

17:30-18:30 Session 55: Logic Lounge
Thinking Fast and Slow in AI

ABSTRACT. Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning. We believe that existing cognitive theories of human decision making, such as the thinking fast and slow theory, can provide insights on how to advance AI systems towards some of these capabilities. In this talk, I will present the work done by IBM and collaborators in this space, including the definition of a general architecture that is based on fast/slow solvers and a metacognitive component. I will then present experimental results on the behavior of an instance of this architecture, for AI systems that make decisions about navigating in a constrained environment. The results will show how combining the fast and slow decision modalities allows the system to evolve over time and gradually pass from slow to fast thinking with enough experience, and that this greatly helps in decision quality, resource consumption, and efficiency.


Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. She is a computer scientist with over 30 years of experience in AI research. Before joining IBM, she was a professor of computer science at the University of Padova, Italy, for 20 years. Her research interests focus on artificial intelligence; specifically, they include constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behavior of AI systems, in particular for decision support systems for group decision making. She is a fellow of both AAAI and EurAI, has been president of IJCAI and Editor in Chief of the Journal of AI Research, and will be the next president of AAAI, starting July 2022. She co-leads the IBM AI Ethics Board and actively participates in many global multi-stakeholder initiatives on AI ethics. She is a member of the board of directors of the Partnership on AI and the industry representative on the steering committee of the Global Partnership on AI.