CPAIOR2024: INTERNATIONAL CONFERENCE ON THE INTEGRATION OF CONSTRAINT PROGRAMMING, ARTIFICIAL INTELLIGENCE, AND OPERATIONS RESEARCH
PROGRAM FOR THURSDAY, MAY 30TH

09:00-10:40 Session 9: SAT
09:00
Single Constant Multiplication for SAT

ABSTRACT. This paper presents new methods of encoding the multiplication of a binary encoded integer variable with a constant value for Boolean satisfiability (SAT) solvers. This problem, known as the single constant multiplication (SCM) problem, is widely studied for its application in hardware design, but these techniques are not currently employed in SAT encodings. Considering the smaller and variable bit sizes and the different costs of operations in SAT, we propose further improvements to existing methods by minimizing the number of full/half adders, rather than the number of ripple-carry adders. We compare our methods to simple encodings into adders, as currently employed by SAT encoders, and to direct Boolean encoding using logic minimization tools. We show that our methods result in improved solve time for problems involving integer linear constraints. A library of optimal recipes for each method to encode SCM for SAT is made available as part of this publication.
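
For readers unfamiliar with SCM, the starting point that such recipes refine is the textbook shift-and-add decomposition of a constant multiplier. The sketch below is ours, not the authors' encoding: it uses the canonical signed-digit (CSD) representation, under which each nonzero digit costs roughly one adder or subtractor.

```python
def to_csd(c):
    """Canonical signed-digit decomposition of a positive constant:
    returns digits in {-1, 0, +1}, least-significant first, so that
    c == sum(d << i). Fewer nonzero digits means fewer adders."""
    digits = []
    while c:
        if c & 1:
            d = 2 - (c % 4)        # +1 if c % 4 == 1, -1 if c % 4 == 3
            c -= d
        else:
            d = 0
        digits.append(d)
        c >>= 1
    return digits

def mul_const(x, c):
    """Multiply x by the constant c using only shifts and additions."""
    return sum(d * (x << i) for i, d in enumerate(to_csd(c)))

# Example: 7*x becomes (x << 3) - x, one subtraction instead of two additions.
assert mul_const(5, 7) == 35
```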

09:20
Core Boosting in SAT-Based Multi-Objective Optimization

ABSTRACT. With noticeable algorithmic advances based on the iterative use of Boolean satisfiability (SAT) solvers, maximum satisfiability (MaxSAT) today constitutes a successful approach to solving various types of real-world optimization problems through propositional encodings. Building on this success, various approaches have recently been proposed for finding Pareto-optimal solutions to multi-objective MaxSAT (MO-MaxSAT) instances, i.e., propositional encodings under multiple objective functions. In this work, we propose core boosting as a reformulation/preprocessing technique for improving the runtime performance of MO-MaxSAT solvers. Extending core boosting as earlier proposed for single-objective MaxSAT, core boosting in the multi-objective setting allows for shrinking the ranges of the multiple objectives at hand, which can be particularly beneficial for MO-MaxSAT solvers relying on search that requires enforcing increasingly tighter objective bounds through propositional encodings. We show empirically that the technique is effective in improving the runtime performance of SAT-based MO-MaxSAT solvers with typically little overhead, much more so than liftings of SAT and MaxSAT preprocessing techniques previously implemented for MO-MaxSAT.
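
As background intuition for the single-objective technique being lifted here, the sketch below shows disjoint-core lower bounding against a hypothetical incremental SAT oracle. It is a simplification: actual core boosting also introduces cardinality constraints and rewrites the objective over the shrunken range.

```python
def disjoint_core_lower_bound(solver, soft_lits):
    """Schematic disjoint-core lower bounding; 'solver' is a hypothetical
    incremental SAT oracle whose solve() returns (sat, core)."""
    lb = 0
    assumptions = set(soft_lits)          # assume every soft literal holds
    while True:
        sat, core = solver.solve(assumptions=list(assumptions))
        if sat:
            return lb                     # objective now searched in [lb, ub]
        if not core:
            raise ValueError("hard clauses are unsatisfiable")
        lb += 1                           # each disjoint core forces >=1 violation
        assumptions -= set(core)          # drop the core to find disjoint ones
```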

09:40
UNSAT Solver Synthesis via Monte Carlo Forest Search

ABSTRACT. We introduce Monte Carlo Forest Search (MCFS), a class of reinforcement learning (RL) algorithms for learning policies in tree MDPs, for which policy execution involves traversing an exponentially sized tree. Examples of such problems include proving unsatisfiability of a SAT formula; counting the number of solutions of a satisfiable SAT formula; and finding the optimal solution to a mixed-integer program. MCFS algorithms can be seen as extensions of Monte Carlo Tree Search (MCTS) to cases where, rather than finding a good path (solution) within a tree, the problem is to find a small tree within a forest of candidate trees. We instantiate and evaluate our ideas in an algorithm that we dub Knuth Synthesis, an MCFS algorithm that learns DPLL branching policies for solving the Boolean satisfiability (SAT) problem, with the objective of achieving good average-case performance on a given distribution of unsatisfiable problem instances. Knuth Synthesis leverages two key ideas to avoid the prohibitive costs of policy evaluations in an exponentially sized tree. First, we estimate tree size by randomly sampling paths and measuring their lengths, drawing on an unbiased approximation due to Knuth (1975). Second, we query a strong solver at a user-defined depth rather than learning a policy across the whole tree, to focus our policy search on early decisions that offer the greatest potential for reducing tree size. We match or improve on the performance of a strong baseline on three well-known SAT distributions (R3SAT, sgen, satfc).
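
The estimator credited to Knuth (1975) is simple enough to state directly. A minimal sketch, with children as a hypothetical callback enumerating a node's children; averaging many samples estimates the number of nodes in the tree.

```python
import random

def knuth_estimate(root, children):
    """One sample of Knuth's (1975) unbiased tree-size estimator: walk a
    single random root-to-leaf path, multiplying branching factors."""
    size, weight, node = 1, 1, root
    while True:
        kids = children(node)
        if not kids:
            return size
        weight *= len(kids)   # paths through this level are weight-fold rarer
        size += weight
        node = random.choice(kids)

# Usage: est = sum(knuth_estimate(root, children) for _ in range(1000)) / 1000
```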

10:00
ViolationLS: Constraint-Based Local Search in CP-SAT

ABSTRACT. We introduce ViolationLS, a Constraint-Based Local Search (CBLS) solver for arbitrary MiniZinc models based on the Feasibility Jump mixed-integer programming heuristic. ViolationLS uses a novel approach for finding multi-variable moves without requiring any "implicit" or "one-way" constraints traditionally used by CBLS solvers. It significantly outperforms other state-of-the-art CBLS solvers on the MiniZinc Challenge benchmarks, both with and without multi-variable moves. We demonstrate that integrating ViolationLS into the parallel portfolio framework of the state-of-the-art CP-SAT Lazy Clause Generation solver improves performance on more than half of the instances in the MiniZinc Challenge benchmark suite. To our knowledge, this is the first such integration of CBLS and Lazy Clause Generation inside the same solver.

10:20
SMT-based Repair of Disjunctive Temporal Networks with Uncertainty: Strong and Weak Controllability

ABSTRACT. Temporal Networks with Uncertainty are a powerful and widely used formalism for representing and reasoning over temporal constraints in the presence of uncertainty. Since their introduction, they have been used in planning and scheduling applications to model situations where some activity durations or event timings are not under the control of the scheduling agent. Moreover, a wide variety of classes of temporal networks (depending on the types of constraints) have been defined, and many algorithms for dealing with them have emerged.

We are interested in the repair problem, which consists in revising the bounds of the uncertain durations when the network is not controllable, i.e., when no scheduling strategy exists. This problem is important, as it allows for the explanation and negotiation of the problematic uncertainties. In this paper, we address the repair problem for a very expressive class of temporal networks, namely Disjunctive Temporal Networks with Uncertainty. We use the Satisfiability Modulo Theories framework to formally encode and solve the problem, and we devise a uniform solution encompassing different "levels" of controllability, namely strong and weak. Moreover, we provide specialized encodings for important sub-classes of the problem and experimentally evaluate our approaches.

10:40-11:10 Break with Refreshments
11:10-12:30 Session 10: Machine Learning + Multi-objective Optimization
11:10
Don't Explain Noise: Robust Counterfactuals for Randomized Ensembles

ABSTRACT. Counterfactual explanations describe how to modify a feature vector in order to flip the outcome of a trained classifier. Obtaining robust counterfactual explanations is essential to provide valid algorithmic recourse and meaningful explanations. We study the robustness of explanations of randomized ensembles, which are always subject to algorithmic uncertainty even when the training data is fixed. We formalize the generation of robust counterfactual explanations as a probabilistic problem and show the link between the robustness of ensemble models and the robustness of base learners. We develop a practical method with good empirical performance and support it with theoretical guarantees for ensembles of convex base learners. Our results show that existing methods give surprisingly low robustness: the validity of naive counterfactuals is below 50% on most data sets and can fall to 20% on problems with many features. In contrast, our method achieves high robustness with only a small increase in the distance from counterfactual explanations to their initial observations.
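
The notion of robustness under algorithmic uncertainty can be made concrete with a small experiment: retrain the ensemble on the same data with different seeds and measure how often a fixed counterfactual keeps its desired label. A hedged sketch, with names and setup ours rather than the paper's method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cf_validity(X, y, x_cf, desired, n_seeds=50):
    """Fraction of retrained ensembles for which the counterfactual x_cf
    still receives the desired label: same data, different seeds, so only
    algorithmic randomness varies."""
    hits = 0
    for seed in range(n_seeds):
        clf = RandomForestClassifier(n_estimators=100, random_state=seed)
        clf.fit(X, y)
        hits += clf.predict(x_cf.reshape(1, -1))[0] == desired
    return hits / n_seeds
```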

11:30
Accelerating Continuous Variable Coherent Ising Machines via Momentum

ABSTRACT. The Coherent Ising Machine (CIM) is a non-conventional architecture that takes inspiration from physical annealing processes to solve Ising problems heuristically. Its dynamics are naturally continuous and described by a set of ordinary differential equations that have proven useful for optimizing continuous-variable non-convex quadratic optimization problems. The dynamics of such Continuous-Variable CIMs (CV-CIM) encourage optimization via optical pulses whose amplitudes are determined by the negative gradient of the objective; however, standard gradient descent is known to become trapped in local minima and to be hampered by poor problem conditioning. In this work, we propose to modify the CV-CIM dynamics using more sophisticated pulse injections based on tried-and-true optimization techniques such as momentum and Adam. Through numerical experiments, we show that the momentum and Adam updates can significantly speed up the CV-CIM's convergence and improve sample diversity over the original CV-CIM dynamics. We also find that the Adam-CV-CIM's performance is more stable as a function of feedback strength, especially on poorly conditioned instances, resulting in an algorithm that is more robust, reliable, and easily tunable. More broadly, we identify the CIM dynamical framework as a fertile opportunity for exploring the intersection of classical optimization and modern analog computing.
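
The momentum modification itself is the standard heavy-ball update applied to the gradient feedback term. A toy sketch on a plain quadratic objective, omitting the machine's saturation and noise terms:

```python
import numpy as np

def momentum_descent(Q, b, lr=0.01, beta=0.9, iters=1000):
    """Toy stand-in for momentum-modified pulse injections on
    f(x) = 0.5 * x^T Q x + b^T x (Q symmetric); illustration only."""
    x = np.zeros(len(b))
    v = np.zeros(len(b))
    for _ in range(iters):
        grad = Q @ x + b            # gradient feedback term
        v = beta * v - lr * grad    # momentum accumulates past gradients
        x = x + v                   # amplitude update
    return x
```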

11:50
Bi-Objective Discrete Graphical Model Optimization

ABSTRACT. Discrete Graphical Models (GMs) are widely used in Artificial Intelligence to describe complex systems through a joint function of interest. Probabilistic GMs such as Markov Random Fields (MRFs) define a joint non-normalized probability distribution, while deterministic GMs such as Cost Function Networks (CFNs) define a joint cost function. A typical query on GMs consists in finding the joint state that optimizes this joint function, a problem known as Maximum a Posteriori (MAP) inference in the probabilistic case and as the Weighted Constraint Satisfaction Problem in the deterministic case.

In practice, more than one function of interest may need to be optimized at the same time. In this paper, we develop a two-phase scalarization method for solving bi-objective discrete graphical model optimization, with the aim of computing a set of non-dominated solutions, the Pareto frontier, representing different compromises between two GM-defined objectives. For this purpose, we introduce a dedicated higher-order constraint, which bounds the value of one GM-defined objective while minimizing another GM on the same variables. Discrete GM optimization is NP-hard, and its bi-objective variants are even harder. We show how existing GM global lower and upper bounds can be exploited to provide anytime bounds on the exact Pareto frontier. We benchmark our method on various instances from Operations Research and Probabilistic Graphical Models.
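
The higher-order constraint described here follows the classic epsilon-constraint pattern: bound one objective, minimize the other, then tighten the bound. A schematic sketch for integer-valued objectives, with solve as a hypothetical oracle over the two GM-defined objectives:

```python
def pareto_sweep(solve, eps=1):
    """Schematic epsilon-constraint sweep: 'solve(bound)' is a hypothetical
    oracle minimizing objective 2 subject to objective 1 <= bound, returning
    None when infeasible. Ties would need lexicographic handling to
    guarantee strict nondominance."""
    frontier, bound = [], float("inf")
    while (sol := solve(bound)) is not None:
        frontier.append((sol.obj1, sol.obj2))
        bound = sol.obj1 - eps    # force strict improvement in objective 1
    return frontier
```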

12:10
Strategies for Compressing the Pareto Frontier: Application to Strategic Planning of Hydropower in the Amazon Basin

ABSTRACT. The development of ethical AI decision-making systems requires considering multiple criteria, often resulting in a large spectrum of partially ordered solutions. At the core of this challenge lies the Pareto frontier, the set of all feasible solutions where no solution is dominated by another. In previous work, we developed both exact and approximate algorithms for generating the Pareto frontier for tree-structured networks. However, as the number of criteria grows, the Pareto frontier increases exponentially, posing a significant challenge for decision-makers. To address this challenge, we propose various strategies to efficiently compress the Pareto frontier, including an approximation method with optimality and polynomial runtime guarantees. We provide detailed empirical results comparing the strategies' effectiveness in the context of strategic planning of the hydropower expansion in the Amazon basin. Our strategies offer a more manageable approach for navigating Pareto frontiers.
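
One common compression idea, given here only as background intuition and not necessarily among the paper's strategies, is epsilon-dominance filtering: keep a subset such that every frontier point is nearly dominated by some kept point.

```python
def eps_dominates(p, q, eps):
    """p eps-dominates q (maximization, nonnegative objectives) if scaling
    p up by (1 + eps) makes it at least as good as q in every criterion."""
    return all(pi * (1 + eps) >= qi for pi, qi in zip(p, q))

def eps_compress(points, eps=0.1):
    """Greedy epsilon-cover: every input point ends up eps-dominated by
    some kept point, so the compressed set approximates the frontier."""
    kept = []
    for p in sorted(points, reverse=True):
        if not any(eps_dominates(k, p, eps) for k in kept):
            kept.append(p)
    return kept
```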

12:30-12:50 Session 11: Spotlight Talks for Posters II
12:30
Optimal counterfactual explanations for k-Nearest Neighbors using Mathematical Optimization and Constraint Programming

ABSTRACT. Within explainable AI, counterfactual explanations of classifiers have received significant recent attention. We study counterfactual explanations that try to explain why a data point received an undesirable classification by providing the closest data point that would have received a desirable one. In the context of one of the simplest and most popular classification models, k-nearest neighbors (k-NN), computing such an optimal counterfactual explanation is still computationally very challenging. This work presents techniques that significantly improve the computation times for finding optimal counterfactual explanations for k-NN.
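
To fix intuition on why optimality is the hard part: a valid counterfactual for 1-NN is trivial to obtain, as the baseline below shows, but it is generally far from the closest one. The helper is illustrative only, not the paper's method.

```python
import numpy as np

def nearest_desired_point(X, y, x0, desired):
    """Illustrative baseline: the closest training point with the desired
    label is always a valid 1-NN counterfactual, but generally suboptimal;
    optimal (minimum-distance) counterfactuals for general k require the
    kind of MIP/CP machinery the paper develops."""
    candidates = X[y == desired]
    dists = np.linalg.norm(candidates - x0, axis=1)
    return candidates[np.argmin(dists)]
```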

12:35
Task Generalization in Decision-Focused Learning

ABSTRACT. Real-world optimization problems often contain uncertain parameters that must be predicted prior to solving. Such predict-then-optimize problems are best tackled by decision-focused learning (DFL), a recent paradigm that involves training the predictive model to minimize the downstream optimization task loss of interest on a single unchanging optimization problem. However, real-world problems are bound to change over time, and, as we show, models trained with DFL on one task often generalize poorly to changing tasks. To improve on this, we first explore how to train the model to generalize implicitly, by simply training it on different tasks sampled at training time. We then propose a more refined approach using explicit learned task representations. To this end, we represent optimization problems as bipartite variable-constraint graphs, and train graph neural networks (GNNs) to produce informative node embeddings that are added to the input to the predictive model. In our experiments, we start by showing that single-task DFL tends to overfit to the training task. We then show that both of our proposed approaches greatly improve on this, with the explicit task representations providing a considerable additional improvement over the implicit approach.

12:40
On Learning CP-SAT Resolution Outcomes Before Reaching Time-Limit

ABSTRACT. Lazy clause generation (LCG) solvers are a powerful tool for solving constraint programs. The variability in hardness among problem instances, and the randomness typically leveraged within solvers, lead to variability in whether a solver will conclude its solving process by a time limit. We aim to predict the remaining runtime of a solver given the evolution of the solving process part-way through. To this end, we exploit supervised learning, casting the prediction as a classification task. Experiments on MiniZinc benchmark instances show that, during LCG solving, certain features of the solver are strongly predictive of solving time. For optimisation problems run on the LCG solver Chuffed, we can predict, even at 1% of a time limit, whether or not a problem instance will be solved within that limit, with an F1 score of 90%. The key features are the number of conflicts, unvisited nodes, and binary clauses found so far in the solving process. The effective prediction demonstrated allows strategic use of the remaining time prior to the time limit.
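
As we read the abstract, the setup is a standard supervised pipeline over early solver statistics. A hedged, self-contained sketch on synthetic placeholder data, with columns mirroring the features the abstract highlights:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-instance statistics observed at 1% of the
# time limit: [conflicts, unvisited nodes, binary clauses].
rng = np.random.default_rng(0)
X = rng.poisson([500, 200, 1000], size=(400, 3)).astype(float)
y = X[:, 0] + 0.5 * X[:, 2] < 1000        # fake "solved within limit" label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```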

12:45
A Delta Debugger for Mixed-Integer Programming Solvers

ABSTRACT. Over the last decades, solvers for mixed-integer programming (MIP) have witnessed remarkable performance improvements, allowing them to solve even large-scale problems for many real-world applications despite the NP-hardness of the general class of MIPs. However, these performance gains come with increasingly complex algorithms, often posing challenges in identifying and fixing implementation errors. Such errors may result in the solver mistakenly claiming unboundedness or infeasibility of the problem, or even returning "optimal solutions" that are in fact infeasible or suboptimal. In this work, we explore the use of delta debugging for MIP in order to systematically reduce the size of input problems that expose incorrect behavior. Delta debugging is an automated approach to isolating the cause of a software failure, driven by a hypothesis-trial-result loop, and our work is motivated by the successful application of this paradigm in the SAT community. We provide an open-source tool that can be used to facilitate and accelerate the debugging process of any exact or heuristic MIP solver. To the best of our knowledge, this is the first delta debugger for mixed-integer programming.
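
The hypothesis-trial-result loop at the heart of delta debugging is the classic ddmin algorithm of Zeller and Hildebrandt. A simplified, complement-only sketch, with fails as a hypothetical callback that rebuilds the MIP from a subset of constraints and checks whether the solver still misbehaves:

```python
def ddmin(constraints, fails):
    """Simplified ddmin: shrink a failing MIP to a smaller instance that
    still triggers the bug. This variant tests only complements; the full
    algorithm also tests the subsets themselves."""
    n = 2
    while len(constraints) >= 2:
        chunk = len(constraints) // n
        subsets = [constraints[i:i + chunk]
                   for i in range(0, len(constraints), chunk)]
        for s in subsets:
            complement = [c for c in constraints if c not in s]
            if fails(complement):                 # bug persists without s
                constraints, n = complement, max(n - 1, 2)
                break
        else:
            if n >= len(constraints):             # finest granularity reached
                return constraints
            n = min(n * 2, len(constraints))      # refine and retry
    return constraints
```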

12:50-14:00 Lunch + Posters II
14:00-15:00 Session 12: Keynote II
14:00
Keynote: Decision-Focused Learning: Foundations, State of the Art, Benchmarks and Future Opportunities

ABSTRACT. Increasingly, combinatorial optimisation problems follow a predict + optimize paradigm, where part of the parameters (costs, volumes, capacities) are predicted from data, and those predictions are fed into a downstream combinatorial optimisation problem. How should these predictive models best be trained? Decision-focused learning (DFL) is an emerging paradigm in machine learning which trains a model to optimize decisions, integrating prediction and optimization in an end-to-end system. This paradigm holds the promise of revolutionizing decision-making in many real-world applications which operate under uncertainty, where the estimation of unknown parameters within decision models often becomes a substantial roadblock to high-quality solutions. This talk presents a comprehensive review of DFL. It provides an in-depth analysis of the various techniques devised to integrate machine learning and optimization models, introduces a taxonomy of DFL methods distinguished by their unique characteristics, and conducts an extensive empirical evaluation of these methods, proposing suitable benchmark datasets and tasks for DFL. Finally, we will provide valuable insights into current and potential future avenues in DFL research.
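
The central training signal in this paradigm is regret: the loss incurred by acting on a prediction rather than on the true parameters. In the usual notation (ours, for a minimization problem over a feasible set X):

```latex
% Notation ours: c are true costs, \hat{c} predicted costs, x^*(c) an
% optimal decision under costs c.
\[
  \operatorname{regret}(\hat{c}, c)
    = c^\top x^\star(\hat{c}) - c^\top x^\star(c),
  \qquad
  x^\star(c) \in \arg\min_{x \in \mathcal{X}} c^\top x .
\]
```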

Bio: Tias Guns is Associate Professor at the DTAI lab of KU Leuven, Belgium. His research is at the intersection of machine learning and combinatorial optimisation. Tias' expertise is in the hybridisation of machine learning systems with constraint solving systems, more specifically in building constraint solving systems that reason on both explicit knowledge and knowledge learned from data. Examples include learning the preferences of planners in vehicle routing and solving new routing problems that take both operational constraints and learned human preferences into account; or building energy price predictors specifically for energy-aware scheduling and planning maintenance crews based on expected failures. He was awarded a prestigious ERC Consolidator grant in 2021 to work on conversational human-aware technology for optimisation.

15:00-15:40 Session 13: Decision-focused Learning
15:00
CaVE: A Cone-aligned Approach for Fast Predict-then-optimize with Binary Linear Programs

ABSTRACT. The end-to-end predict-then-optimize framework, also known as decision-focused learning, has gained popularity for its ability to integrate optimization into the training procedure of machine learning models that predict the unknown cost (objective function) coefficients of optimization problems from contextual instance information. Naturally, most of the problems of interest in this space can be cast as integer linear programs. In this work, we focus on binary linear programs (BLPs) and propose a new end-to-end training method for predict-then-optimize. Our method, Cone-aligned Vector Estimation (CaVE), aligns the predicted cost vectors with the cone corresponding to the true optimal solution of a training instance. When the predicted cost vector lies inside the cone, the optimal solution to the linear relaxation of the binary problem is also optimal w.r.t. the true cost vector. Not only does this alignment produce decision-aware learning models, but it also dramatically reduces training time, as it circumvents the need to solve BLPs to compute a loss function and its gradients. Experiments across multiple datasets show that our method exhibits a favorable trade-off between training time and solution quality, particularly on large-scale optimization problems such as vehicle routing, a hard BLP that had yet to benefit from predict-then-optimize methods in the literature due to its difficulty.
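
Reconstructing the abstract's key fact in our own notation: the cone in question contains exactly those cost vectors for which the true optimal solution also solves the LP relaxation, so a prediction inside it is decision-equivalent to the true costs.

```latex
% Notation ours; conv(X) is the convex hull of the binary feasible set.
\[
  \mathcal{C}(x^\star)
    = \bigl\{\, c : x^\star \in \arg\min_{x \in \operatorname{conv}(\mathcal{X})} c^\top x \,\bigr\},
  \qquad
  \hat{c} \in \mathcal{C}(x^\star)
    \;\Longrightarrow\;
  x^\star \text{ solves the LP relaxation under } \hat{c} .
\]
```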

15:20
Decision-focused predictions via pessimistic bilevel optimization: a computational study

ABSTRACT. Dealing with uncertainty in optimization parameters is an important and longstanding challenge. Typically, uncertain parameters are first predicted as accurately as possible, and then a deterministic optimization problem is solved. However, the decisions produced by this so-called predict-then-optimize procedure can be highly sensitive to the uncertain parameters. In this work, we contribute to recent efforts in producing decision-focused predictions, i.e., in building predictive models that are constructed with the goal of minimizing a regret measure on the decisions taken with them.

We formulate the exact expected regret minimization as a pessimistic bilevel optimization model. Then, using duality arguments, we reformulate it as a non-convex quadratic optimization problem. Finally, we show various computational techniques to achieve tractability.

We report extensive computational results on shortest-path instances with uncertain cost vectors. Our results indicate that our approach can improve training performance over that of Elmachtoub and Grigas (2022), a state-of-the-art method for decision-focused learning.
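
A schematic form of the pessimistic bilevel model, with notation ours: the outer level fits the prediction model, while the inner level breaks ties among optimizers of the predicted objective adversarially.

```latex
% Schematic pessimistic bilevel training objective (notation ours): theta
% are model parameters, z_i the features and c_i the true costs of sample i.
\[
  \min_{\theta} \; \sum_{i=1}^{n} \;
    \max_{x \,\in\, X^\star(\hat{c}_\theta(z_i))}
      \Bigl( c_i^\top x - \min_{x' \in X} c_i^\top x' \Bigr),
  \qquad
  X^\star(\hat{c}) = \arg\min_{x \in X} \hat{c}^\top x .
\]
```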

15:40-16:10 Break with Refreshments
16:10-17:30 Session 14: Applications II
16:10
Minimizing the Cost of Leveraging Influencers in Social Networks: IP and CP Approaches

ABSTRACT. In this paper, we introduce and study mathematical programming formulations for the Least Cost Directed Perfect Awareness Problem (LDPAP), an NP-hard optimization problem that arises in the context of influence marketing. In the LDPAP, we seek to identify influential members of a given social network who can disseminate a piece of information and trigger its propagation throughout the network. The objective is to minimize the cost of recruiting the initial spreaders while ensuring that the information reaches everyone. This problem has previously been modeled as two different integer programming formulations that were tested on a collection of 300 small synthetic instances. In this work, we propose two new integer programming models and three constraint programming formulations for the LDPAP. We also present preprocessing techniques capable of significantly reducing the sizes of these models. To investigate and compare the efficiency and effectiveness of our approaches, we perform a series of experiments using the existing small instances and a new publicly available benchmark of 14 large instances. Our findings yield new optimal solutions to 185 small instances that were previously unsolved, tripling the total number of instances with known optima. For both small and large instances, our contributions include a comprehensive analysis of the experimental results and an evaluation of the performance of each formulation in distinct scenarios, further advancing our understanding of the LDPAP toward the design of exact approaches for the problem.

16:30
An Exploration of Exact Methods for Effective Network Failure Detection and Diagnosis

ABSTRACT. In computer networks, swift recovery from failures requires prompt detection and diagnosis. Protocols such as Bidirectional Forwarding Detection (BFD) exist to probe the liveness of a path and its endpoint. These protocols are run on specific nodes that are designated as network monitors, which are responsible for continuously verifying the viability of communication paths. Monitors must be selected carefully, as monitoring incurs a cost, necessitating a balance between the number of monitor nodes and the monitoring quality. Here, we examine two monitoring challenges from the field of Boolean network tomography: coverage, which involves detecting failures, and 1-identifiability, which additionally requires identifying the failing link or node. We show that minimizing the number of monitors while meeting these requirements constitutes an NP-hard problem in each case. We present integer linear programming (ILP), constraint programming (CP), and MaxSAT formulations for these problems and compare their performance. Using 625 real network topologies, we demonstrate that employing such exact methods can reduce the number of monitors needed compared to the existing state-of-the-art greedy algorithm.
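
Both variants share a set-cover skeleton, which also explains the NP-hardness. A schematic ILP in our own notation, where what a monitor "covers" differs between coverage and 1-identifiability:

```latex
% Set-cover skeleton (notation ours): y_v = 1 iff node v is a monitor, and
% cov(v) is the set of requirements satisfied by v's probing paths. For
% coverage, a requirement is "link/node e lies on a monitored path";
% 1-identifiability adds requirements distinguishing pairs of failures.
\[
  \min \sum_{v \in V} y_v
  \quad \text{s.t.} \quad
  \sum_{v \,:\, r \in \operatorname{cov}(v)} y_v \ge 1
  \quad \forall r \in R,
  \qquad
  y_v \in \{0, 1\} .
\]
```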

16:50
ULD Build-Up Scheduling with Logic-Based Benders Decomposition

ABSTRACT. We study a complex planning and scheduling problem arising from the build-up process of air cargo pallets and containers, collectively referred to as unit load devices (ULD), in which ULDs must be assigned to workstations for loading. Since air freight usually becomes available gradually along the planning horizon, ULD build-ups must be scheduled neither too early to avoid underutilizing ULD capacity, nor too late to avoid resource conflicts with other flights. Whenever possible, ULDs should be built up in batches, thereby giving ground handlers more freedom to rearrange cargo and utilize the ULD's capacity efficiently. The resulting scheduling problem has an intricate cost function and produces large time-expanded models, especially for longer planning horizons.

We propose a logic-based Benders decomposition approach that assigns batches to time intervals and workstations in the master problem, while the actual schedule is decided in a subproblem. By choosing appropriate intervals, the subproblem becomes a feasibility problem that decomposes over the workstations. Additionally, the similarity of many batches is exploited by a strengthening procedure for no-good cuts. We benchmark our approach against a time-expanded MIP formulation from the literature on a publicly available data set. It solves 15% more instances to optimality and decreases run times by more than 50% in the geometric mean. This improvement is especially pronounced for longer planning horizons of up to one week, where the Benders approach solves over 50% more instances than the baseline.
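
The overall loop follows the generic logic-based Benders pattern: the master proposes an assignment, each workstation's feasibility subproblem accepts or rejects it, and rejections become no-good cuts. A schematic sketch with hypothetical callbacks, not the paper's implementation:

```python
def lbbd(master, schedule_feasible, add_nogood, max_iters=10_000):
    """Generic logic-based Benders loop (schematic). The master assigns
    batches to intervals and workstations; each workstation then poses a
    pure feasibility subproblem, and failing assignments are cut off."""
    for _ in range(max_iters):
        assignment = master.solve()            # batch -> (interval, workstation)
        failing = [w for w in assignment.workstations
                   if not schedule_feasible(assignment, w)]
        if not failing:
            return assignment                  # every workstation is schedulable
        for w in failing:
            add_nogood(master, assignment, w)  # forbid this batch combination
    raise RuntimeError("iteration limit reached")
```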

17:10
Acquiring Constraints for a Non-linear Transmission Maintenance Scheduling Problem

ABSTRACT. Over time, power network equipment can develop defects and has to be maintained to ensure transmission network reliability. Once a piece of equipment is scheduled to be withdrawn from the network, it becomes unavailable, which can lead to power outages if other adjacent equipment fails. This problem is commonly referred to as the transmission maintenance scheduling (TMS) problem and remains a challenge for power utilities. Numerous combinatorial constraints must be satisfied to ensure the stability and reliability of the transmission network. While most of these constraints can be naturally formalized in constraint programming (CP), some complex constraints, such as transit-power limits, are challenging to model because of their continuous and non-linear nature. This paper proposes a methodology based on active constraint acquisition to automatically formalize these constraints. The acquisition is carried out using a simulator developed by the Hydro-Québec (HQ) power utility to compute the power flow of its transmission network. The acquired constraints are then integrated into a CP model to solve the HQ network's TMS problem. Our experimental results show the relevance of the methodology for learning transit-power constraints in an automated way. It allows HQ to automatically compute a maintenance plan for an instance that had remained intractable until now. To our knowledge, this is the first time that active constraint acquisition has been successfully used for the TMS problem.

19:00-22:30 Banquet at Norrlands Nation

Map: https://link.mazemap.com/dUeEazIj 

It is 900 meters from the Humanities Theatre (a walk of about 12 minutes).

Welcome drinks 19:00 - 19:30

Dinner served at 19:30

The bar opens after dinner (until 1 am).