CPAIOR2024: INTERNATIONAL CONFERENCE ON THE INTEGRATION OF CONSTRAINT PROGRAMMING, ARTIFICIAL INTELLIGENCE, AND OPERATIONS RESEARCH
PROGRAM FOR FRIDAY, MAY 31ST

09:00-10:30 Session 16: Decision Diagrams + MIP
09:00
Lookahead, Merge and Reduce for Compiling Relaxed Decision Diagrams for Optimization

ABSTRACT. In this paper, we propose a new approach for the top-down compilation of relaxed Binary Decision Diagrams (BDDs) for discrete optimization: Lookahead, Merge and Reduce. The approach is inspired by the bottom-up algorithm for reducing exact BDDs, in which equivalent nodes, that is, nodes with the same partial completions, are merged. In our top-down compilation approach, we apply this reduction algorithm to decide which states to merge: we construct a lookahead layer, merge the lookahead-layer nodes according to some heuristic, and then deem nodes with the same feasible completions in the lookahead BDD approximately equivalent. Moreover, under certain structural properties we prove an upper bound on the size of the reduced layers given the size of the merged lookahead layer. In a set of preliminary computational experiments, we evaluate our approach on the 0/1 Knapsack problem, showing that it often achieves much stronger bounds than the traditional top-down compilation scheme.
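
For illustration only, the sketch below shows the standard top-down compilation of a relaxed BDD for 0/1 knapsack, i.e. the baseline scheme that Lookahead, Merge and Reduce refines: when a layer exceeds the width limit, surplus nodes are merged into one relaxed node. The merging heuristic, names, and instance are assumptions, not taken from the paper.

def relaxed_knapsack_bound(profits, weights, capacity, max_width=8):
    # A layer maps a state (remaining capacity) to the best accumulated profit.
    layer = {capacity: 0}
    for p, w in zip(profits, weights):
        nxt = {}
        for cap, val in layer.items():
            nxt[cap] = max(nxt.get(cap, float("-inf")), val)                  # x_i = 0
            if w <= cap:
                nxt[cap - w] = max(nxt.get(cap - w, float("-inf")), val + p)  # x_i = 1
        if len(nxt) > max_width:                                              # relax the layer
            nodes = sorted(nxt.items(), key=lambda kv: kv[1], reverse=True)
            keep, surplus = nodes[:max_width - 1], nodes[max_width - 1:]
            # the merged node over-approximates remaining capacity, so no completion is lost
            keep.append((max(c for c, _ in surplus), max(v for _, v in surplus)))
            nxt = {}
            for cap, val in keep:
                nxt[cap] = max(nxt.get(cap, float("-inf")), val)
        layer = nxt
    return max(layer.values())   # an upper (dual) bound on the optimum

# Example: the exact optimum of this instance is 10; the relaxed bound is >= 10.
print(relaxed_knapsack_bound([6, 5, 4, 3], [4, 3, 2, 2], 6, max_width=2))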

09:20
Modeling and Exploiting Dominance Rules for Discrete Optimization with Decision Diagrams

ABSTRACT. Discrete optimization with decision diagrams is a recent approach to solving combinatorial problems that can be formulated with dynamic programming. It consists of a branch-and-bound algorithm that iteratively explores the search space by compiling bounded-width decision diagrams. Those decision diagrams are used both to subdivide a given problem into smaller subproblems - in a divide-and-conquer fashion - and to compute primal and dual bounds for them. It has been previously shown that pruning performed during the compilation of those decision diagrams can greatly impact the quality of the bounds, and consequently the performance of the branch-and-bound algorithm. In this paper, we study the integration of dominance rules into the decision diagram-based optimization framework. We propose a modeling language for consistently formulating dominance rules for dynamic programming models, and describe how they can be exploited to systematically detect and prune dominated nodes during the search. Furthermore, we explain how to combine this additional filtering mechanism with caching techniques to further improve the performance of the algorithm. Dominance rules are shown to significantly reduce the number of nodes expanded and the running time of the algorithm on four optimization problems.
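
As a generic illustration of the dominance idea (not the paper's modeling language), the sketch below prunes layer nodes that are dominated under a user-supplied rule; for 0/1 knapsack, a node with at least as much remaining capacity and at least as much accumulated profit dominates another. All names are placeholders.

def knapsack_dominates(a, b):
    # a, b are (remaining_capacity, accumulated_profit); a dominates b if it is
    # at least as good in both components and not identical to b
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def prune_dominated(nodes, dominates=knapsack_dominates):
    kept = []
    for n in nodes:
        if any(dominates(m, n) for m in nodes if m is not n):
            continue   # n can never lead to a strictly better completion
        kept.append(n)
    return kept

print(prune_dominated([(6, 0), (4, 9), (2, 6), (6, 5)]))  # -> [(4, 9), (6, 5)]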

09:40
LEO: Learning Efficient Orderings for Multiobjective BDDs

ABSTRACT. Approaches based on binary decision diagrams (BDDs) have recently achieved state-of-the-art results for multiobjective integer programming problems. The variable ordering used in constructing BDDs can have a significant impact on their size and on the quality of bounds derived from relaxed or restricted BDDs for single-objective optimization problems. We first showcase a similar impact of variable ordering on the Pareto frontier (PF) enumeration time for the multiobjective knapsack problem, suggesting the need for variable ordering methods that improve the scalability of the multiobjective BDD approach. To that end, we derive a novel parameter configuration space based on variable scoring functions that are linear in a small set of interpretable and easy-to-compute variable features. We show how the configuration space can be efficiently explored using black-box optimization, circumventing the curse of dimensionality (in the number of variables and objectives) and finding good orderings that reduce the PF enumeration time. However, black-box optimization approaches incur a computational overhead that outweighs the time savings due to good variable ordering. To alleviate this issue, we propose LEO, a supervised learning approach for finding efficient variable orderings that reduce the enumeration time. Experiments on benchmark sets from the knapsack problem with 3-7 objectives and up to 80 variables show that LEO is 3x and 1.3x faster at PF enumeration than lexicographic ordering and algorithm configuration, respectively. Our code and instances are available at https://github.com/khalil-research/leo.
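
The scoring idea can be illustrated in a few lines: each variable gets a score that is linear in a handful of features, and variables are ordered by decreasing score. The features and weights below are placeholders, not those learned by LEO.

import numpy as np

def score_ordering(features, weights):
    # features: (n_vars, n_features) matrix; weights: (n_features,) vector
    scores = features @ weights
    return np.argsort(-scores)                 # variables by decreasing score

features = np.array([[2.0, 3.0, 0.1],          # e.g. per-variable statistics of the
                     [1.5, 5.0, 0.4],          # objective coefficients and weights
                     [3.0, 2.0, 0.2]])
weights = np.array([1.0, -0.2, 0.5])           # tuned by black-box optimization or predicted
print(score_ordering(features, weights))       # -> [2 0 1]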

10:00
Probabilistic Lookahead Strong Branching via a Stochastic Abstract Branching Model [BEST STUDENT PAPER]

ABSTRACT. Strong Branching (SB) is a cornerstone of all modern branching rules in the Branch-and-Bound (BNB) algorithm, which is at the center of Mixed-Integer Programming solvers. In its full form, SB evaluates all candidate variables and selects the best one to branch on, leading to small trees but high runtimes. State-of-the-art branching rules therefore use SB with computational working limits to achieve both small enough trees and short running times. So far, these working limits have been established empirically. In this paper, we introduce a theoretical approach to guide how much SB to use at each node of the BNB tree. We start by representing the BNB algorithm as an abstract stochastic tree model in which the dual gains of all variables follow a given probability distribution. This model allows us to relate expected dual gains to tree sizes and to explicitly compare the cost of sampling an additional SB candidate with the expected reward in tree size reduction. We then leverage the insights from the abstract model to design a new stopping criterion for SB, which fits a distribution to the dual gains and, at each node, dynamically continues or interrupts SB. This algorithm, which we refer to as Probabilistic Lookahead Strong Branching, improves both tree size and runtime on MIPLIB instances, providing evidence that the method not only changes the amount of SB but allocates it better.
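
A much-simplified sketch of such a stopping rule, assuming exponentially distributed dual gains and a fixed relative threshold (the paper's criterion instead trades off the sampling cost against the expected tree-size reduction directly):

import math

def expected_improvement(gains):
    # E[max(X - best, 0)] for X ~ Exp(rate) fitted to the gains observed so far
    best, mean = max(gains), sum(gains) / len(gains)
    return 0.0 if mean == 0 else mean * math.exp(-best / mean)

def continue_strong_branching(gains, rel_threshold=0.05):
    # keep scoring candidates while one more sample is expected to raise the
    # best dual gain by more than rel_threshold * (current best gain)
    return expected_improvement(gains) > rel_threshold * max(gains)

gains = [0.4, 1.1, 0.7, 2.3]             # dual gains of candidates evaluated so far
print(continue_strong_branching(gains))  # True: one more SB evaluation still looks worthwhile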

10:30-10:50 Break with Refreshments
10:50-12:10 Session 17: MIP + Optimization techniques in Neural Networks
10:50
Certifying MIP-based Presolve Reductions for 0-1 Integer Linear Programs

ABSTRACT. It is well-known that reformulating the original problem can be crucial for the performance of mixed-integer programming (MIP) solvers. To ensure correctness, all transformations must preserve the feasibility status and optimal value of the problem, but there is currently no established methodology to express and verify the equivalence of two mixed-integer programs. In this work, we take a first step in this direction by showing how the correctness of MIP presolve reductions on 0–1 integer linear programs can be certified by using (and suitably extending) the VeriPB tool for pseudo-Boolean proof logging. Our experimental evaluation on both decision and optimization instances demonstrates the computational viability of the approach and leads to suggestions for future revisions of the proof format that will help to reduce the verbosity of the certificates and to accelerate the certification and verification process further.

11:10
Optimization Over Trained Neural Networks: Taking a Relaxing Walk

ABSTRACT. Besides training, mathematical optimization is also used in deep learning to model and solve formulations over trained neural networks for purposes such as verification, compression, and optimization with learned constraints. However, solving these formulations quickly becomes difficult as the network size grows, due to the weak linear relaxation and the dense constraint matrix. Recent years have seen improvements through cutting-plane algorithms, reformulations, and a heuristic based on Mixed-Integer Linear Programming (MILP). In this work, we propose a more scalable heuristic based on exploring global and local linear relaxations of the neural network model. Our heuristic is competitive with a state-of-the-art MILP solver and the prior heuristic, and produces better solutions as the input dimension, depth, and number of neurons increase.
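
One building block of such relaxation-walk heuristics is the observation that, inside a fixed ReLU activation region, the network is affine, so its local behaviour is fully described by an effective linear map. The single-hidden-layer setup and all names below are assumptions for illustration.

import numpy as np

def local_affine_map(W1, b1, w2, b2, x):
    # f(x) = w2 . relu(W1 x + b1) + b2; within x's activation region this equals
    # w_eff . x + b_eff, which a local search can exploit directly
    active = (W1 @ x + b1 > 0).astype(float)   # fixed activation pattern at x
    w_eff = W1.T @ (active * w2)
    b_eff = w2 @ (active * b1) + b2
    return w_eff, b_eff

W1 = np.array([[1.0, -2.0], [0.5, 1.0]]); b1 = np.array([0.0, -0.3])
w2 = np.array([1.0, -1.0]); b2 = 0.2
print(local_affine_map(W1, b1, w2, b2, np.array([0.5, 0.1])))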

11:30
Bound Tightening using Rolling-Horizon Decomposition for Neural Network Verification

ABSTRACT. Neural network verification aims at providing formal guarantees on the output of trained neural networks, to ensure their robustness against adversarial examples and enable their deployment in safety-critical applications. This paper introduces a new approach to neural network verification using a novel mixed-integer programming rolling-horizon decomposition method. The algorithm leverages the layered structure of neural networks by employing optimization-based bound tightening on smaller sub-graphs of the original network in a rolling-horizon fashion. This strategy strikes a balance between achieving tighter bounds and ensuring the tractability of the underlying mixed-integer programs. Extensive numerical experiments, conducted on instances from the VNN-COMP benchmark library, demonstrate that the proposed approach yields significantly improved bounds compared to existing effective bound propagation methods. Notably, the parallelizable nature of the proposed method proves effective in solving open verification problems. Our code is built and released as part of the open-source mathematical modeling tool Gravity (https://github.com/coin-or/Gravity), which is extended to support generic neural network models.
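
For context, the sketch below implements plain interval bound propagation, the kind of inexpensive bound-propagation baseline that optimization-based tightening aims to improve on; it is not the paper's Gravity-based method, and the toy weights are assumptions.

import numpy as np

def interval_bounds(weights, biases, lb, ub):
    # propagate elementwise input bounds [lb, ub] through affine + ReLU layers
    for W, b in zip(weights, biases):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lb = W_pos @ lb + W_neg @ ub + b
        new_ub = W_pos @ ub + W_neg @ lb + b
        lb, ub = np.maximum(new_lb, 0), np.maximum(new_ub, 0)  # ReLU
    return lb, ub

W1 = np.array([[1.0, -1.0], [0.5, 0.5]]); b1 = np.array([0.0, -0.2])
W2 = np.array([[1.0, 2.0]]);              b2 = np.array([0.1])
print(interval_bounds([W1, W2], [b1, b2], np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
# valid but typically loose output bounds; optimization-based tightening sharpens them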

11:50
Robustness Verification in Neural Networks

ABSTRACT. In this paper we investigate formal verification problems for neural network computations. Of central importance are various robustness and minimization problems, such as: given symbolic specifications of allowed inputs and outputs in the form of linear programming instances, does there exist a valid input for which the network computes a valid output? Does this property hold for all valid inputs? Do two given networks compute the same function? Is there a smaller network computing the same function?

The complexity of these questions has recently been investigated from a practical point of view and approximated by heuristic algorithms. We complement these achievements by giving a theoretical framework that enables us to use constraint methods to tackle security and efficiency questions in neural networks and to analyze their computational complexity. We show that the problems can be tackled in a semi-linear setting, meaning that for piecewise linear activation functions, and when the sum or maximum metric is used, most of them are in P or at worst in NP.

12:10-12:25 Session 18: Spotlight Talks for Posters III
12:10
End-to-End Classical Planning using CP and Belief Propagation

ABSTRACT. Classical planning is one of the original core AI research areas. It is the challenge of finding a sequence of actions that transforms a given initial situation into one that satisfies a given goal description. In optimal classical planning, the plan must minimize the sum of its action costs. Most of today's strongest planners are based on state-space search with goal distance estimators, called heuristics. In this work, we leave the beaten path of state-space search planning and instead solve planning tasks with constraint programming (CP). We do so, first, to obtain a planning system that allows users to state their plan constraints declaratively. Our second motivation is to use the power of CPBP solvers to obtain a system that scales better to large planning tasks.

12:15
An Imitation Learning Approach for Personnel Scheduling with Flexibility for On-Call Shifts

ABSTRACT. Predictive models are playing an increasingly pivotal role in optimizing decision-making. This paper introduces an approach based on DAgger, an imitation learning algorithm that iteratively trains a policy with data collected from experts, for stochastic sequential decision problems. Such problems can be hard, and a good expert may be expensive or unavailable. We focus on designing a good expert within the DAgger framework, which is based on deterministic solutions obtained from a sequence of contextual scenarios generated at each decision epoch. Subsequently, a predictive model is constructed that imitates the expert to facilitate real-time decision-making. The applicability of this methodology is demonstrated through a novel dynamic employee call-timing problem for the scheduling of casual personnel for on-call work shifts. The critical decision in the problem involves determining the optimal time to wait before placing a call to the next employee in the order of seniority, allowing them to select a preferred work shift. The uncertainty is associated with the time taken by an employee to respond and choose a work shift. The aim is to strike a balance between minimizing schedule changes induced by sending too many notifications or calls early in the planning period and having unassigned shifts due to sending calls or notifications too late. In contrast to conventional predict-and-optimize approaches, our methodology leverages optimization to train learning models that connect the current state of the system to the wait time prescribed by the expert.
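
For reference, a generic DAgger loop looks as follows; the expert design, state representation, and policy class used in the paper are specific to the call-timing problem, so everything here is a placeholder.

def dagger(expert_action, run_episode, fit_policy, n_iterations=10):
    dataset = []                        # aggregated (state, expert action) pairs
    policy = None                       # first iteration rolls out the expert itself
    for _ in range(n_iterations):
        actor = policy if policy is not None else expert_action
        states = run_episode(actor)     # states visited when following the current policy
        dataset += [(s, expert_action(s)) for s in states]   # relabel with the expert
        policy = fit_policy(dataset)    # supervised learning on the aggregated data
    return policy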

12:20
End-to-End Verification for Subgraph Solving

ABSTRACT. Modern subgraph-finding algorithm implementations consist of thousands of lines of highly optimized code, and this complexity raises questions about their trustworthiness. Recently, some state-of-the-art subgraph solvers have been enhanced to output machine-verifiable proofs that their results are correct. While this significantly improves reliability, it is not a fully satisfactory solution, since end-users have to trust both the proof checking algorithms and the translation of the high-level graph problem into a low-level 0-1 integer linear program (ILP) used for the proofs.

In this work, we present the first formally verified toolchain capable of full end-to-end verification for subgraph solving, which closes both of these trust gaps. We have built encoder frontends for various graph problems together with a 0-1 ILP (a.k.a. pseudo-Boolean) proof checker, all implemented and formally verified in the CakeML ecosystem. This toolchain is flexible and extensible, and we use it to build verified proof checkers for decision and optimization graph problems. Our experimental evaluation shows that end-to-end formal verification is now feasible for a wide range of hard graph problems.

This extended abstract is an abbreviated version of our paper at the 38th Annual AAAI Conference on Artificial Intelligence (AAAI-24) with the same title.

12:25-13:40 Lunch + Posters III
13:40-15:00 Session 19: ML4CO in Stochastic and Uncertain Settings
13:40
Learning to Solve Job Shop Scheduling under Uncertainty

ABSTRACT. The Job-Shop Scheduling Problem (JSSP) is a combinatorial optimization problem in which tasks must be scheduled on machines so as to minimize criteria such as makespan or delay. To address more realistic scenarios, we associate a probability distribution with the duration of each task. Our objective is to generate a robust schedule, i.e., one that minimizes the average makespan. This paper introduces a new approach that leverages Deep Reinforcement Learning (DRL) techniques to search for robust solutions, with an emphasis on JSSPs with uncertain durations. Key contributions of this research include: (1) advancements in DRL applications to JSSPs, enhancing generalization and scalability, and (2) a novel method for addressing JSSPs with uncertain durations. The Wheatley approach, which integrates Graph Neural Networks (GNNs) and DRL, is made publicly available for further research and applications.

14:00
Improving Metaheuristic Efficiency for Stochastic Optimization Problems by Sequential Predictive Sampling

ABSTRACT. Metaheuristics are known to be effective at finding good solutions to combinatorial optimization problems, but solving stochastic problems is costly because each solution must be evaluated over multiple scenarios. We propose a general method to reduce the number of scenario evaluations per solution and thus improve metaheuristic efficiency. We use a sequential sampling procedure that exploits estimates of the solutions' expected objective values. These estimates are obtained with a predictive model built on an estimated discrete probability distribution that is linearly related to all solutions' objective distributions; the probability distribution is continuously refined as solution evaluations come in. The performance of the proposed method is compared to descriptive sampling and to an adaptation of a sequential sampling method that assumes noisy evaluations. Experimental results on three problems indicate that the proposed method is robust overall and dominates the baselines on two of the problems.
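
The general sequential-sampling idea can be sketched as below: evaluate a candidate on scenarios one at a time and stop once it is unlikely to beat the incumbent. The simple confidence-bound test is a stand-in; the paper uses a predictive model over an estimated objective distribution instead.

import math, statistics

def sequential_evaluate(candidate_cost, scenarios, incumbent_mean,
                        min_samples=5, z=1.96):
    # minimization: stop early once the candidate looks significantly worse
    costs = []
    for s in scenarios:
        costs.append(candidate_cost(s))
        if len(costs) >= min_samples:
            mean = statistics.mean(costs)
            half_width = z * statistics.stdev(costs) / math.sqrt(len(costs))
            if mean - half_width > incumbent_mean:
                return mean, len(costs)      # stopped early, saving evaluations
    return statistics.mean(costs), len(costs)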

14:20
Learning Deterministic Surrogates for Robust Convex QCQP

ABSTRACT. Decision-focused learning is a promising development for contextual optimisation. It enables us to train prediction models that reflect the conditional sensitivity and uncertainty structure of the problem. However, there have been limited attempts to extend this paradigm to robust optimisation. We propose a double implicit layer model for training prediction models with respect to robust decision loss in uncertain convex quadratically constrained quadratic programs (QCQP). The first layer solves a deterministic version of the problem, the second layer evaluates the worst case realisation for an uncertainty set centred on the observation given the decisions obtained from the first layer. This enables us to learn model parameterisations that lead to robust decisions while only solving a simpler deterministic problem at test time. Additionally, instead of having to solve a robust counterpart we solve two smaller and potentially easier problems in training. The second layer (worst case problem) can be seen as a regularisation approach for predict-and-optimise by fitting to a neighbourhood of problems instead of just a point observation. We motivate a reformulation of the worst-case problem for a case of uncertainty sets that would otherwise lead to trust region problems. Both layers are typically strictly convex in this problem setting and thus have meaningful gradients almost everywhere. We demonstrate an application of this model on simulated experiments. The method is an effective regularisation tool for decision-focused learning for uncertain convex QCQPs.
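
Schematically, and with notation assumed for illustration rather than taken from the paper, the two layers compose as

$$x^\star(\hat{u}) \in \arg\min_{x} \; x^\top Q_0 x + c_0^\top x \quad \text{s.t.} \quad x^\top Q_i x + c_i^\top x \le b_i(\hat{u}), \;\; i = 1,\dots,m,$$
$$\mathcal{L}(\hat{u}) = \max_{u \in \mathcal{U}(u_{\mathrm{obs}})} \; \ell\big(x^\star(\hat{u}),\, u\big),$$

where the first layer solves the deterministic QCQP for the predicted parameters and the second evaluates the worst-case loss of that decision over an uncertainty set centred at the observed parameters; both layers are differentiated implicitly during training.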

14:40
Learning From Scenarios for Repairable Stochastic Scheduling

ABSTRACT. When optimizing problems with uncertain parameter values in a linear objective, decision-focused learning enables end-to-end learning of these values. We are interested in a stochastic scheduling problem in which processing times are uncertain; since the uncertainty appears in the constraints, an initial schedule may need to be repaired. Historical realizations of the stochastic processing times are available. We show how existing decision-focused learning techniques based on stochastic smoothing can be adapted to this scheduling problem. We include an extensive experimental evaluation to investigate in which situations decision-focused learning outperforms the state of the art for such problems: scenario-based stochastic optimization.

15:00-16:00 Session 20: Keynote III
15:00
Keynote: Optimization and machine learning on quantum computers: yes/no/maybe?

ABSTRACT. We would like to solve difficult optimization problems -- perhaps to allocate resources, perhaps to train a machine learning model -- and we would like to do it quickly. Is it reasonable to hope that quantum computers can help us do that? In this talk we discuss the current state of quantum optimization and quantum machine learning research for problems that are classically defined: the problems are described on a classical (i.e., non-quantum) computer, and the solution we are looking for is also classical. We will see that for some problems quantum algorithms are promising, even if only as an alternative to classical algorithms with different tradeoffs, but in other cases there has been no convincing evidence of their usefulness so far.

Bio: Giacomo Nannicini is an associate professor in the Industrial & Systems Engineering department at the University of Southern California, which he joined in 2022. Prior to that, he was a research staff member in the quantum algorithms group at the IBM T. J. Watson Research Center, and an assistant professor in the Engineering Systems and Design pillar at the Singapore University of Technology and Design. His main research interest is optimization and its applications. Giacomo has received several awards, including the 2021 Beale-Orchard-Hays Prize, the 2015 Robert Faure Prize, and the 2012 Glover-Klingman Prize.

16:00-16:40 Closing Remarks and Break with Refreshments