PGMODAYS 2022
PROGRAM FOR WEDNESDAY, NOVEMBER 30TH

09:10-10:00 Session 10: Plenary lecture
Location: Auditorium
09:10
Dyadic Linear Programming

ABSTRACT. A rational number is dyadic if it is an integer multiple of $1/2^k$ for some nonnegative integer $k$. Dyadic numbers are important for numerical computations because they have a finite binary representation, and therefore they can be represented exactly on a computer in floating-point arithmetic. When real or rational numbers are approximated by dyadic numbers on a computer, approximation errors may propagate and accumulate throughout the computations. So it is natural to ask when linear programs have dyadic optimal solutions. We address this question in joint work with Ahmad Abdi, Levent Tuncel, and Bertrand Guenin.
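
As a small illustration (not from the talk), checking whether a rational number is dyadic amounts to testing whether its reduced denominator is a power of two:

```python
from fractions import Fraction

def is_dyadic(r: Fraction) -> bool:
    """Check whether a rational number is dyadic, i.e. of the form p / 2**k.

    Since Fraction stores numbers in lowest terms, r is dyadic exactly when
    its denominator is a power of two.
    """
    q = r.denominator
    return q & (q - 1) == 0  # power-of-two test via a bit trick

# Examples: 3/8 is dyadic, 1/3 is not.
print(is_dyadic(Fraction(3, 8)))   # True
print(is_dyadic(Fraction(1, 3)))   # False
```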

10:00-10:50 Session 11: Plenary lecture
Location: Auditorium
10:00
Bilevel Optimization Under Uncertainty

ABSTRACT. Bilevel optimization problems can serve as a powerful tool for modeling hierarchical decision making processes. The resulting problems are highly challenging to solve---both in theory and practice. Fortunately, there have been significant algorithmic advances in the field of bilevel optimization so that we can solve much larger and also more complicated problems today compared to what was possible to solve two decades ago.

This results in more and more challenging bilevel problems that researchers try to solve today. In this talk I will give an overview of one of these more challenging classes of bilevel problems: bilevel optimization under uncertainty. We will discuss classical ways of addressing uncertainties in bilevel optimization using stochastic or robust techniques. Moreover, the sources of uncertainty in bilevel optimization can be much richer than for usual, single-level problems, since not only the problem's data can be uncertain but also the (observation of the) decisions of the two players can be subject to uncertainty. Thus, we will also discuss bilevel optimization under limited observability, the area of problems considering only near-optimal decisions, and intermediate solution concepts between the optimistic and pessimistic cases. The talk is based on the article by Yasmine Beck, Ivana Ljubić, Martin Schmidt: A Survey on Bilevel Optimization Under Uncertainty, https://optimization-online.org/2022/06/8963/

10:50-11:20 Coffee Break
11:20-12:50 Session 12A: Invited session: Bundle Methods: Methodology and Applications to Energy Optimization
Location: Amphi I
11:20
Minimizing the difference of convex and weakly convex functions via bundle method
PRESENTER: Ksenia Syrtseva

ABSTRACT. We consider optimization problems with objective and constraints being the difference of convex and weakly convex functions. This framework allows covering a vast family of nonsmooth and nonconvex optimization problems, in particular, those involving Difference-of-Convex (DC) functions with known or unknown DC decomposition, functions whose gradient is Lipschitz continuous, as well as problems that comprise certain classes of composite and nonconvex value functions. We investigate several stationary conditions and present a proximal bundle method to compute critical points for problems of this class. Our algorithm, which employs an improvement function and an original rule to update the proximal parameter to ensure convergence, relies on cutting-plane approximations of the convex functions and linearizations of the weakly convex ones to construct a sequence of convex quadratic subproblems yielding new iterates. The practical performance of the method is illustrated by numerical experiments on some nonconvex stochastic problems.

11:50
Asynchronous Bundle Methods

ABSTRACT. Efficiently optimizing complex nondifferentiable functions, composed of a sum of a large number of terms, each of which may be costly to compute, is a crucial task in many applications such as energy optimization. The use of HPC architectures may be required to obtain appropriate performances, but the existing parallelisation approaches, mostly based on the standard master-slave paradigm, do not scale well to a large number of cores. This has justified the recent surge of interest in asynchronous approaches that may allow a higher scalability. We propose a very general asynchronous approach that relies on the availability of multiple resources not only for the "oracles" (black boxes) that compute the function components, but also for multiple master problems, run in parallel, to provide a stream of potential iterates for the oracles. Each of the master problems is in principle a separate algorithm, typically one of the many variants of bundle methods with some specific setting for its (numerous) algorithmic parameters, which would typically converge towards an (approximate) optimal solution if run in isolation. We add to the mix a principal entity managing their interaction, and we examine how different capabilities of the principal entity must be handled in order to ensure that the overall asynchronous approach maintains the good convergence properties of the underlying algorithms while allowing harmonious collaboration between them towards the goal of more efficiently solving the problem. We show how our approach can work with a very wide selection of the practical nondifferentiable optimization algorithms proposed in the literature.

12:20
Stochastic Dual Dynamic Programming and Lagrangian decomposition for Seasonal Storage Valuation in SMS++

ABSTRACT. The increase in the share of renewable energy sources in the power system, especially intermittent ones like solar and wind, brings several challenges. As both energy production and demand vary throughout the year (in winter, for example, there is a reduction in the supply of solar energy while the demand for energy for heating increases), energy storage becomes a relevant factor to take advantage of excess energy produced in certain seasons of the year and respond to increased energy demand in others. An important system for seasonal storage is that formed by cascading hydroelectric reservoirs. The management of such systems is a challenging task which hinges on a good assessment of the future value of stored energy (water) in these systems. In order to assess the value of water, and thus be able to properly manage these reservoir systems, a large-scale multi-stage stochastic problem spanning a year must be solved, where each stage is a weekly unit-commitment problem. Since the unit-commitment problems are non-convex in nature, they make the seasonal storage valuation problem unsuitable for what would otherwise be the most natural approach to solve it, i.e., Stochastic Dual Dynamic Programming. In this work we exploit the natural convexification capabilities of Lagrangian relaxation to devise a Stochastic Dual Dynamic Programming approach to the seasonal storage valuation problem where the Lagrangian Dual of the single-stage subproblems is solved (using a Bundle-type approach), which corresponds to solving the convexified relaxation of the original problem. This is known to be at least as tight as, and typically strictly tighter than, the standard way to convexify a stochastic MINLP amenable to the Stochastic Dual Dynamic Programming approach, i.e., the continuous relaxation. We report on extensive experiments which show that these huge-scale stochastic problems can indeed be solved on HPC architectures. This is made possible by the use of the SMS++ software framework (https://smspp.gitlab.io) for large-scale problems with multiple nested forms of structure, and in particular by its native parallel capabilities that make it possible to exploit two different forms of parallelisation (between scenarios in the SDDP approach and within subproblems in the Lagrangian one) at the same time.

11:20-12:50 Session 12B: Invited session: New developments in optimal control theory, part I
Location: Auditorium
11:20
Neural Networks for First Order HJB Equations

ABSTRACT. We consider a deterministic optimal control problem, in a finite horizon context, and propose deep neural network approximations for Bellman's dynamic programming principle \cite{bok-pro-war-22}, corresponding also to some first-order Hamilton-Jacobi-Bellman (HJB) equations. This work follows the lines of Huré {\em et al.} (SIAM J. Numer. Anal., vol. 59 (1), 2021, pp. 525-557) where algorithms are proposed in a stochastic context. However, we need to develop a completely new approach in order to deal with the propagation of errors in the deterministic setting, where no diffusion is present in the dynamics. Our analysis gives precise error estimates in an average norm. The study is then illustrated on several academic numerical examples related to front propagation models in the presence of obstacle constraints (modeled by an optimal maximum running cost problem), showing the relevance of the approach for moderate dimensions (e.g., from $2$ to $8$), even for non-smooth value functions.

11:50
Hybrid optimal control including a regionally constant parameter and applications
PRESENTER: Anas Bouali

ABSTRACT. In this presentation, we consider a Hybrid optimal control problem of Mayer type in which the controlled system is defined over a partition of the Euclidean space into a countable number of strata. In addition, we suppose that the dynamics depends on a regionally switching parameter. This means that the parameter should remain constant in every stratum (but not necessarily over the time period). The motivation behind this framework comes from multiple applications related to aerospace engineering or epidemiology, typically when a loss of control occurs depending on the position of the system in the state space. Our goal is to derive the necessary optimality conditions in this new framework in the form of a Pontryagin principle. This is made possible thanks to “regional” needle perturbations of a nominal control in this hybrid setting and a careful sensitivity analysis, see [1]. Next, based on [2], we will highlight these optimality conditions by presenting several examples in which a loss of control occurs (depending on the stratum to which the system belongs).

12:20
Viscosity solutions of first order Hamilton-Jacobi equations in locally compact Hadamard spaces
PRESENTER: Othmane Jerhaoui

ABSTRACT. We propose a novel approach to study first order Hamilton-Jacobi equations in a certain class of metric spaces called locally compact Hadamard spaces.

11:20-12:50 Session 12C: Invited session: Pricing problems in energy management
Location: Amphi II
11:20
An exact method for a problem of time-slot pricing
PRESENTER: Olivier Bilenne

ABSTRACT. We consider a market where a population of customers is willing to purchase a service offered at n different times y_1,y_2,...,y_n under limited capacities. Our task is to anticipate and to implement a pricing profile (p_1,...,p_n) that will optimally spread the expected hourly demand for the service. The time-slot pricing problem is looked upon from the perspective of the service provider, in a competitive environment where profit maximization is a prime concern for the company. Each customer has a preferred time x ∈ ℝ for receiving the service, and is expected either to choose a time slot j that minimizes their total incurred cost p_j + d(x-y_j), where the strictly convex function d expresses the inconvenience of rescheduling from their preferred time, or to reject the service if no time slot is satisfactory. Our goal is to implement a pricing profile which maximizes the profit of the service provider by solving a bi-level decision problem sharing similarities with the optimal transport framework. We develop an exact solution method based on dynamic programming that produces a profit-maximizing pricing profile in poly(n,|P|) basic operations when the price values available at each time slot for decision making are taken from a finite set P. The proposed algorithm can accommodate distinct classes of customers with a computational overhead linear in the number of distance functions. Extensions to differentiated or random service times are straightforward. Our approach also applies for P ≡ ℝ through discretization of the pricing space, then producing near-optimal solutions together with lower and upper bounds for the optimal value of the initial continuous-price problem. Under strongly convex d, pointwise convergence of these bounds towards the optimum typically occurs at rate O(δ), linear in the discretization step δ, slowing down to O(√δ) in some irregular cases where ε-optimality of the profit can still be guaranteed within polynomial computation time. Numerical experiments are under way for a problem of time-slot pricing at electric vehicle charging stations.
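
A minimal sketch of the customers' slot-choice rule described above (illustrative only; the rejection threshold `reject_cost` is our stand-in for the unspecified rejection rule):

```python
import numpy as np

def customer_choice(x, prices, slot_times, d=lambda u: u**2, reject_cost=np.inf):
    """Best response of a customer with preferred time x (hypothetical helper).

    The customer picks the slot j minimizing p_j + d(x - y_j), or rejects the
    service when even the cheapest option exceeds `reject_cost`.
    """
    costs = np.asarray(prices) + np.array([d(x - y) for y in slot_times])
    j = int(np.argmin(costs))
    return (j, costs[j]) if costs[j] <= reject_cost else (None, reject_cost)

# Example: three hourly slots, strictly convex inconvenience d(u) = u^2.
print(customer_choice(x=9.5, prices=[2.0, 1.0, 3.0], slot_times=[9, 10, 11]))
# -> slot 1 (time 10) with total cost 1.25
```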

11:50
A Rank-Based Reward between a Principal and a Field of Agents: Application to Energy Savings
PRESENTER: Quentin Jacquet

ABSTRACT. We consider a problem where a Principal aims to design a reward function for a field of heterogeneous agents. In our setting, the agents compete with each other through their rank within the population in order to obtain the best reward. We first make explicit the equilibrium of the mean-field game played by the agents, and then characterize the optimal reward in the homogeneous setting. For the general case of a heterogeneous population, we develop a numerical approach, which is then applied to the specific case study of the market of Energy Saving Certificates.

12:20
Intelligent Pricing in Electric Networks
PRESENTER: Mandar Datar

ABSTRACT. The model. There are M pricing programs. The price corresponding to each m consists of a fixed subscription fee plus a congestion cost which is a function of the total electricity that is used in that pricing class. There are N players, where player n has a demand d(n). A player n has to decide how much electricity x(n,m) to buy from each m of the M pricing classes. The cost for player i for buying a unit of electricity from class m is given by a function f of the total amount of electricity that is bought in program m. It thus has the form of a function f of the player i and of x(m), which is defined (with some abuse of notation) to be the sum of flows x(n,m) over all players N. We call f(i,x(m)) the cost density for player i at class m. Unless stated explicitly, the cost density of a player i will be assumed not to depend on the player, and i is then dropped from the notation. We assume that the cost for each player n is additive, i.e., the overall cost for player n is the sum of the cost densities f(x(m)) weighted by the flow x(n,m). The cost C(n,x) for player n is thus given by the sum over classes m of x(n,m)f(x(m)).
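
As a small illustrative sketch (not part of the abstract), the player cost defined above can be written as follows; the affine cost density `f` is only an example, since the abstract merely assumes it depends on the total load of the pricing class:

```python
import numpy as np

def player_cost(n, x, f=lambda load: 1.0 + load):
    """Cost C(n, x) = sum_m x[n, m] * f(x(m)), where x(m) = sum_n x[n, m].

    `x` is an N-by-M matrix of purchases; `f` is an illustrative affine
    cost density applied elementwise to the class loads.
    """
    class_loads = x.sum(axis=0)                # x(m) for every class m
    return float(np.dot(x[n], f(class_loads)))

# Two players, two pricing classes.
x = np.array([[1.0, 2.0],
              [0.5, 0.5]])
print(player_cost(0, x))  # 1*(1 + 1.5) + 2*(1 + 2.5) = 9.5
```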

Definition of constrained games and equilibria. Define y(x) to be a vector whose n-th component y(n,x) is the best response of player n to x[-n] (i.e., it minimizes C(n,x)). When the minimization is over all x that are nonnegative and satisfy the demand constraints for all players, then this defines a standard Nash equilibrium. When the minimization is with respect to a restricted subset of flows, then this defines a constrained Nash equilibrium. Consider in particular capacity constraints: for each class m, the total flow using class m is bounded by a constant d(m), hence the sum over all players n of x(n,m) is bounded by d(m). These are correlated constraints in the sense of Rosen [1] and the game is a special case of the so-called generalized equilibrium. For any player n, and any x, y(n,x) is a best response against x if and only if the corresponding KKT conditions hold. At equilibrium, the best response is optimal at n and x if the prices are chosen as the Lagrange multipliers. We call this a KKT pricing. A fixed point is shown to exist and it therefore has stability properties. It corresponds to an equilibrium in the constrained game. A desirable property of the equilibrium is that the Lagrange multipliers be strongly scalable for billing purposes in the following sense. The price (or the Lagrange multiplier) that corresponds to the cost for player n when using program m depends only on m and on x(m). Hence, for strongly scalable billing, there is no need to inspect how much electricity originates from each source and which player requested a given amount of Amperes. In a strongly scalable equilibrium, billing is done in a fully symmetric way.

Scalable billing and normalized equilibrium. In the full paper, we also study the case of several classes of players. We show that there exist Lagrange multipliers that are the same for all players within each given class. The resulting equilibrium is called weakly scalable. A scalable (strong or weak) pricing is a special case of the "normalized equilibrium" defined in [1]. We finally identify cost functions that extend convexity to a game setting; they are said to satisfy the strict diagonal convexity property, which is shown to imply the existence and uniqueness of a normalized equilibrium (for each weight vector defining the costs) and hence the existence of a scalable pricing.

[1] J. B. Rosen, Existence and uniqueness of equilibrium points for concave n-person games, Econometrica, 1965.

We thank the FMJH Program PGMO for its support, as well as the support provided to this program by EDF.

11:20-12:50 Session 12D: Contributed session: Optimal transport and beyond
Location: A1.116
11:20
Risk management via optimal (partial) transport

ABSTRACT. In a variety of problems in operations research, a variable of interest $b=b(x_1,x_2,\dots,x_d)$ depends on several underlying random variables, whose individual distributions are known but whose joint distribution is not. A natural example arises when the variables represent parameters in a physical system, whose individual distributions can be estimated empirically or through modeling (or a combination of both) but whose dependence structure cannot; an example of this flavour, originating in \cite{iooss2015review}, is one in which the output $b$ represents the simulated height of a river at risk of flooding, and the underlying variables include various design parameters and climate-dependent factors. Metrics used in risk management depend on the distribution of the output variable $b$, and therefore, in turn, on the joint distribution of the $x_i$. A natural problem is therefore to determine bounds on these metrics; that is, to maximize the given metric over all possible joint distributions of the $x_i$ with known marginal distributions. In this talk, we show that for a large class of metrics, the maximization can in fact be formulated as a traditional multi-marginal optimal transport problem with $d+1$ marginals: the given marginal distributions of the $x_i$ as well as another distribution arising from the particular form of the risk metric. In particular, in the special but important case of conditional value at risk, the problem further reduces to a multi-marginal partial transport problem on the $d$ original distributions.

When the underlying variables $x_i$ are all one dimensional, this allows us to derive a nearly explicit characterization of solutions in a substantial class of relevant problems, and facilitates the use of a very broad range of computational methods for optimal transport problems for others.
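
Written out with illustrative notation (the talk's exact conventions may differ), the robust bound described above is

```latex
% Worst-case value of a risk metric \mathcal{R} over all couplings of the
% given marginals (notation illustrative):
\[
  \sup_{\pi \in \Pi(\mu_1, \dots, \mu_d)}
     \mathcal{R}\big[\, b(X_1, \dots, X_d) \,\big],
  \qquad (X_1, \dots, X_d) \sim \pi,
\]
% where \Pi(\mu_1, \dots, \mu_d) denotes the set of joint laws with the given
% marginals; for a large class of metrics \mathcal{R}, this is equivalent to a
% multi-marginal optimal transport problem with d+1 marginals.
```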

11:50
A simple city equilibrium model with an application to teleworking
PRESENTER: Quentin Petit

ABSTRACT. We propose a simple semi-discrete spatial model where rents, wages and the density of population in a city can be deduced from free-mobility and equilibrium conditions on the labour and residential housing markets. We prove existence and (under stronger assumptions) uniqueness of the equilibrium. We extend our model to the case where teleworking is introduced. We present numerical simulations which shed light on the effect of teleworking on the structure of the city at equilibrium.

12:20
Feature Projection for Optimal Transport

ABSTRACT. Optimal transport is now a standard tool for solving many problems in statistics and machine learning. The optimal “transport of probability measures” is also a recurring theme in stochastic control and distributed control, where in the latter application the probability measure corresponds to an empirical distribution associated with a large collection of distributed agents, subject to local and global control. The goal of this paper is to make precise these connections, which inspires new relaxations of optimal transport for application in new and traditional domains. The proposed relaxation replaces a target measure with a “moment class”: a set of probability measures defined by generalized moment constraints. This is motivated by applications to control, outlier detection, and to address computational complexity. We illustrate the approach on two applications: i) MNIST digit classification using a certain distance and ii) signal tracking by controlling the consumption of a set of consumers in an electricity network.

11:20-12:20 Session 12E: Contributed session: Piecewise-linear geometry and applications
Location: A1.122
11:20
Degree Theory for the Linear Complementarity Problem and related Piecewise Linear Systems
PRESENTER: Manuel Radons

ABSTRACT. Let $A$ be an $n\times n$ real matrix. The piecewise linear equation system $z-A\vert z\vert =b$ is called an absolute value equation (AVE). It is well known to be equivalent to the linear complementarity problem. Unique solvability of the AVE is known to be characterized in terms of a generalized Perron root called the sign-real spectral radius of $A$. For mere, possibly non-unique, solvability no such characterization exists. We close this gap in the theory. That is, we define the concept of the aligned spectrum of $A$ and prove, under some mild genericity assumptions on $A$, that the mapping degree of the piecewise linear function $F_A:\mathbb{R}^n\to\mathbb{R}^n\,, z\mapsto z-A\lvert z\rvert$ is congruent to $(k+1)\mod 2$, where $k$ is the number of aligned values of $A$ which are larger than $1$. We also derive an exact---but more technical---formula for the degree of $F_A$ in terms of the aligned spectrum. Finally, we derive the analogous quantities and results for the LCP.
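
As a small illustrative sketch (not from the talk), the map $F_A$ and a solution of the AVE restricted to a single orthant can be checked numerically:

```python
import numpy as np

def F(A, z):
    """Piecewise linear map F_A(z) = z - A|z| from the absolute value equation."""
    return z - A @ np.abs(z)

# On the orthant where z >= 0 we have |z| = z, so the AVE z - A|z| = b
# reduces to the linear system (I - A) z = b.
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])
b = np.array([1.0, 2.0])
z = np.linalg.solve(np.eye(2) - A, b)
assert np.all(z >= 0)            # the orthant guess is consistent
print(z, F(A, z) - b)            # residual is (numerically) zero
```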

11:50
Local minima and their basins of attractions for piecewise affine functions with polyhedral support
PRESENTER: Florin Stoican

ABSTRACT. Piecewise affine functions with a polyhedral support appear often in control/optimization applications (usually as simplifications of originally nonlinear functions, but sometimes directly from the problem requirements). Whenever their support is generated by a hyperplane arrangement partitioning, interesting assertions may be made about their structure. Specifically, we are interested in: i) reduced-complexity representations which provide equivalent descriptions; and ii) the location of minima points and their associated basins of attraction. We consider two applications of piecewise functions: i) we study how such constructs may be used in potential field descriptions; and ii) how they relate to neural networks with ReLU activation functions.

11:20-12:50 Session 12F: Contributed session: Black-box and evolutionary optimization: Part I
Location: A1.128
11:20
Parameter-Dependent Performance Bounds for Evolutionary Algorithms

ABSTRACT. It has long been observed that the performance of evolutionary algorithms and other randomized search heuristics can benefit from a non-static choice of the parameters that steer their optimization behavior. Mechanisms that identify suitable configurations on the fly (“parameter control”) or via a dedicated training process (“dynamic algorithm configuration”) are thus an important component of modern evolutionary computation frameworks. Several approaches to address the dynamic parameter setting problem exist, but we barely understand which ones to prefer for which applications. As in classical benchmarking, problem collections with a known ground truth can offer very meaningful insights in this context. Unfortunately, settings with well-understood control policies are very rare.

One of the few exceptions for which we know which parameter settings minimize the expected runtime is the LeadingOnes problem. We extend this benchmark by analyzing optimal control policies that can select the parameters only from a given portfolio of possible values. This also allows us to compute optimal parameter portfolios of a given size. We demonstrate the usefulness of our benchmarks by analyzing the behavior of the DDQN reinforcement learning approach for dynamic algorithm configuration.
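
As a minimal illustration of the benchmark setting (not the optimal policies analyzed in the talk), here is a sketch of LeadingOnes and a (1+1)-EA whose mutation rate is picked from a small portfolio depending on the current fitness; the concrete portfolio policy below is ours, purely for illustration:

```python
import random

def leading_ones(x):
    """LeadingOnes(x): number of consecutive ones at the start of the bit string."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def one_plus_one_ea(n, policy, max_evals=10**6):
    """(1+1)-EA on LeadingOnes with a fitness-dependent mutation rate.

    `policy(fitness, n)` returns the mutation rate for the next step,
    e.g. chosen from a small portfolio of allowed values.
    """
    x = [random.randint(0, 1) for _ in range(n)]
    fx, evals = leading_ones(x), 1
    while fx < n and evals < max_evals:
        p = policy(fx, n)
        y = [1 - b if random.random() < p else b for b in x]
        fy = leading_ones(y)
        evals += 1
        if fy >= fx:                 # accept ties, as usual for the (1+1)-EA
            x, fx = y, fy
    return evals

# Portfolio policy: a large rate early on, a small rate close to the optimum.
portfolio_policy = lambda fx, n: 2.0 / n if fx < n // 2 else 1.0 / n
print(one_plus_one_ea(50, portfolio_policy))
```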

11:50
The Satellite Constellation Design Problem via MI(N)LP Boosted with a Genetic Algorithm
PRESENTER: Luca Mencarelli

ABSTRACT. In a recent work [1], we introduced several Mixed-Integer (Non)Linear Programming (MI(N)LP) formulations for the satellite constellation design problem with discontinuous coverage and constrained revisit time. This problem represents an emerging application in aerospace engineering and consists in finding the minimal number of satellites and their orbital parameters such that the resulting constellation observes several targets on the Earth surface within regular time intervals (see, for instance, [2, 3, 4]). In this talk, we continue exploring the potential of operational research approaches for this application by: • describing new (more realistic) MI(N)LP formulations with a new definition of the satellite coverage area; • boosting our MI(N)LP formulations with a rather simple Genetic Algorithm approach to provide better initial feasible (yet not necessarily optimal) solutions. We present and discuss preliminary results, comparing the plain MI(N)LP formulations with their corresponding boosted versions, showing the computational benefits both in terms of CPU times and solution quality.

References
[1] L. Mencarelli, J. Floquet, F. Georges, and D. Grenier, Mixed integer (non)linear approaches for the satellite constellation design problem, Optimization and Engineering, 2022.
[2] H. W. Lee, S. Shimizu, S. Yoshikawa, and K. Ho, Satellite Constellation Pattern Optimization for Complex Regional Coverage, Journal of Spacecraft and Rockets, 57(6):1309–1327, 2020.
[3] N. Hitomi and D. Selva, Constellation optimization using an evolutionary algorithm with a variable-length chromosome, IEEE Aerospace Conference, 1–12, 2018.
[4] T. Savitri, Y. Kim, S. Jo, and H. Bang, Satellite constellation orbit design optimization with combined genetic algorithm and semianalytical approach, International Journal of Aerospace Engineering, 2017:1235692, 2017.

12:20
Two-Dimensional Drift Analysis: Optimizing Two Functions Simultaneously Can Be Hard

ABSTRACT. The performance of Evolutionary Algorithms (EAs) in dynamic environments, that is, environments in which the fitness function changes over time, has recently been studied (e.g., [2]). In this talk, we develop and analyze a minimal example TwoLin [1] of a dynamic environment that can be hard for EAs. The environment consists of two linear functions f_1 and f_2 with positive weights 1 and n, and in each generation selection is based on one of them at random. They only differ in the set of positions that have weight 1 and n. We show that the (1 + 1)-EA with mutation rate χ/n is efficient for small χ on TwoLin, but does not find the shared optimum in polynomial time for large χ. To obtain the above result, we need to apply drift analysis in two dimensions. Drift analysis is one of the most powerful techniques to analyse the performance and the behaviour of EAs. Previously, it has only been applied in one dimension. Here, we are in the case of two random variables X_1, X_2, and the drift is approximately given by A·(X_1, X_2)^T for a matrix A. The non-trivial case is that X_1 and X_2 impede each other's progress, and we give a full characterisation of this case.

11:20-12:50 Session 12G: Contributed session: Game theory and beyond
Location: A1.133
11:20
One-to-Many Matching Games

ABSTRACT. The seminal work of Hatfield and Milgrom (2005) extended many models of stable matching (GS62, SS71, CK81, KC82, DG85) to the setting in which doctors and hospitals are matched and, simultaneously, choose a contract from a finite set of options. The authors proved the existence of stable allocations under substitute contracts using a cumulative offer mechanism (COM).

Matching games add a new dimension to this problem by allowing the agents within a coalition to play strategic games and receive payments as outputs. They naturally face two stability notions: external stability, the generalization of Hatfield and Milgrom's stability, and internal stability, a novel notion concerning the agents' deviations in actions within each game.

Efficient algorithms are designed to compute externally and internally stable allocations under classical game theory assumptions and substitute strategies. First, we propose a generalization of the classical deferred-acceptance algorithm of Gale and Shapley to compute externally stable allocations, in particular proving that simpler algorithms than the COM are enough to achieve the task. Second, a novel strategy-profile modification algorithm is designed to compute an internally stable allocation whenever the strategic games played satisfy a feasibility condition and the algorithm converges.

Feasibility in games is a novel property. We characterize it using constrained Nash equilibria and prove that many well-known games from the literature of game theory are feasible.

11:50
Evolutionary Game of Coordination for Formation Flying

ABSTRACT. Many applications such as Search And Rescue missions require autonomous drones to operate in a coordinated fashion. Distributed control techniques make it possible to fly a swarm of drones autonomously without a leader agent. In order to maximize the use of drone swarms it is necessary to carry out specific types of formation (flying in a square, forming a line and so on). The type of formation chosen evolves dynamically during the scenario according to the evolution of the environment. In the absence of a leader agent, it is necessary to set up a coordination process between agents to ensure that selfish agents converge towards the same geometric formation. The different possible formations are seen as the pure strategies of an evolutionary game of coordination. The evolutionary coordination game aims to make a large population of players converge, during an evolutionary process, towards the choice of an identical strategy. However, the evolutionary game of coordination has several equilibria; some equilibria are stable, others unstable, some equilibria are pure strategies, others are mixed strategies. In the context of formation flying, the question is to ensure that the evolutionary process and the communication between agents make it possible to converge towards an equilibrium that is desirable from an operational point of view. The equilibrium has to be stable and of pure-strategy type to be applied to a small swarm (even if optimized considering a large population of agents). It is therefore a question of setting up a population dynamic between the agents of the swarm in such a way as to favor certain equilibria. The convergence and stability of this modified population dynamics are then analysed.

12:20
Generalized Nash Fairness solutions for Bi-Objective Minimization Problems
PRESENTER: Minh Hieu Nguyen

ABSTRACT. In this paper, we consider a special case of Bi-Objective Optimization (BOO), called Bi-Objective Minimization (BOM), where the two objective functions to be minimized take only positive values. As for BOO, most methods proposed in the literature for solving BOM focus on computing the Pareto-optimal solutions that represent different trade-offs between the two objectives. However, it may be difficult for a central decision-maker to determine the preferred solutions due to the huge number of solutions in the Pareto set. We propose a novel criterion for selecting the preferred Pareto-optimal solutions by introducing the concept of rho-Nash Fairness (rho-NF) solutions, inspired by the definition of proportional fairness. The rho-NF solutions are the Pareto-optimal solutions achieving some proportional Nash equilibrium between the two objectives. The positive parameter rho is introduced to reflect the relative importance of the first objective to the second one. For this work, we will discuss some existential and algorithmic questions about the rho-NF solutions by first showing their existence for BOM. Furthermore, the set of rho-NF solutions can be a strict subset of the Pareto set. As there are possibly many rho-NF solutions, we focus on extreme rho-NF solutions achieving the smallest values for one of the objectives. Then, we propose two Newton-based iterative algorithms for finding extreme rho-NF solutions. Finally, we present computational results on some instances of the Bi-Objective Travelling Salesman Problem and the Bi-Objective Shortest Path Problem.

11:20-12:50 Session 12H: Contributed session: Combinatorial optimization -- polyhedral methods
Location: A1.134
11:20
Projective Cutting Planes with multiple cuts per iteration for robust linear programming

ABSTRACT. Many research projects start from a problem and identify the most effective method to solve it. This work emerged the other way around. We considered an existing method [1] and searched for (new) problems it may solve efficiently. The new method aims to improve the widely-used Cutting-Planes, by upgrading its well-established separation sub-problem to a projection sub-problem: given some feasible x in some polytope P and direction d, what is the maximum t such that x + t · d ∈ P? This question is easy if one can list all constraints of P, but it is (much) harder in a Cutting-Planes context where P has prohibitively many constraints.

This projection sub-problem is more difficult to implement than the standard separation. So why should we bother, why should we care? Can't say we should. But there may be some benefits down the road. Long story short, the new algorithm Projective Cutting-Planes generates interior points (feasible solutions) along the iterations. The classical Cutting-Planes proceeds only by removing infeasibility. The new one proceeds both by removing infeasibility and by producing feasibility. Suppose one has to choose one of the following constraints: (1) 10x_1 + 20x_2 + 6x_3 ≤ 30 and (2) x_1 + 2x_2 + x_3 ≤ 3. When x ∈ ℝ^3_+, constraint (2) implies (1). Yet, by applying the separation logic on x = [1 1 1], constraint (1) seems tighter because 10 + 20 + 6 − 30 > 1 + 2 + 1 − 3, i.e., the separation sub-problem returns (1) because it seems more violated by [1 1 1]. The projection sub-problem [0 0 0] → [1 1 1] leads to the pierce point [0.75 0.75 0.75] and returns constraint (2), which is tight for this pierce point. Constraint (1) is not tight because it reduces to 36 · 3/4 ≤ 30, i.e., 27 ≤ 30. When we first submitted [2], we initially solved robust linear programming by comparing the classical Cutting-Planes and the new Projective Cutting-Planes. We generated in both cases only one cut per iteration, because we only wanted to compare the two methods. Pointing out that the most effective Cutting-Planes for this problem uses multiple cuts per iteration, a referee asked us whether we could also adapt Projective Cutting-Planes to a multi-cut setting. The answer was yes, and – a bit to our surprise – the multi-cut variant of Projective Cutting-Planes was really faster. For example, it solved in 8 seconds the instance stocfor3, an instance that required thousands of seconds for all other Cutting-Planes mentioned above.
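
To make the projection step concrete, here is a small illustrative sketch (not the authors' code) of the ratio test that solves the projection sub-problem when the constraints of P are explicitly listed, reproducing the numbers of the example above; in the actual method the constraints are not listed explicitly and this step is solved by a dedicated sub-problem:

```python
import numpy as np

def pierce_point(x, d, A, b):
    """Largest t with A (x + t d) <= b for an explicitly listed polytope."""
    slack = b - A @ x          # remaining room on each constraint
    rate = A @ d               # how fast each constraint is consumed along d
    with np.errstate(divide="ignore"):
        t = np.where(rate > 0, slack / rate, np.inf)
    return float(t.min()), int(t.argmin())

# Constraints (1) and (2) from the text, projecting [0 0 0] towards [1 1 1].
A = np.array([[10.0, 20.0, 6.0],
              [1.0, 2.0, 1.0]])
b = np.array([30.0, 3.0])
t, tight = pierce_point(np.zeros(3), np.ones(3), A, b)
print(t, tight)   # t = 0.75 and the tight constraint is (2)
```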

References
[1] Daniel Porumbel, Projective Cutting-Planes, SIAM Journal on Optimization, 30(1), 2020.
[2] Daniel Porumbel, Projective Cutting-Planes for Robust Linear Programming and Cutting Stock Problems, INFORMS Journal on Computing, DOI 10.1287/ijoc.2022.1160.

11:50
Overlapping decomposition in column generation
PRESENTER: Pierre Fouilhoux

ABSTRACT. To solve large-scale integer programs, decomposition techniques such as column generation or Lagrangian relaxation deeply rely on the chosen structure induced by the dualized constraints. The overlapping decomposition (also referred to as Lagrangian Decomposition or Variable Splitting) consists in simultaneously considering several substructures within the same decomposition method. Managing several substructures can be done by duplicating variable subsets, which will then be generated through two distinct pricing methods, and linking the corresponding variables using equalities in the master problem.

In this work, an overlapping decomposition is proposed to provide strong linear relaxation values for two distinct problems within the framework of Dantzig-Wolfe decomposition: the Reliable Facility Location Problem and the Unit Commitment Problem (UCP).

To obtain an evaluation before the column generation convergence, we extend the classical lower bound computation to the overlapping decomposition context. For the UCP, we propose additional reinforcement inequalities.

Experimental results show the importance of designing the constraints linking the duplicated variables as inequalities rather than equalities: such inequalities are an important lever to speed up convergence. The distribution of the dual cost between the two pricing subproblems also improves the convergence. We propose a way to simultaneously use these two concepts of linking inequalities and cost distribution.

12:20
On a Concept of a Generic Intersection Cut Callback

ABSTRACT. The intersection cut framework can generate valid linear inequalities for a nonconvex set $S$. The framework requires a corner polyhedron relaxation of $S$ and a convex $S$-free set, which does not contain any point of $S$ in its interior. In this talk, I will review the recent development of intersection cuts in MINLPs (e.g., quadratic/polynomial programming and signomial programming), featuring the construction of a variety of $S$-free sets. On the other hand, the implementation of intersection cuts requires much knowledge of a solver's data structures and numerical stability. Software engineering can help here, as the solver can encapsulate the cut separation procedure and provide an intersection cut callback without the need for a symbolic representation of $S$. Then, the users only need to provide the defining variables and zero- and first-order oracle access to the $S$-free set. I will also discuss some limitations of intersection cuts and open problems.
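
As a purely hypothetical sketch (names and signatures are ours and not tied to any particular solver's API), the user-facing side of such a callback could look like this:

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import numpy as np

@dataclass
class SFreeSetOracle:
    """Hypothetical user-side description of a convex S-free set.

    The solver keeps the corner polyhedron internally and only asks for:
    the variables defining the set, a zero-order oracle g (with g(x) <= 0
    inside the S-free set), and a first-order oracle returning a subgradient.
    """
    defining_variables: Sequence[int]
    value: Callable[[np.ndarray], float]              # zero-order oracle
    subgradient: Callable[[np.ndarray], np.ndarray]   # first-order oracle

# Example: a ball around c that contains no point of S in its interior.
c, r = np.array([1.0, 2.0]), 0.5
ball = SFreeSetOracle(
    defining_variables=[0, 1],
    value=lambda x: float(np.linalg.norm(x - c) - r),
    subgradient=lambda x: (x - c) / max(np.linalg.norm(x - c), 1e-12),
)
print(ball.value(np.array([1.0, 2.4])))   # negative: point inside the ball
```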

11:20-12:50 Session 12I: Contributed session: Optimal pricing and energy management
Location: A1.139
11:20
A branch-and-bound method for single-leader-multi-follower games with energy management applications
PRESENTER: Didier Aussel

ABSTRACT. A single-leader-multi-follower game (SLMFG) is a bilevel problem in which the lower level is composed of a non-cooperative (generalized) Nash game between a finite set of agents, called the followers. A classic approach to solve SLMFGs consists in replacing the followers' optimisation problems by their Karush--Kuhn--Tucker conditions, thus leading to the resolution of a mathematical program with complementarity constraints (MPCC). It is well known that these MPCCs are quite hard to solve due to the non-convexity of the resulting constraint set, which, additionally, has an empty interior.

As an alternative to Big-$M$-based techniques, in this work we explore the applicability of the \emph{Special-Ordered-Set of Type 1} approach to SLMFGs with linear lower-level problems. Applications to two energy management problems will be considered as well. On the one hand, the optimal design of industrial eco-parks and, on the other hand, carbon tax policy for freight transport in Ecuador.

11:50
An Equilibrium Analysis of Risk-Hedging Strategies in Decentralized Electricity Markets
PRESENTER: Ilia Shilov

ABSTRACT. Currently, due to the large-scale integration of Distributed Energy Resources (DERs), electricity markets are starting to restructure, from centralized to decentralized local market designs. We investigate equilibrium problems arising in various decentralized designs of the electricity market involving risk-averse prosumers. The prosumers have the possibility to hedge their risks through financial contracts that they can trade with peers or purchase from an insurance company. We build several market designs of increasing complexity, from a one-stage market design with inter-agent financial contract trading to a Stackelberg game where an insurance company acts as a leader and prosumers are followers. We derive a risk-hedging pricing scheme for each model and show that the pessimistic formulation of the Stackelberg game might have no solution. We propose an equivalent reformulation as a parametrized generalized Nash equilibrium problem and characterize the set of equilibria.

We prove that the insurance company can design price incentives that guarantee the existence of a solution of the pessimistic formulation which is epsilon-close to the optimistic one. We then derive economic properties of the Stackelberg equilibria, such as fairness, equity, and efficiency. We also quantify the impact of the insurance company's incomplete information on the prosumers' risk-aversion levels on its own cost and on the social cost. Finally, we evaluate numerically the proposed risk-hedging market models, using residential data provided by Pecan Street.

12:20
Parking duration optimization for electric vehicles in a loss queueing system
PRESENTER: Alix Dupont

ABSTRACT. With the rise of Electric Vehicles (EVs), the demand for parking spots equipped with plugging devices in charging stations (CSs) is increasing tremendously. To face this demand, a charging point operator responsible for one CS deals with limited resources, both in terms of the number of parking spaces equipped with a Charging Point (CP) and in terms of the available power. This motivates us to study how to improve the quality of the charging service at a given CS. Each CP within the CS can deliver a maximum power, independently of time. Moreover, there is a limited total maximum power available at the CS level, also independent of time. Therefore, at each instant, the currently charging EVs have to share the total power available at the CS according to a modified processor sharing rule, i.e., EVs charge at the maximum power per CP when the number of charging EVs (CPs in use) is sufficiently low; otherwise, the total power is shared uniformly. In our model, the stochastic nature of arrivals and departures at the CS is modeled by a queueing system \cite{Zwart}. EV users arrive at the CS according to a Poisson process and with a random amount of energy needed to fully charge their battery. An EV can occupy a CP without consuming power: each EV has a random parking duration and leaves the parking spot only when its parking time expires. The total number of EVs and the number of charging EVs at the CS at each instant follow a two-dimensional Markov process. In our work, based on a study of this Markov process \cite{Zwart}, a closed-form approximation of the optimal average parking duration in terms of the expected energy received per EV is provided. The model is numerically simulated using the Python discrete-event simulation framework \textit{SimPy}. The theoretical approximation is then compared with the optimal average parking duration obtained from the simulations.
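
The modified processor sharing rule described above can be made concrete with a tiny sketch (illustrative only; parameter names and values are ours):

```python
def power_per_charging_ev(n_charging, p_cp_max, p_station_max):
    """Power allocated to each charging EV under modified processor sharing.

    Each charging EV gets the per-charging-point maximum while the station-wide
    budget allows it; otherwise the total available power is shared uniformly.
    """
    if n_charging == 0:
        return 0.0
    return min(p_cp_max, p_station_max / n_charging)

# Example: 10 kW per charging point, 40 kW for the whole station.
for n in (1, 3, 4, 8):
    print(n, power_per_charging_ev(n, p_cp_max=10.0, p_station_max=40.0))
# 1 -> 10.0, 3 -> 10.0, 4 -> 10.0, 8 -> 5.0 kW per EV
```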

11:20-12:50 Session 12J: Invited Session: The ℓ0 pseudonorm -- Theory and Applications
Location: A1.140
11:20
Discrete Optimization Methods for Problems with the L0-Pseudonorm

ABSTRACT. In this talk, we propose a survey of advances made over the past decade to address sparse problems regularized by an L0-pseudonorm. Such problems are NP-hard in the general case and they can be reformulated as MIPs. In particular, we focus on two different types of reformulations leveraging either SOS constraints or a perturbation framework. Based on these reformulations, we explain how to build up tailored Branch-and-Bound algorithms. Finally, we present how to adapt acceleration methods originally designed for convex sparse problems to our non-convex framework.
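
For illustration only (the exact formulations are those of the talk and its references), one standard big-M MIP reformulation of an L0-regularized least-squares problem reads:

```latex
% Big-M reformulation of min 0.5*||y - Ax||^2 + lambda*||x||_0 (illustrative;
% the talk also discusses SOS-based and perturbation-based variants):
\begin{align*}
\min_{x \in \mathbb{R}^n,\; z \in \{0,1\}^n} \quad
  & \tfrac{1}{2}\,\lVert y - Ax \rVert_2^2 \;+\; \lambda \sum_{i=1}^{n} z_i \\
\text{s.t.} \quad
  & -M z_i \;\le\; x_i \;\le\; M z_i, \qquad i = 1, \dots, n,
\end{align*}
% so that z_i = 0 forces x_i = 0 and \sum_i z_i counts the nonzero entries
% (exact whenever M bounds the magnitude of an optimal x).
```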

11:50
Continuous nonconvex relaxations of the l0 pseudonorm

ABSTRACT. In this talk, we will explore the world of non-convex penalties for sparse regression. Among the numerous non-convex continuous alternatives to the l0 pseudonorm, we will discuss both statistically- and optimization-based properties that motivate the use of some of them.

12:20
Hidden Convexity in the l0 Pseudonorm
PRESENTER: Michel De Lara

ABSTRACT. The so-called l0 pseudonorm counts the number of nonzero components of a vector. It is a standard notion in sparse optimization problems. However, as it is a discontinuous and nonconvex function, the l0 pseudonorm cannot be satisfactorily handled with the Fenchel conjugacy. In this talk, we review a series of recent results on a class of Capra (Constant Along Primal Rays) conjugacies that reveal hidden convexity in the l0 pseudonorm. First, we present the Euclidean Capra-conjugacy. We show that it is suitable to analyze the l0 pseudonorm, as the latter is "convex" in the sense of generalized convexity (equal to its biconjugate). We immediately derive a convex factorization property (the l0 pseudonorm coincides, on the unit sphere, with a convex lsc function) and variational formulations for the l0 pseudonorm. We present mathematical expressions of the Capra-subdifferential of the l0 pseudonorm, and graphical representations. In a second part, we provide different extensions. We introduce the class of Capra-conjugacies defined by means of norms---especially strictly-orthant monotonic norms (including the Euclidean norm)---or, more generally, of 1-homogeneous nonnegative functions. We show that such Capra-conjugacies are suitable not only to analyze the l0 pseudonorm, but also to provide convex lower bounds for 0-homogeneous functions. We will also point out how to tackle the rank matrix function. Finally, we discuss how the theory could open the way for possible algorithms in sparse optimization problems.
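
For readers unfamiliar with the notion, the Euclidean Capra coupling referred to above is usually defined as follows (our recollection; the talk's exact notation and conventions may differ):

```latex
% Euclidean Capra coupling c (constant along primal rays) between x, y in R^n:
\[
  c(x, y) \;=\;
  \begin{cases}
    \dfrac{\langle x, y \rangle}{\lVert x \rVert_2} & \text{if } x \neq 0, \\[1.5ex]
    0 & \text{if } x = 0.
  \end{cases}
\]
% With the conjugacy induced by this coupling, the l0 pseudonorm is equal to
% its Capra-biconjugate, i.e. it is "convex" in this generalized sense.
```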

12:20-12:50 Session 13: Contributed session: Optimization of microgrids - Part I
Location: A1.122
12:20
Optimal Voltage References for DC Microgrids using an Economic MPC Approach
PRESENTER: Pol Jane Soneira

ABSTRACT. Modern power systems will be characterized by low inertia and fast voltage dynamics, due to the increase of sources interfaced with power electronics and the disconnection of large traditional thermal generators [1]. Power electronics are commonly equipped with fast controllers that are able to set a desired voltage within a few seconds [2]. When many power electronic devices are present in a power system, the question of how to set the voltage level on each power electronic device arises. The voltage levels at different nodes in a power system determine the power flows over power lines and thus have a high influence on the losses in a power system.

In this paper, we present an approach using economic Model Predictive Control (MPC) (cf. [3]) to compute optimal voltage references for the power electronic devices in order to minimize the losses in a DC Microgrid. In particular, we show that economic MPC outperforms the traditional control hierarchy for computing the optimal references, which comprises a steady-state optimization and a tracking MPC [4].

12:50-14:10 Lunch Break
14:10-15:00 Session 14: Plenary lecture
Location: Auditorium
14:10
Recent aspects of mean field control

ABSTRACT. Mean field control consists in the optimal control of a large population of devices: it is motivated, for instance, by the optimal control of a fleet of vehicles or a fleet of drones, or the optimal charging of batteries. Very often the system is simplified into the optimal control of a partial differential equation of McKean-Vlasov type. In this talk I will present a series of works (obtained with S. Daudin, J. Jackson and P. Souganidis) in which we investigate quantitatively the validity of this approximation.

15:00-15:20 Coffee Break
15:20-16:50 Session 15A: Invited session: Equilibrium problems and decomposition methods in energy
Location: Amphi II
15:20
A Reverse Stackelberg Game Model for Grid Usage Pricing with Local Energy Markets

ABSTRACT. In the context of the massive penetration of renewables and the shift of the energy system operation towards more consumer-centric approaches, local energy markets are seen as a promising solution for prosumers’ empowerment. Various local market designs have been proposed that often ignore the laws of physics ruling the underlying distribution network’s power flows. This may compromise the power system’s security or lead to operational inefficiencies. Therefore, including the distribution network in clearing the local market arises as a challenge. We propose using grid usage prices (GUPs) as an incentive mechanism to drive the system towards an economically and operationally efficient market equilibrium, subject to security constraints. Our approach requires expressing the incentive policies as affine functions of the prosumers’ active and reactive power outputs. This setting falls into the category of reverse Stackelberg games, where we look for the optimal policy in the space of affine functions [1]. This approach takes advantage of controllability guarantees for the problem’s unconstrained setting, which hopefully will enable the DSO to influence the output of the market towards an optimally determined target point. Market-related properties of the policy, such as economic efficiency, individual rationality, incentive compatibility, and fairness, will be rigorously studied. Two alternative solution approaches are proposed: The first is based on the Gauss–Seidel algorithm, which capitalizes on the hierarchical structure of the reverse Stackelberg game. Meanwhile, the second is based on the KKT reformulation of the problem and then the application of exact penalty theory to the resulting non-convex smooth problem. Finally, extensive computational experiments will be carried out on different IEEE test feeders to assess the performance of the proposed approach statistically.

References [1] N. Groot, B. De Schutter, and H. Hellendoorn, “Optimal Affine Leader Functions in Reverse Stackelberg Games: Existence Conditions and Characterization,” Journal of Optimization Theory and Applications, vol. 168, no. 1, pp. 348–374, 2016.

15:50
The Unit Commitment problem with integration of Local Energy Communities
PRESENTER: Cristian Aguayo

ABSTRACT. For the last five decades, the Unit Commitment (UC) problem has been one of the tools used to optimize energy generation. With the rise of renewable energy, collective organizations such as Local Energy Communities (LECs) play a key role in the transition to more sustainable energy generation. However, producing energy at minimum cost is still a topic of interest.

In this work, the UC problem is extended to an LEC scheme. For modeling purposes, LECs are considered as sets of units with power demands over a given horizon, and the objective is to minimize the total operation costs. Additional features such as power sharing between communities, external power purchasing and power storage are also considered and evaluated using benchmark instances. This work is developed under the framework of the SEC-OREA project.

16:20
Balancing Efficiency and Privacy in a Decision-Dependent Network Game
PRESENTER: Raphaël Taisant

ABSTRACT. We consider a network game, where End Users (EUs) minimize their cost by computing their demand and generation while satisfying a set of local and coupling constraints. Their nominal demand constitutes sensitive information, that they might want to keep private. We prove that the network game admits a unique Variational Equilibrium, which depends on the private information of all the EUs. A data aggregator is introduced, which aims to learn the EUs’ private information. The EUs might have incentives to report biased and noisy readings to preserve their privacy, which creates shifts in their strategies. Relying on performative prediction, we define a decision-dependent game G^{stoch} to couple the network game with a data market. Two variants of the Repeated Stochastic Gradient Method (RSGM) are proposed to compute the Performatively Stable Equilibrium solution of G^{stoch}, that outperform RSGM with respect to efficiency gap minimization, privacy preservation, and convergence rates in numerical simulations.

15:20-16:50 Session 15B: Invited session: Decarbonization and energy management
Location: A1.116
15:20
A static game model for analyzing country carbon emissions

ABSTRACT. In this work, we study a game in which the players are governments, the action of a player is its carbon emission level, and the utilities implement a tradeoff between the benefit of emitting, namely the gross domestic product, and the damage caused by climate change. We discuss the choice of the benefit function, the damage function, and the choice of the geophysical model. Compared to existing works, our approach integrates the existence of a dynamics for the global earth temperature and CO2 concentrations and considers the damage functions obtained from the DICE model.

15:50
Regional decarbonization studies: a first step to assess how to share decarbonization effort at the scale of a country. The example of Centre-Val de Loire
PRESENTER: Olivier Beaude

ABSTRACT. Based on quantified engineering methodologies, the regional decarbonization studies proposed at EDF R&D allow assessing how the activation of sectorial “levers” (e.g., building insulation in the building sector, or electric mobility in the transportation sector) can decrease regional CO2 emissions. The overall methodology and the tools used, as well as typical results – here in the case of the Centre-Val de Loire region – will be presented. It will also be explained how such a modelling exercise could be used to address the question of sharing decarbonization efforts at the scale of a country.

16:20
Carbon offsetting: a game perspective 

ABSTRACT. This presentation will describe a project on decarbonization funded by the IMPT. Carbon offsetting schemes are developing strongly and remain at the center of existing emission reduction frameworks for states or certain sectors. This project focuses on the conditions under which the environmental integrity of such schemes can be guaranteed. Through the progressive development of a formalized representation, mobilizing game theory and motivated by a multidisciplinary analysis of these schemes, it aims to refine our understanding of the conditions that can ensure the integrity of such schemes.

15:20-16:50 Session 15C: Invited session: Quantum Operations Research - Overview and latest advances
Location: Amphi I
15:20
Quantum Operations Research

ABSTRACT. Quantum optimization is a new and attractive area with potentially significant implications for operations research. The basic concepts were introduced by Richard Feynman (Preskill, 2021) in the early 1980s, and the first algorithmic contributions appeared in the 1990s: P. Shor published in 1996 a polynomial-time algorithm for factorization, and Grover the famous algorithm that gives a quadratic speedup for finding marked solutions in an unsorted finite set. In the last few years, several quantum metaheuristics coming from the quantum physics community have been introduced, defining a family of quantum approximate algorithms that encompasses, for example, adiabatic-based algorithms that provide an approximate solution of the Schrödinger equation. Minimization problems can now be investigated using quantum metaheuristics, with the promise of a strongly effective approach that avoids getting trapped in the local minima that hinder standard local search, and satisfiability problems can now be investigated taking advantage of superposition. The presentation will first try to explain where the promises of quantum computing lie and why theoretical considerations and the capability for numerical experiments should push researchers of the OR community to initiate research on this topic. Second, we will present the various initiatives of the community around quantum computing, including sessions, conferences and experience sharing across the community. Third, we will describe the French ecosystem around quantum initiatives and emphasize the scientific and financial support promoted by the various scientific organizations.

15:50
Efficient Quantum Encodings for Combinatorial Optimization Problems: The Sub-Graph Isomorphism Case

ABSTRACT. With the steady advances in quantum technology to develop reliable quantum computers and software interfaces, quantum computing is well on track to disrupt traditional workflows in solving hard problems. A natural question is then whether quantum-based approaches can help the resolution of combinatorial optimization problems, which are both widespread in real life and very hard to solve classically. Here, the answer is more nuanced.

A class of promising candidates to solve combinatorial optimization problems on near-term quantum computers is the class of variational algorithms, which alternate quantum evaluations and classical steps. Here, one big open question towards quantum advantage is how we encode the problem into the quantum hardware to be able to scale better (e.g., solve larger problems, solve the same problems faster) than on classical computers.

In this talk, I will present some work tackling this open question by looking at a specific combinatorial problem. In particular, I will propose a novel variational method for solving the sub-graph isomorphism problem on a gate-based quantum computer. The method relies (1) on a new representation of the adjacency matrices of the underlying graphs, which requires a number of qubits that scales logarithmically with the number of vertices of the graphs; and (2) on a new Ansatz that can efficiently probe the permutation space. Simulations are then presented to showcase the approach on graphs with up to 16 vertices; given the logarithmic scaling, the approach could be applied to realistic sub-graph isomorphism problem instances in the medium term.
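To give a rough sense of the logarithmic scaling claim (the exact encoding is the one defined in the talk, not reproduced here), the snippet below compares a hypothetical one-hot vertex register (one qubit per vertex) with a binary register of ceil(log2 n) qubits:

```python
import math

# Illustrative qubit counts per vertex register (assumption: one-hot vs. binary encoding)
for n_vertices in (4, 16, 64, 256):
    one_hot = n_vertices
    binary = math.ceil(math.log2(n_vertices))
    print(f"{n_vertices:4d} vertices: one-hot {one_hot:4d} qubits, binary {binary:2d} qubits")
```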

16:20
A quantum side-channel attack
PRESENTER: Eric Bourreau

ABSTRACT. Richard Feynman (Nobel laureate in physics) originated the concept of quantum computers in the 1980s. In the 1990s, quantum algorithms were invented (efficient factorization of large numbers, Shor94; efficient search for an element in an unsorted list, Grover95). In the last five years, real quantum machines have been built and make it possible to validate the first results promised by the theory. In this presentation, we discuss encryption schemes and side-channel attacks used to perform decryption. By closely examining the AES-128 algorithm, it is possible to extract the hidden key after a number of attacks on auxiliary channels (power consumption, electromagnetic radiation). To help us in this task, we exploit Grover's algorithm through Qiskit, a Python library for programming IBM's quantum machines.
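As a minimal illustration of the Grover primitive mentioned above (not of the AES-128 key-recovery attack itself), the following Qiskit sketch builds the textbook two-qubit Grover circuit that amplifies the marked state |11>; the trivial phase oracle and the qubit count are placeholders for the much larger side-channel oracle discussed in the talk.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Textbook 2-qubit Grover search for the marked state |11> (illustrative only).
qc = QuantumCircuit(2)
qc.h([0, 1])          # uniform superposition over the 4 basis states

# Oracle: flip the phase of |11>
qc.cz(0, 1)

# Diffusion operator: inversion about the mean
qc.h([0, 1])
qc.x([0, 1])
qc.cz(0, 1)
qc.x([0, 1])
qc.h([0, 1])

# One Grover iteration suffices for N = 4: the marked state comes out with probability ~1
print(Statevector(qc).probabilities_dict())   # expect {'11': ~1.0}
```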

15:20-16:50 Session 15D: Contributed session: Learning and OR
Location: A1.128
15:20
AutoML - Benefits, Reality, Future

ABSTRACT. With the increasing adoption of machine learning (ML) for many practical applications, there is a need for techniques that make adoption more efficient. This is the gap that automated machine learning (AutoML), and more generally automated artificial intelligence (AutoAI), aims to fill. The key goal of AutoML is to automate (parts of) the machine learning pipeline. Some of the most prominent and well-developed components of the AutoML toolbox are meta-algorithms such as automated algorithm selection (AAS) and automated algorithm configuration (AAC). Both of these techniques have shown substantial benefits in many areas of computer science, including mixed-integer programming (MIP) and Boolean satisfiability (SAT).

One of the most highlighted benefits of AAS and AAC (and more generally AutoML and AutoAI) is the possible performance improvement that can be gained over hand-designed systems. Orthogonal to this, two other benefits can help democratise ML and AI. Automating ML (and other AI processes) reduces the required human effort and, by abstracting away the underlying ML components, decreases the required expertise to start using ML.

In practice, however, AutoML is not as widely adopted as one might expect based on the potential benefits. Adoption seems to be inhibited by issues such as the usability of AutoML frameworks, insufficient expertise in AutoML, and the required computational costs.

As a first step towards increasing the adoption of AutoML, we aim to improve the accessibility and understandability of meta-algorithms such as AAS and AAC. To this end, we introduce the Sparkle platform. Starting with computer scientists not actively involved in the development of meta-algorithms, Sparkle aims to make meta-algorithms accessible and understandable to progressively wider audiences. To achieve this, Sparkle implements a command-line interface of self-explanatory commands to simplify the correct use of AAS and AAC, and produces detailed reports including results, the experimental setup, and references. Beyond the scope of Sparkle, many other AutoML techniques support different parts of the ML pipeline. To truly democratise ML, these too have to be advanced to make them broadly accessible and understandable.

15:50
Improving subtour constraints generation in Branch-and-Cut algorithms for TSP with Machine Learning

ABSTRACT. Branch-and-Cut (B&C) is a widely-used algorithm for solving integer programming (IP) problems. In recent years, applying Machine Learning (ML) to improve fundamental decision problems within B&C algorithms has become an active research domain. While much of the ML research focuses on variable selection [1], node selection [2], and cut selection [3], less attention has been paid to the question of how to design a cutting-plane generation strategy in B&C algorithms. This question is crucial since generating cutting planes can become a computational burden as the instance size increases. In this work, we focus on using ML to improve the generation of subtour elimination constraints (SEC) in B&C algorithms for the Traveling Salesman Problem (TSP). SECs are well-known constraints used to solve the TSP exactly. In IP formulations of the TSP, they are usually generated dynamically as cutting planes in the course of the B&C algorithm because of their exponential number. Our approach is to use Machine Learning to answer two questions before launching the separation process that finds violated SEC cuts at a node of the B&C search tree: 1) Do violated SEC cuts exist? 2) If yes, is it worth generating them? For the former, we use a Graph Neural Network to detect the existence of violated SEC cuts from fractional solutions. For the latter, we formulate the problem of sequentially deciding whether to generate SECs as a Markov decision problem and train a Reinforcement Learning agent to tackle it. Not only can our method leverage the geometric structure of optimal solutions, but it is also flexible with respect to instance size. Numerical results show that our proposed approach can significantly accelerate B&C algorithms for the TSP with SECs.
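Purely as a schematic sketch of how the two learned components could wrap an exact separation routine (all names below are hypothetical placeholders, not the authors' implementation):

```python
def maybe_generate_sec_cuts(frac_solution, graph_features,
                            detector, policy, exact_separator,
                            detection_threshold=0.5):
    """Schematic two-stage decision before running exact SEC separation.

    detector        : hypothetical trained GNN, returns P(a violated SEC exists)
    policy          : hypothetical RL agent, returns True if separating is worth it now
    exact_separator : exact routine returning the violated subtour cuts (possibly empty)
    """
    # Stage 1: is there likely a violated subtour elimination constraint?
    if detector(frac_solution, graph_features) < detection_threshold:
        return []                      # skip the expensive separation call

    # Stage 2: even if violated cuts likely exist, is generating them worth the effort?
    if not policy(frac_solution, graph_features):
        return []

    return exact_separator(frac_solution)
```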

16:20
Structured learning for vehicle routing problems
PRESENTER: Léo Baty

ABSTRACT. Vehicle routing and vehicle scheduling problems are extensively studied in the literature, and extremely efficient algorithms are now available to tackle them. Both exact and heuristic approaches exist to solve these pure vehicle routing and scheduling problems. For example, the Vehicle Scheduling Problem (VSP) can be exactly solved by a flow linear program, and the Capacitated Vehicle Routing Problem (CVRP) can be exactly solved using Branch-and-Price, or heuristically on larger instances using the state-of-the-art Hybrid Genetic Search (HGS).

However, algorithms for variations of these pure problems are often not as efficient, and have difficulty scaling to large instances. An example of this setting is the Stochastic Vehicle Scheduling Problem (StoVSP), a variation of the VSP where travel and task times are randomly perturbed after the vehicle routes are chosen, the objective being to minimize the average total delay of the tasks. StoVSP can be solved with column generation on instances with up to 25 tasks, but this approach then fails to scale and generalize to larger instance sizes. Another example is the Dynamic Capacitated Vehicle Routing Problem with Time Windows (Dynamic-CVRPTW), a multi-stage variation of the CVRP, to which the HGS no longer applies directly.

Using pure vehicle routing and scheduling problem algorithms as Combinatorial Optimization (CO) layers in Machine Learning (ML) pipelines can be a way to generate good algorithms for their variations. We present such hybrid pipelines for the two variations presented above, implemented by leveraging InferOpt.jl, our open source Julia package gathering several state-of-the-art methods for integrating CO algorithms in ML pipelines. For the StoVSP, we learn a pipeline containing a VSP flow algorithm, and are able to generalize and obtain solutions with small gaps on instances with up to 1000 tasks. For the Dynamic-CVRPTW, we learn a pipeline containing a modified HGS as a CO layer, and use it at each stage to retrieve a good policy.
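For readers less familiar with CO layers, the sketch below illustrates the general pattern in Python; the talk's implementation uses the InferOpt.jl Julia package and problem-specific layers (a VSP flow algorithm, a modified HGS), whereas the linear model and the shortest-path layer here are generic placeholders.

```python
import numpy as np
import networkx as nx

def co_layer_policy(graph, arc_features, weights, source, target):
    """Generic ML pipeline with a combinatorial optimization (CO) layer:
    statistical model -> arc costs -> exact solver -> decision."""
    for (u, v), feat in arc_features.items():
        # 1. Statistical model: predict a cost for every arc from its features
        cost = float(np.dot(feat, weights))
        # 2. Feed the predicted costs to the CO layer (Dijkstra needs costs >= 0)
        graph[u][v]["weight"] = max(cost, 0.0)
    # 3. CO layer: solve the pure, well-understood problem exactly
    return nx.shortest_path(graph, source, target, weight="weight")

# Toy usage on a 3-node graph with 2-dimensional arc features
G = nx.DiGraph([(0, 1), (1, 2), (0, 2)])
features = {(0, 1): np.array([1.0, 0.0]),
            (1, 2): np.array([0.5, 0.5]),
            (0, 2): np.array([0.0, 2.0])}
print(co_layer_policy(G, features, weights=np.array([1.0, 1.0]), source=0, target=2))
```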

15:20-16:50 Session 15E: Contributed session: Optimization of microgrids - Part II
Location: A1.122
15:20
Energy management in a hydrogen-based microgrid through constrained optimization
PRESENTER: Ionela Prodan

ABSTRACT. Systems for generating, storing and transporting energy in the form of hydrogen are a reliable and non-polluting (green) alternative to classical energy systems. To produce hydrogen in renewable energy systems, electrolysers are used to split water molecules into hydrogen and oxygen, the former being stored for further use. Due to intermittency in generation and load, it is important to manage the workload of the electrolysers efficiently, as they have significant inertia (i.e., long start-up times) and limited storage capacity. We propose an energy management scheme where, as a first step, the various components of the system (sources, storage, loads, and converters) are characterized and modelled in a tractable way, using state-of-the-art tools (Dymola, with subsequent FMU export for further use in Matlab/Python). We then develop a reliable optimization-based control framework capable of handling the power flow within the hydrogen-based energy system under different scenarios. Simulations validate the proposed approach.
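A minimal sketch of the kind of power-flow optimization involved, written with cvxpy; the horizon, efficiencies, ratings and objective below are simplified placeholders, not the Dymola/FMU component models used in the talk.

```python
import cvxpy as cp
import numpy as np

T = 24                                   # hypothetical hourly horizon
pv = np.random.uniform(0, 5, T)          # forecast PV production [kW]
load = np.random.uniform(1, 4, T)        # forecast electrical load [kW]

p_ely = cp.Variable(T, nonneg=True)      # power drawn by the electrolyser [kW]
p_fc = cp.Variable(T, nonneg=True)       # power delivered by the fuel cell [kW]
grid = cp.Variable(T)                    # grid import (+) / export (-) [kW]
h2 = cp.Variable(T + 1, nonneg=True)     # hydrogen storage level [kWh equivalent]

eta_ely, eta_fc, h2_max = 0.6, 0.5, 50.0   # placeholder efficiencies and capacity

constraints = [h2[0] == 10.0, h2 <= h2_max]
for t in range(T):
    constraints += [
        pv[t] + grid[t] + p_fc[t] == load[t] + p_ely[t],              # power balance
        h2[t + 1] == h2[t] + eta_ely * p_ely[t] - p_fc[t] / eta_fc,   # storage dynamics
        p_ely[t] <= 3.0, p_fc[t] <= 3.0,                              # converter ratings
    ]

# Minimize grid exchanges (a stand-in for the real economic/ageing objective)
problem = cp.Problem(cp.Minimize(cp.sum_squares(grid)), constraints)
problem.solve()
print("optimal cost:", problem.value)
```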

15:50
Surrogate Component Approach for Self-Consumption Management

ABSTRACT. Multi-level decision models usually involve several players, tied together by hierarchical or collaborative links. They aim either at providing scenarios that would be best if all players accepted a common rule (centralized paradigm), or at searching for a compromise (collaborative paradigm). Standard approaches involve decomposition schemes, either hierarchical (Benders, Stackelberg, ...) or transversal (Lagrangian), but a trend, boosted by the rise of machine learning technology, is to replace some levels by surrogate constraints or estimators. We apply this idea to the joint management of local photovoltaic energy production and its consumption by a fleet of electric vehicles.

15:20-16:50 Session 15F: Invited session: New developments in optimal control theory, part II
Location: Auditorium
15:20
State constraints in stochastic optimization

ABSTRACT. In this talk, we present necessary and sufficient optimality conditions for convex stochastic optimization problems subject to almost sure equality and conical constraints. We refine classical results by Rockafellar and Wets from two-stage stochastic optimization to include states belonging to the Bochner space of essentially bounded random variables with images in a reflexive and separable Banach space. Under certain conditions, the optimality conditions given are necessary and sufficient. Lagrange multipliers exhibit different regularity depending on whether or not the assumption of relatively complete recourse is satisfied. We propose a Moreau--Yosida regularization for such problems and show consistency of the optimality conditions for the regularization problem as the regularization parameter is taken to infinity. Algorithmic approaches using stochastic approximation are discussed as well as possible applications in PDE-constrained optimization under uncertainty.
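For reference, one common form of the Moreau--Yosida regularization of a constraint $g(x) \in K$ (the talk's exact parametrization may differ) replaces the hard constraint by a quadratic penalty with parameter $\gamma > 0$:

$$ \min_{x}\ J(x) + \frac{\gamma}{2}\,\operatorname{dist}_{K}\!\big(g(x)\big)^{2}, \qquad \operatorname{dist}_{K}(z) = \min_{k \in K}\, \|z - k\|, $$

and the optimality conditions of these regularized problems are then studied as $\gamma \to \infty$.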

15:50
A turnpike property for an optimal control problem with chance constraints
PRESENTER: René Henrion

ABSTRACT. The turnpike phenomenon is well known in mathematical economics and in optimal control with PDE constraints. It roughly states that, over a large time interval and close to its middle, the dynamic optimal state/control is close to the static optimal state/control (obtained by setting time derivatives to zero and leaving initial and terminal conditions free in the optimization problem). In this talk we consider a simple time-discrete linear optimal control problem, but with the peculiarity of adding a joint probabilistic (or chance) constraint to the system. The model may be understood in the context of reservoir management (water, gas) with uncertain inflow and controlled release. More precisely, an initial filling level of the reservoir has to be steered in expectation to a given terminal level while staying, at all times and with high probability, in a desired region. The objective function consists of a tracking-type term for the expected values and a control cost. We provide sufficient conditions under which the optimal expected trajectories of the filling level remain close to a trajectory that can be characterized as the solution of an optimal control problem without prescribed initial and terminal conditions. In this way we contribute to the study of the turnpike phenomenon from a new perspective.
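In symbols, with filling level $x_t$, safe region $[\underline{x}, \overline{x}]$ and probability level $p \in (0,1)$, a joint chance constraint of the type considered reads (a generic form, the precise model being given in the talk):

$$ \mathbb{P}\big(\, \underline{x} \le x_t \le \overline{x}, \ \ t = 1, \dots, T \,\big) \;\ge\; p. $$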

16:20
A Maximum Principle for some Chance-Constrained Control Problems
PRESENTER: Hasnaa Zidani

ABSTRACT. Many physical or engineering systems are usually described by complex models including inherent uncertainties related to the evolution of the system or to the environment in which this evolution takes place. This is the case for example in finance or in energy management where uncertainties about the price of commodities, demand or supply must be taken into account in the mathematical formulation. In this talk, we will consider a class of control problems with chance constraints.

The notion of constraints in probability (or chance constraints) constitutes a major axis that aims at analyzing and computing optimal solutions that are robust, in the sense of probability, with respect to the model's uncertainties. This notion differs from the "worst-case" approach in that it seeks a solution that is robust against the uncertainties for a reasonable probability level, whereas the worst-case problem, which seeks a solution robust against all uncertainties, may admit no feasible solution (and even when an admissible strategy exists, it is generally too pessimistic).

In deterministic control problems, it is known that the optimality conditions can be described by the so-called Pontryagin maximum principle. We will show in this talk that a maximum principle can also be obtained for a large class of chance-constrained control problems governed by some differential equations.

15:20-16:50 Session 15G: Contributed session: Mean-field models
Location: A1.133
15:20
A mean field model for the interactions between firms on the markets of their inputs
PRESENTER: Quentin Petit

ABSTRACT. We consider an economy made of competing firms which are heterogeneous in their capital and use several inputs for producing goods. Their consumption policy is fixed rationally by maximizing a utility and their capital cannot fall below a given threshold (state constraint). We aim at modeling the interactions between firms on the markets of the different inputs on the long term. The stationary equilibria are described by a system of coupled non-linear differential equations: a Hamilton-Jacobi equation describing the optimal control problem of a single atomistic firm; a continuity equation describing the distribution of the individual state variable (the capital) in the population of firms; the equilibria on the markets of the production factors. We prove the existence of equilibria under suitable assumptions.
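Schematically, such a stationary equilibrium couples the value function $u$ of a single firm with capital $k$, the stationary distribution $m$ of capital across firms, and the input prices $p$; a generic form of the system (not the talk's exact one) is

$$
\begin{cases}
\rho\, u(k) = \max_{c}\ \big\{\, U(c) + u'(k)\, g(k, c; p) \,\big\} & \text{(Hamilton--Jacobi equation)}\\[2pt]
\dfrac{d}{dk}\Big( m(k)\, g\big(k, c^{*}(k); p\big) \Big) = 0, \quad \displaystyle\int m = 1 & \text{(continuity equation)}\\[2pt]
\displaystyle\int \mathrm{demand}\big(k, c^{*}(k); p\big)\, m(k)\, dk = \mathrm{supply}(p) & \text{(market equilibria)}
\end{cases}
$$

where $c^{*}$ denotes the optimal consumption policy from the first equation.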

15:50
A mean field control approach for smart charging

ABSTRACT. The increase in the number of electric vehicles (EVs for short) raises challenges in managing the production/consumption equilibrium of the electrical network. To avoid congestion effects, the charging of large fleets of EVs must be optimized. In this talk we present a mathematical model to manage the consumption of such a fleet, using a mean-field limit approximation. The existence of a solution to the associated mean field control problem is shown, as well as optimality conditions obtained via Fenchel-Rockafellar duality. Our first result is the characterization of optimal solutions through a system of two coupled PDEs, together with the regularity of the optimal control. This system is composed of a continuity equation and a Hamilton-Jacobi equation, similarly to the systems obtained in Mean Field Games (MFG). In a second part, we compare the mean field control problem to an optimization problem with a finite number of EVs. We obtain the convergence of the value and of the solution of the finite-population problem as the number of EVs tends to infinity, thanks to the regularity of the optimal controls obtained in the first part and to a superposition principle adapted to this context.

16:20
NOMA-based Power Control for Machine-Type Communications: A Mean Field Game Approach
PRESENTER: Amani Benamor

ABSTRACT. Attracted by the advantages of Non-Orthogonal Multiple Access (NOMA) in accommodating multiple users within the same resource, this paper investigates the power allocation problem for Machine Type Devices (MTDs) in a Hybrid NOMA system. Particularly, we consider a densely deployed network in which the devices are divided into orthogonal coalitions. Firstly, the power control problem is modeled as a differential game. Then, we formulate the proposed game as a Mean Field Game (MFG) in order to handle massive Internet of Things (IoT) access scenarios. Furthermore, we design an iterative algorithm that paves the way for distributed control in which the devices can appropriately regulate their transmit power in response to brief information received from the Base Station (BS). The analysis of the proposed approach is conducted through coupled equations, namely the Hamilton-Jacobi-Bellman (HJB) and the Fokker-Planck-Kolmogorov (FPK). Our simulation results prove the convergence of the proposed power control strategy and spotlight the robustness of our formulated MFG.

15:20-16:50 Session 15H: Contributed session: Optimization and statistics
Location: A1.134
15:20
How Principal Component Analysis can improve Bayesian Optimization at High Dimensionality

ABSTRACT. Bayesian optimization (BO) is not well suited for high-dimensional problems. Several recent works aim to address this issue; however, they all have limitations, such as the assumption of an intrinsic lower dimensionality, the underestimation of correlation effects between variables, or the dependence on prior knowledge about a particular use case.

In this presentation, I introduce two algorithms that address the problem of applying BO to high-dimensional search spaces: PCA-BO and KPCA-BO. They aim to improve the scalability of BO by hybridizing it with principal component analysis (PCA), i.e., they perform an adaptive linear (PCA-BO) or a non-linear (KPCA-BO) feature mapping to a low-dimensional space in which they run the standard BO algorithm. The performance of (K)PCA-BO is compared with BO, CMA-ES, and other BO-based algorithms in terms of empirical convergence rate on the multimodal problems of the COCO/BBOB benchmark suite. I show that while both PCA-BO and KPCA-BO tend to stagnate as the number of evaluations increases, they perform very well on small budgets. Given the important role of BO in many applications, e.g., automated machine learning and mechanical design, I hope to discuss ideas for further improvements with PGMODAYS participants.

This presentation is based on joint work with Kirill Antonov and Hao Wang from Leiden University, The Netherlands, Maria Laura Santoni, from University of Camerino, Italy, and Carola Doerr from Sorbonne University, France.
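A minimal sketch of the linear PCA-BO idea, assuming scikit-learn; the rank-based weighting of points, the acquisition function and the budgets used in the actual algorithm are simplified here into placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

def pca_bo(objective, dim, n_init=10, n_iter=40, n_components=2, seed=0):
    """Simplified PCA-assisted Bayesian optimization in [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n_init, dim))               # initial design
    y = np.array([objective(x) for x in X])

    for _ in range(n_iter):
        # 1. Adaptive linear feature map: PCA fitted on the evaluated points
        pca = PCA(n_components=n_components).fit(X)
        Z = pca.transform(X)

        # 2. Standard BO surrogate in the reduced space
        gp = GaussianProcessRegressor(normalize_y=True).fit(Z, y)

        # 3. Cheap acquisition: lower confidence bound over random candidates
        cand = pca.transform(rng.uniform(size=(2048, dim)))
        mu, sigma = gp.predict(cand, return_std=True)
        z_next = cand[np.argmin(mu - 2.0 * sigma)]

        # 4. Map back to the original space, clip to the box, and evaluate
        x_next = np.clip(pca.inverse_transform(z_next.reshape(1, -1))[0], 0.0, 1.0)
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))

    return X[np.argmin(y)], y.min()

# Example: a 20-dimensional sphere function
best_x, best_y = pca_bo(lambda x: float(np.sum((x - 0.3) ** 2)), dim=20)
print(best_y)
```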

15:50
Constrained Smoothing and Out-of-range Prediction by Means of a Mathematical Optimization Approach
PRESENTER: Vanesa Guerrero

ABSTRACT. The complexity of the data generated at present in many diverse areas makes necessary the development of innovative methodologies which are able to incorporate human knowledge to, for instance, enhance interpretability and avoid misleading out-of-range predictions. In this work, we address the problem of estimating smooth functions in a regression setting, which are constrained to satisfy requirements about their sign and/or shape.

We assume that the smooth function to be estimated is defined through a reduced-rank basis (B-splines) and fitted via a penalized splines approach (P-splines [1]). When dealing with simple regression, necessary and sufficient conditions on the sign of the smooth curve are embedded into the fitting procedure as hard constraints, yielding a conic optimization model. This characterization can be adapted to require sign conditions on higher order derivatives of the curve, meaning that our methodology can deal with requirements concerning the monotonicity and the curvature of the curve. Our approach for multiple regression arises as a non-trivial extension of the one-dimensional shape constrained framework, imposing the constraints over a finite set of curves which belong to the hypersurface. Furthermore, previous results are generalized for the first time to out-of-sample prediction, either forward or backward.
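In its simplest univariate, unconstrained form, the P-spline fit underlying the approach solves, for a B-spline basis matrix $B$, a $q$-th order difference matrix $D_q$ and smoothing parameter $\lambda > 0$,

$$ \min_{\alpha}\ \|y - B\alpha\|_{2}^{2} + \lambda\, \|D_{q}\alpha\|_{2}^{2}, $$

and the sign and shape requirements (non-negativity, monotonicity, curvature of the fitted curve or of its derivatives on the whole domain) are then added as hard constraints on the fitted function, which is what leads to the conic optimization models mentioned above.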

We compare our approaches with other state-of-the-art algorithms on both synthetic and real-world data sets. Furthermore, an open-source Python library, cpsplines, is publicly available; it contains the implementations of all the methodologies proposed in this work [2].

[1] P. H. Eilers and B. D. Marx, Flexible smoothing with B-splines and penalties, Statistical Science, 11(2):89-121, 1996.

[2] M. Navarro-García, cpsplines, https://github.com/ManuelNavarroGarcia/cpsplines, 2022.

16:20
Subset Sampling for Low-Discrepancy Point Sets

ABSTRACT. Discrepancy measures are metrics designed to quantify how well a point set is spread in a given space. Among the different measures, the star discrepancy is the most important, measuring the largest deviation between the volume of any box [0,q), q smaller than 1, and the proportion of points that are inside this box.

Point sets of low discrepancy are used in many contexts, including Quasi-Monte Carlo integration, financial mathematics, computer vision and simulation-based optimization. Low-discrepancy constructions have been extensively studied asymptotically, such as Sobol' and Hammersley sequences, which are known to have low discrepancy values when the number of points tends to infinity [3]. However, in practice, applications requiring low-discrepancy point sets will only use a finite, much smaller number of points.

In this talk, we introduce the Star Discrepancy Subset Selection Problem [1], which consists in choosing a subset P_m from a set of n points, m smaller than n, such that the star discrepancy of P_m is minimized. The aim of this approach is to provide point sets better tailored to practical applications. Given that computing the star discrepancy is W[1]-hard [2], it is not surprising that we are able to show that this problem is NP-hard. We provide two exact algorithms, based respectively on Mixed Integer Linear Programming and Branch-and-Bound, to tackle this problem. For dimensions two and three and n not too large, our algorithms provide point sets of much smaller star discrepancy than point sets taken directly from usual low-discrepancy sequences. We also extend this approach to higher dimensions with heuristics, providing point sets of better discrepancy for all dimensions for which the star discrepancy can be computed, and answering open questions on the inverse star discrepancy [4].
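For intuition, the brute-force sketch below computes a lower bound on the star discrepancy of a small two-dimensional point set by evaluating the local discrepancy at boxes anchored at the points' coordinates; an exact computation has to handle open/closed boxes more carefully and becomes much harder in higher dimension.

```python
import numpy as np

def star_discrepancy_lower_bound(points):
    """Lower bound on the L-infinity star discrepancy of points in [0,1)^2.

    Evaluates |vol([0,q)) - fraction of points in [0,q)| at all boxes whose
    upper corner q lies on the grid spanned by the point coordinates (and 1).
    """
    pts = np.asarray(points)
    n = len(pts)
    xs = np.unique(np.append(pts[:, 0], 1.0))
    ys = np.unique(np.append(pts[:, 1], 1.0))
    worst = 0.0
    for qx in xs:
        for qy in ys:
            volume = qx * qy
            open_frac = np.sum((pts[:, 0] < qx) & (pts[:, 1] < qy)) / n
            closed_frac = np.sum((pts[:, 0] <= qx) & (pts[:, 1] <= qy)) / n
            worst = max(worst, abs(volume - open_frac), closed_frac - volume)
    return worst

rng = np.random.default_rng(0)
print(star_discrepancy_lower_bound(rng.random((64, 2))))
```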

[1] François Clément, Carola Doerr, Luis Paquete, Star discrepancy subset selection: Problem formulation and efficient approaches for low dimensions, Journal of Complexity, Volume 70, 2022.

[2] Panos Giannopoulos, Christian Knauer, Magnus Wahlström, Daniel Werner, Hardness of discrepancy computation and epsilon-net verification in high dimension, Journal of Complexity, Volume 28, Issue 2, 2012, Pages 162-176.

[3] Jiri Matousek, Geometric Discrepancy, 2nd ed., Springer, Berlin, 2009.

[4] Erich Novak, Henryk Wozniakowski, Tractability of Multivariate Problems, Volume 2, Eur. Math. Soc. Publ. House, 2010.

15:20-16:20 Session 15I: Contributed session: Observation problems
Location: A1.139
15:20
Non-uniform Observability for Moving Horizon Estimation and stability with respect to additive perturbation
PRESENTER: Emilien Flayac

ABSTRACT. When tackling nonlinear dynamical estimation problems using the machinery of mathematical optimisation, a well-known computationally tractable method is to define a cost on past inputs and outputs over a moving horizon and to estimate the associated state trajectory by minimising that cost over state trajectories. The estimator is then built from the resulting optimal solution. This leads to Moving Horizon Estimation (MHE). In the classical literature on the robust stability of MHE methods, observability assumptions are typically stated independently of any control input, and a global solution of a possibly nonconvex program is assumed to be available. The former assumption is very strong and ignores necessary refinements coming from the notion of persistent inputs in nonlinear observability theory. The latter assumption is impractical, as usually only local solutions can be obtained numerically; on the other hand, working with local solutions allows one to weaken the notion of observability considered. Thus, this paper formalises the concepts of weakly and weakly regularly persistent input trajectories, as well as their link to the observability Gramian and to the existence and local uniqueness of local solutions of MHE problems. Additionally, thanks to a new time-uniform Implicit Function Theorem, these notions are proved to imply the stability of MHE solutions with respect to small additive perturbations in the measurements, both uniformly and non-uniformly in time. Finally, examples of weakly regularly persistent input trajectories are given in the case of 2D bearing-only navigation.
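A bare-bones illustration of the MHE mechanism discussed, on a hypothetical scalar system and using scipy's local optimizer; the observability and perturbation analysis of the talk is precisely about when such a locally obtained solution is meaningful.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar system: x_{k+1} = 0.9 x_k + u_k + w_k,  y_k = x_k^3 + v_k
def f(x, u):
    return 0.9 * x + u

def h(x):
    return x ** 3

def mhe_estimate(y, u, x_prior, horizon=10, q=1.0, r=1.0):
    """Estimate the state trajectory on the last `horizon` steps by minimizing
    a weighted sum of measurement residuals and process-noise magnitudes."""
    y, u = y[-horizon:], u[-horizon:]

    def cost(z):
        x0, w = z[0], z[1:]
        x, J = x0, q * (x0 - x_prior) ** 2      # prior on the horizon's initial state
        for k in range(horizon):
            J += r * (y[k] - h(x)) ** 2         # output residual
            J += q * w[k] ** 2                  # process-noise penalty
            x = f(x, u[k]) + w[k]
        return J

    z0 = np.zeros(horizon + 1)
    z0[0] = x_prior
    sol = minimize(cost, z0)                    # local solution only, as in practice

    # Reconstruct the estimated trajectory from the optimal initial state and noises
    x_hat = [sol.x[0]]
    for k in range(horizon):
        x_hat.append(f(x_hat[-1], u[k]) + sol.x[1 + k])
    return np.array(x_hat)

# Toy usage on simulated data
rng = np.random.default_rng(0)
x, ys, us = 1.0, [], []
for _ in range(30):
    x = f(x, 0.1) + 0.05 * rng.standard_normal()
    ys.append(h(x) + 0.1 * rng.standard_normal())
    us.append(0.1)
print(mhe_estimate(np.array(ys), np.array(us), x_prior=1.0)[-1])
```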

15:50
A multi-objective exposure-oriented sensor location problem for border security in 3D
PRESENTER: Stephane Malindi

ABSTRACT. In a recent work [1], Lessin et al. proposed a bilevel algorithm to optimize the placement of heterogeneous sensors so as to maximize the minimum exposure of an intruder's penetration path through a defended region. In [2], Lessin et al. extended the previous problem to consider the relocation of sensors after an initial disruptive action of the intruder, in order to minimize the maximum sensor relocation time and the total number of sensors requiring relocation. The defended region is modelled by a two-dimensional hexagonal mesh. In this talk, we expand on Lessin et al.'s work by taking into account the altitude of the terrain, checking for lines of sight (LoS) between the intruder and the sensors. This upgrade is computationally very expensive, and proper care was necessary to keep the computation time within reasonable bounds. We also propose a variation of the initial formulation that minimizes the number of sensors required to reach a desired exposure of the intruder. We present and discuss preliminary results showing the performance of these modifications.
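A simplified version of such a line-of-sight test could look as follows (nearest-cell terrain lookup on a regular altitude grid; the actual implementation and the optimizations needed to keep it tractable are discussed in the talk).

```python
import numpy as np

def has_line_of_sight(terrain, p_sensor, p_intruder, n_samples=200):
    """Check whether the straight segment between two 3D points stays above
    the terrain, given as a 2D array of altitudes on a unit grid.

    p_sensor, p_intruder: (row, col, altitude) triples in grid coordinates.
    """
    p0, p1 = np.asarray(p_sensor, float), np.asarray(p_intruder, float)
    for t in np.linspace(0.0, 1.0, n_samples):
        x = (1.0 - t) * p0 + t * p1                          # point on the sight line
        i = int(np.clip(np.rint(x[0]), 0, terrain.shape[0] - 1))
        j = int(np.clip(np.rint(x[1]), 0, terrain.shape[1] - 1))
        if x[2] < terrain[i, j]:                             # the terrain blocks the line
            return False
    return True

terrain = np.zeros((50, 50))
terrain[20:25, :] = 10.0                                     # a ridge across the map
print(has_line_of_sight(terrain, (0, 0, 2.0), (49, 49, 2.0)))   # False: blocked by the ridge
```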

References
[1] Lessin, Aaron M., Lunday, Brian J., and Hill, Raymond R., A bilevel exposure-oriented sensor location problem for border security, Computers & Operations Research, 2022.
[2] Lessin, Aaron M., Lunday, Brian J., and Hill, Raymond R., A multi-objective, bilevel sensor relocation problem for border security, IISE Transactions, 57(6):1309-1327, 2020.

15:20-16:20 Session 15J: Contributed session: Learning and classifying
Location: A1.140
15:20
Learning with sign-dependent combinatorial optimization layers: a multiplicative perturbation approach
PRESENTER: Louis Bouvier

ABSTRACT. Combinatorial optimization (CO) layers in machine learning (ML) pipelines are a powerful tool to tackle data-driven decision tasks. Nonetheless, the solution of a CO problem often behaves as a piecewise constant function of its objective parameters. Since ML pipelines are typically trained using stochastic gradient descent, the absence of slope information is very detrimental. Berthet et al. propose a framework to address this difficulty: they introduce an additive perturbation to differentiate through the argmax. However, this perturbation does not prevent the sign of the cost vector from changing, which can be problematic; for instance, Dijkstra's algorithm only works with non-negative costs. In a recent work, we derive a multiplicative perturbation, which allows us to learn with sign-dependent combinatorial optimizers. We do so in both learning-by-imitation and learning-by-experience settings. We emphasize the performance and utility of our method on the task of learning shortest paths on Warcraft terrain images, a benchmark problem introduced by Vlastelica et al. To the best of our knowledge, this is the first time a learning-by-experience pipeline has been used for this application.
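Schematically, where the additive approach of Berthet et al. smooths the piecewise-constant map from costs to solutions by averaging argmax(theta + eps * Z) over Gaussian noise Z, a multiplicative perturbation keeps the sign of each cost component. A toy Monte-Carlo version (an illustration of the idea, not the exact estimator of the paper) could read:

```python
import numpy as np

def perturbed_layer(theta, maximizer, epsilon=1.0, n_samples=100, seed=0):
    """Monte-Carlo smoothed CO layer.

    maximizer: function mapping a cost vector to an optimal (discrete) solution.
    Additive:       average of maximizer(theta + eps * Z)                (may flip signs)
    Multiplicative: average of maximizer(theta * exp(eps*Z - eps^2/2))   (sign-preserving)
    """
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_samples, theta.size))
    additive = np.mean([maximizer(theta + epsilon * z) for z in Z], axis=0)
    multiplicative = np.mean(
        [maximizer(theta * np.exp(epsilon * z - epsilon ** 2 / 2)) for z in Z], axis=0
    )
    return additive, multiplicative

# Toy maximizer: pick the single best entry (one-hot solution)
def top1(costs):
    y = np.zeros_like(costs)
    y[np.argmax(costs)] = 1.0
    return y

print(perturbed_layer(np.array([0.2, 1.0, 0.5]), top1))
```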

15:50
Monotonic Ensemble Classifiers for Biomedical Applications
PRESENTER: Océane Fourquet

ABSTRACT. The advent and use of RNA sequencing have led to significant developments in computational methods for the diagnosis and classification of cancers and other serious diseases. One of the challenges is the interpretability of the models: many common models perform well but are difficult to analyze in terms of biological reality. Monotonic models on feature pairs [1] can help address this issue. Our approach originally consisted of testing all pairs of genes and selecting the best ones to build an ensemble model [2]. We introduce a preselection scheme that drastically reduces the number of feature pairs that need to be considered, with a provable guarantee of returning the same result as the original approach run on the full set of feature pairs.
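As a rough illustration of the feature-pair paradigm (a simplified "top-scoring pairs"-style stand-in, not the monotonic models or the preselection scheme of the talk), one can score every gene pair by how well the ordering of the two expression values separates the classes, and take a majority vote over the best pairs.

```python
import numpy as np
from itertools import combinations

def fit_pairs(X, y, n_pairs=10):
    """Rank gene pairs (i, j) by how often sign(X[:, i] > X[:, j]) agrees with
    the binary labels y in {0, 1}, allowing either orientation of the pair."""
    scores = []
    for i, j in combinations(range(X.shape[1]), 2):
        vote = (X[:, i] > X[:, j]).astype(int)
        agree = float(np.mean(vote == y))
        scores.append((max(agree, 1.0 - agree), i, j, agree >= 0.5))
    scores.sort(reverse=True)
    return scores[:n_pairs]

def predict_pairs(X, pairs):
    """Majority vote over the selected feature pairs."""
    votes = []
    for _, i, j, keep_orientation in pairs:
        v = (X[:, i] > X[:, j]).astype(int)
        votes.append(v if keep_orientation else 1 - v)
    return (np.mean(votes, axis=0) >= 0.5).astype(int)
```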

[1] Q. Stout, Isotonic Regression via Partitioning, Algorithmica, 66, 2013. doi:10.1007/s00453-012-9628-4

[2] Nikolayeva I, Bost P, Casademont I, Duong V, Koeth F, Prot M, Czerwinska U, Ly S, Bleakley K, Cantaert T, Dussart P, Buchy P, Simon-Lorière E, Sakuntabhai A, Schwikowski B. A Blood RNA Signature Detecting Severe Disease in Young Dengue Patients at Hospital Arrival. The Journal of Infectious Diseases, 217(11):1690-1698, 2018. doi:10.1093/infdis/jiy086