PGMODAYS 2024
PROGRAM FOR WEDNESDAY, NOVEMBER 20TH

08:30-09:00 Coffee Break
09:10-10:00 Session 10: Plenary III: Enrique Zuazua
09:10
Control and Machine Learning

ABSTRACT. Systems control, or cybernetics, a term first coined by Ampère and later popularized by Norbert Wiener, refers to the science of control and communication in animals and machines. The pursuit of this field dates back to antiquity, driven by the desire to create machines that autonomously perform human tasks, thereby enhancing freedom and efficiency.

The objectives of control systems closely parallel those of modern Artificial Intelligence (AI), illustrating both the profound unity within Mathematics and its extraordinary capacity to describe natural phenomena and drive technological innovation.

In this lecture, we will explore the connections between these mathematical disciplines and their broader implications. We will also discuss our recent work addressing two fundamental questions: Why does Machine Learning perform so effectively? And how can data-driven insights be integrated into the classical applied mathematics framework, particularly in the context of Partial Differential Equations (PDE) and numerical methods?

10:00-10:30 Coffee Break
10:30-12:30 Session 11A: Black-box and Global optimization
10:30
Runtime Analysis of the (μ + 1) GA: Provable Speed-Ups from Strong Drift towards Diverse Populations
PRESENTER: Martin S. Krejca

ABSTRACT. Most evolutionary algorithms used in practice heavily employ crossover. In contrast, the rigorous understanding of how crossover is beneficial is largely lagging behind. In this work, we make a considerable step forward by analyzing the population dynamics of the (μ + 1) genetic algorithm when optimizing the Jump benchmark. We observe (and prove via mathematical means) that once the population contains two different individuals on the local optimum, the diversity in the population increases in expectation. From this drift towards more diverse states, we show that a diversity suitable for crossover to be effective is reached quickly and, more importantly, then persists for a time that is at least exponential in the population size μ. This drastically improves over the previously best known guarantee, which is only quadratic in μ.

Our new understanding of the population dynamics easily gives stronger performance guarantees. In particular, we derive that population sizes logarithmic in the problem size n already suffice to gain an Ω(n)-factor runtime improvement from crossover (previous works achieved comparable bounds only with μ = Θ(n) or via a non-standard mutation rate).
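For readers less familiar with the objects analyzed above, the sketch below shows a textbook (μ + 1) genetic algorithm loop together with the standard Jump_k benchmark; it is a minimal illustration with assumed parameter choices (uniform crossover, standard bit mutation), not the exact setup of the paper.

import random

def jump(x, k):
    """Standard Jump_k benchmark on a bit string x of length n."""
    n, ones = len(x), sum(x)
    return k + ones if (ones <= n - k or ones == n) else n - ones

def mu_plus_one_ga(n, mu, k, p_c=0.9, max_iters=100_000):
    """Textbook (mu + 1) GA: crossover with probability p_c, then standard bit mutation,
    followed by removal of a worst individual."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(max_iters):
        if random.random() < p_c:
            p1, p2 = random.sample(pop, 2)
            child = [random.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
        else:
            child = list(random.choice(pop))
        child = [b ^ (random.random() < 1.0 / n) for b in child]   # flip each bit with prob. 1/n
        pop.append(child)
        pop.remove(min(pop, key=lambda x: jump(x, k)))             # elitist survival selection
        if any(sum(x) == n for x in pop):
            break                                                  # global optimum of Jump_k found
    return pop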

11:00
Illuminating the Diversity-Fitness Trade-Off in Black-Box Optimization

ABSTRACT. In real-world applications, users often favor structurally diverse design choices over one high-quality solution. It is hence important to consider more solutions that decision-makers can compare and further explore based on additional criteria. Alongside the existing approaches of evolutionary diversity optimization, quality diversity, and multimodal optimization, this paper presents a fresh perspective on this challenge by considering the problem of identifying a fixed number of solutions with a pairwise distance above a specified threshold while maximizing their average quality.

We obtain first insight into these objectives by performing a subset selection on the search trajectories of different well-established search heuristics, whether specifically designed with diversity in mind or not. We emphasize that the main goal of our work is not to present a new algorithm but to look at the problem in a more fundamental and theoretically tractable way by asking the question: What trade-off exists between the minimum distance within batches of solutions and the average quality of their fitness? These insights also provide us with a way of making general claims concerning the properties of optimization problems that shall be useful in turn for benchmarking algorithms of the approaches enumerated above.

A possibly surprising outcome of our empirical study is the observation that naive uniform random sampling establishes a very strong baseline for our problem, hardly ever outperformed by the search trajectories of the considered heuristics. We interpret these results as a motivation to develop algorithms tailored to produce diverse solutions of high average quality.
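To make the objective studied above concrete, the following sketch greedily selects a fixed number of solutions whose pairwise distance stays above a threshold while favoring high quality; the greedy rule and all names are illustrative assumptions, not an algorithm from the paper.

import math

def select_diverse(points, quality, k, d_min):
    """Greedily pick up to k points with pairwise Euclidean distance >= d_min,
    scanning candidates from best to worst quality."""
    order = sorted(range(len(points)), key=lambda i: quality[i], reverse=True)
    chosen = []
    for i in order:
        if all(math.dist(points[i], points[j]) >= d_min for j in chosen):
            chosen.append(i)
            if len(chosen) == k:
                break
    return chosen  # average quality of the batch: sum(quality[i] for i in chosen) / len(chosen)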

11:30
Difficulties of the NSGA-II with the Many-Objective LeadingOnes Problem

ABSTRACT. Recent studies have shown that the widely used NSGA-II struggles with many-objective optimization, particularly when faced with more than two objectives. While it performs well in bi-objective problems, its efficiency decreases as the number of objectives increases. These challenges were confirmed in recent runtime analyses on the m-OneMinMax problem, where NSGA-II frequently failed to cover parts of the Pareto front.

In this work, we consider the m-LeadingOnesTrailingZeros (mLOTZ) problem, a generalization of the classical bi-objective LOTZ problem. This case is especially interesting, as its failure further highlights the inefficacy of NSGA-II in many-objective scenarios. Unlike m-OneMinMax, not every solution in mLOTZ is Pareto optimal, adding complexity. Our analysis shows that NSGA-II often fails to cover a significant portion of the Pareto front when the initial population size is small.

More particularly, achieving full Pareto front coverage requires an exponential runtime. Our theoretical results provide strong evidence of this inefficiency, both in expectation and with high probability. These findings underscore the decline in NSGA-II's performance as the number of objectives increases.

12:00
Law-Smooth Update Scheme for the Cross-Entropy Optimization Algorithm and Application

ABSTRACT. It is well known that the Cross-Entropy (CE) algorithm, based on the Gaussian distribution family, is significantly less efficient than successful methods such as CMAES or even particle-based approaches such as particle swarms. Nevertheless, the Gaussian-based CE approach implements ingredients similar to those of the CMAES approach, but it is significantly penalized by its law updating step after sample selection. Variants of the CE approach have been proposed, featuring a smoothing of the distribution parameters. We show why these approaches based on smoothing the law parameters may result in wrong convergence, and propose an approach based on smoothing the laws themselves. We implement this CE updating approach and show that it gets close to the performance of CMAES on an applicative example. Our application concerns the robust optimization of band selection for anomaly detection in multispectral remote sensing images. As it is important to take into account the variability of the observed scenes, as well as the diversity of objects likely to be encountered, we have drawn on robustness measures, such as quantiles, to quantify the variations in anomaly detection criteria over the 100 hyperspectral images of the benchmark dataset. Finally, we also explore a generalization of this CE update to laws of the exponential family.
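For background, the sketch below shows the textbook Gaussian cross-entropy step with the usual smoothing of the distribution parameters, i.e., the kind of variant the abstract argues can converge wrongly; the law-smoothing update proposed in the talk is not reproduced here, and all parameter values are illustrative.

import numpy as np

def ce_step(f, mu, sigma, n_samples=100, elite_frac=0.1, alpha=0.7):
    """One cross-entropy iteration (minimization) with smoothing of the Gaussian parameters.
    mu, sigma: current per-coordinate mean and standard deviation of the sampling law."""
    samples = np.random.normal(mu, sigma, size=(n_samples, mu.size))
    elite = samples[np.argsort([f(x) for x in samples])[:int(elite_frac * n_samples)]]
    # Parameter smoothing: blend the elite maximum-likelihood estimates with the old parameters.
    mu_new = alpha * elite.mean(axis=0) + (1 - alpha) * mu
    sigma_new = alpha * elite.std(axis=0) + (1 - alpha) * sigma
    return mu_new, sigma_new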

10:30-12:00 Session 11B: Large scale optimization
10:30
A regret minimization approach to fixed point iterations

ABSTRACT. We present a link between regret bounds and fixed point problems with nonexpansive maps. This allows the definition of many new fixed point iterations based on regret minimizing algorithms with corresponding convergence guarantees derived from the regret bounds. This approach recovers the famous Krasnoselskii-Mann iterations as corresponding to Online Gradient Descent. In particular, we transpose the celebrated AdaGrad algorithm to obtain a fixed point iteration with strong adaptive properties.
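As a reminder of the iteration named above (standard background, not a result of the talk), for a nonexpansive map $T$ the Krasnoselskii-Mann scheme averages the current iterate with its image,

$$x_{k+1} = (1 - \alpha_k)\, x_k + \alpha_k\, T(x_k), \qquad \alpha_k \in (0,1),$$

and converges to a fixed point of $T$ under standard conditions (e.g., $\sum_k \alpha_k (1 - \alpha_k) = \infty$ in a Hilbert space, provided a fixed point exists).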

11:00
Defining Lyapunov functions as the solution of a performance estimation saddle point problem

ABSTRACT. We reinterpret quadratic Lyapunov functions as solutions to a performance estimation saddle point problem. This allows us to automatically detect the existence of such a Lyapunov function and thus numerically check that a given algorithm converges. The novelty of this work is that we show how to define the saddle point problem using the PEPit software and then solve it with DSP-CVXPY.

This combination gives us a very strong modeling power because defining new points and their relations across iterates is very easy in PEPit. We can without effort define auxiliary points used for the sole purpose of designing more complex Lyapunov functions, define complex functional classes or study complex algorithms.

In a first numerical experiment, we show that when solving a convex-concave saddle point problem whose smoothed duality gap has the quadratic error bound property with the Chambolle-Pock algorithm, step sizes satisfying 1 < στ ||L||^2 < 4/3 may not lead to a faster rate of convergence than στ ||L||^2 = 1, even though they are larger.

Then, we conjecture and prove a convergence result for randomized coordinate descent which slightly improves on the one from the literature.

11:30
Nesterov momentum for convex functions with interpolation: is it faster than Stochastic gradient descent?
PRESENTER: Julien Hermant

ABSTRACT. Considering convex functions, the deterministic Nesterov accelerated gradient algorithm (NAG) significantly improves over gradient descent in terms of convergence speed. However, the literature has been rather pessimistic about the possibility of keeping this improvement when replacing the gradient by stochastic approximations, because of the sensitivity of the algorithm to errors in the gradient computation. We study the possibility of achieving accelerated convergence of the Stochastic Nesterov Accelerated Gradient algorithm (SNAG) in the favorable setting of interpolation. We start by presenting the non-accelerated convergence results that can be obtained for convex and smooth functions. Then, specializing to the case of minimizing a sum of functions, we show that the average correlation between gradients plays a key role in achieving acceleration with SNAG. Thereby, we show how the correlation between gradients affects the benefit that increasing the batch size has on the convergence speed.
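For reference, the deterministic scheme discussed above can be written in one standard form for an $L$-smooth convex $f$ (the stochastic version replaces $\nabla f$ by an unbiased estimate):

$$x_{k+1} = y_k - \tfrac{1}{L}\nabla f(y_k), \qquad y_{k+1} = x_{k+1} + \beta_k\,(x_{k+1} - x_k),$$

where $\beta_k \uparrow 1$ is the momentum sequence (e.g., $\beta_k = k/(k+3)$); the question addressed in the talk is under which interpolation and gradient-correlation conditions the stochastic version retains an accelerated rate.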

10:30-12:00 Session 11C: Semidefinite programming and applications
10:30
Compact Knapsack: a Semidefinite Approach

ABSTRACT. The min-knapsack with compactness constraint problem (min-KCP) is a variant of the classical knapsack problem that captures applications in statistics. Given items i ∈ [[n]] with cost ci and weight wi, the goal is to select objects S ⊆ [[n]] minimizing the cost of the selection, while ensuring that the total weight exceeds a given constant q, in such a way that the selection is compact: for a given Δ ∈ N, the distance separating two consecutive selected objects i, j ∈ S must not exceed Δ. In this work, we propose a semidefinite formulation of this problem, and explain how to strengthen the basic semidefinite relaxation in order to get better bounds. Finally, we will show how the semidefinite model compares with the classical cuts known for the knapsack problem, such as cover inequalities.
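Assembling the data introduced above, one common integer-programming statement of min-KCP (given here only to fix ideas; the talk works with a semidefinite model built on this combinatorial description) is

$$\min_{x \in \{0,1\}^n} \sum_{i=1}^{n} c_i x_i \quad \text{s.t.} \quad \sum_{i=1}^{n} w_i x_i \ge q, \qquad x_i + x_j - \sum_{i < l < j} x_l \le 1 \quad \forall\, i < j,\ j - i > \Delta,$$

where the last family of constraints forbids selecting two items more than $\Delta$ apart with nothing selected in between.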

11:00
Minimal sparsity for scalable moment-SOS relaxations of the AC-OPF problem
PRESENTER: Caio Luke

ABSTRACT. The AC-OPF (Alternating Current Optimal Power Flow) problem aims at minimizing the operating costs of a power grid under physical constraints on voltages and power injections. The problem is solved daily by electrical grid stakeholders, usually at a very large scale. Its mathematical formulation results in a large nonconvex polynomial optimization problem. It is hard to solve optimally in general, but it can be tackled by a sequence of SDP (Semidefinite Programming) relaxations corresponding to the steps of the moment-SOS (Sums-Of-Squares) hierarchy. Unfortunately, the size of these SDPs grows drastically in the hierarchy, so that even second-order relaxations exploiting the correlative sparsity pattern of AC-OPF are hardly numerically tractable for real-life instances with thousands of power buses. Our contribution lies in a new sparsity framework, termed minimal sparsity, inspired by the specific structure of power flow equations. Despite its heuristic nature, numerical examples show that minimal sparsity allows the computation of highly accurate second-order moment-SOS relaxations of AC-OPF, while requiring far less computing time and memory resources than the standard correlative sparsity pattern. Thus, we manage to compute second-order relaxations on test cases with about 6000 power buses, which we believe to be unprecedented.

11:30
A regularized interior-point method for optimization problems with complementarity constraints

ABSTRACT. We present a regularized primal-dual interior-point method (IPM), based on an Augmented Lagrangian penalty. It is known that the regularization term stabilizes the IPM algorithm, allowing convergence on degenerate optimization problems. In this talk, we aim at solving nonlinear programs with complementarity constraints (MPCC). We revisit recent results that analyze the global convergence of the Augmented Lagrangian method for MPCC problems, and we detail how they apply to our regularized IPM algorithm. Furthermore, the regularized problem exhibits a specific algebraic structure that we can leverage in the computation of the Newton step. As a result, we obtain a competitive algorithm, which we compare with a state-of-the-art solution method. We demonstrate the capability of the algorithm on large-scale security-constrained optimal power flow (SCOPF) problems, which are naturally formulated with complementarity constraints.

10:30-12:30 Session 11D: Invited sessions: Mean-Field Games, Part I
10:30
Equilibrium in Functional Stochastic Games with Mean-Field Interaction

ABSTRACT. We consider a general class of finite-player stochastic games with mean-field interaction, in which the linear-quadratic cost functional includes linear operators acting on controls in L2. We propose a novel approach for deriving the Nash equilibrium of the game explicitly in terms of operator resolvents, by reducing the associated first order conditions to a system of stochastic Fredholm equations of the second kind and deriving their closed form solution. Furthermore, by proving stability results for the system of stochastic Fredholm equations we derive the convergence of the equilibrium of the N-player game to the corresponding mean-field limit. As a by-product we also derive an epsilon-Nash equilibrium, which is valuable in this setting as we show that the conditions for existence of an equilibrium in the mean-field limit are less restrictive than in the finite-player game. Finally we apply our general framework to solve various examples, such as stochastic Volterra linear-quadratic games, models of systemic risk and advertising with delay, and optimal liquidation games with transient price impact. Based on joint work with Eyal Neuman and Moritz Voss.

11:00
A Stationary Mean-Field Equilibrium Model of Irreversible Investment in a Two-Regime Economy
PRESENTER: René Aïd

ABSTRACT. We consider a mean-field model of firms competing à la Cournot on a commodity market, where the commodity price is given in terms of a power inverse demand function of the industry-aggregate production. Investment is irreversible and production capacity depreciates at a constant rate. Production is subject to Gaussian productivity shocks, while large non-anticipated macroeconomic events driven by a two-state continuous-time Markov chain can change the volatility of the shocks, as well as the price function. Firms wish to maximize expected discounted revenues of production, net of investment and operational costs. Investment decisions are based on the long-run stationary price of the commodity. We prove existence, uniqueness and characterization of the stationary mean-field equilibrium of the model. The equilibrium investment strategy is of barrier-type and it is triggered by a couple of endogenously determined investment thresholds, one per state of the economy. We provide a quasi-closed form expression of the stationary density of the state and we show that our model can produce a Pareto distribution of firm sizes. This is a feature that is consistent both with observations at the aggregate level of industries and at the level of a particular industry. We provide evidence that persistent periods of economic downturn increase market concentration. We demonstrate that firms with slowly depreciating production capacities are better off in a stable, average economy, whereas firms with quickly depreciating assets can benefit from sequences of boom and bust.

11:30
Mean Field Games in a Stackelberg problem with an informed major player

ABSTRACT. We investigate a stochastic differential game in which a major player has private information (the knowledge of a random variable), which she discloses through her control to a population of small players playing in a Nash Mean Field Game equilibrium. The major player's cost depends on the distribution of the population, while the cost of the population depends on the random variable known by the major player. We show that the game has a relaxed solution and that the optimal control of the major player is approximately optimal in games with a large but finite number of small players.

10:30-12:30 Session 11E: Stochastic and Robust optimization
10:30
Duality in Convex Stochastic Optimization

ABSTRACT. We present a general formulation of convex stochastic optimization problems in finite discrete time that covers and extends classical problem formats from mathematical programming, optimal stopping, optimal control and financial mathematics. We study these problems within the general functional analytic duality theory of convex optimization. This will yield a dual problem whose optimum value coincides with that of the primal problem and whose optimal solutions can be used to characterize those of the primal. We demonstrate the general results with applications to stochastic control and financial mathematics. The talk is based on a forthcoming book "Convex Stochastic Optimization".

11:00
Comparing Deterministic and Chance Constrained Models for Renewable-Powered Mini-Grids in Unreliable Grid Conditions

ABSTRACT. This talk addresses the operation of a renewable-powered mini-grid connected to an unreliable main grid in rural Africa, where energy access rates are low. The system faces four key uncertainties: solar power and load forecasting errors, and the frequency and duration of main-grid outages. These uncertainties challenge traditional power system operations. We explore three alternatives to the Joint Chance Constrained (JCC) programming approach: Individual Chance Constraint (ICC), Expected-Value Model (EVM), and a deterministic model that disregards both outages and forecasting uncertainties. The JCC model ensures a high probability of meeting local demand during outages by maintaining adequate reserves for Diesel generation and battery discharge, offering the most robust solution. In contrast, the ICC model provides less robust dispatch by ensuring reliability only at individual time steps, and the simpler EVM focuses on average values, further reducing robustness. A case study of a real mini-grid in Lake Victoria, Tanzania, demonstrates that while the JCC model increases costs slightly, it significantly improves dispatch planning for battery and Diesel reserves compared to the other models.
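In generic notation (illustrative only), writing $g_t(x,\xi) \le 0$ for the requirement that local demand is met at time $t$ under uncertainty $\xi$, the three stochastic models compared above differ as follows:

JCC: $\mathbb{P}\big(g_t(x,\xi) \le 0 \ \forall t\big) \ge 1 - \varepsilon$, \qquad ICC: $\mathbb{P}\big(g_t(x,\xi) \le 0\big) \ge 1 - \varepsilon \ \forall t$, \qquad EVM: $\mathbb{E}\big[g_t(x,\xi)\big] \le 0 \ \forall t$.

The joint constraint protects the whole horizon simultaneously, the individual constraints protect each time step in isolation, and the expected-value model only covers the average case, which matches the ordering of robustness reported above.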

11:30
An enumerative formula for the spherical cap discrepancy
PRESENTER: Rene Henrion

ABSTRACT. Among many criteria for uniformity of point sets on the unit sphere, the {\it spherical cap discrepancy} plays a prominent role due to the possibility of estimating the integration error of functions defined on the sphere. This aspect is of much importance, for instance, when dealing with chance constraints in optimization methods and using the well-established method of {\it spherical-radial decomposition}. Unfortunately, contrary to other criteria, the spherical cap discrepancy is not defined by a closed formula but rather as a supremum of local discrepancies which are discontinuous functions of their parameters. Therefore, already the sheer computation of this discrepancy is a challenge, and usually approximations or relaxations are employed. In \cite{heitsch}, a formula has been established which reduces the computation of the discrepancy to the enumeration of finitely many expressions. This allows one, for instance, to get precise numerical values as long as the dimension of the sphere is not too large (e.g., 6) and the size of the point set is moderate (100 to 1000 depending on dimension). Not surprisingly, there is an issue of complexity involved. However, we see two major potentials of the mentioned formula: first, it may serve in moderate dimensions for calibration purposes, when approximating algorithms with nicer complexity have to be evaluated on the basis of their deviation from true values. Second, and more importantly, the reduction to finitely many expressions may be exploited in order to derive analytical properties (continuity, Lipschitz continuity, Clarke subdifferential) of the spherical cap discrepancy as a function of the point set. This may be considered as a first step towards optimal quantization of the uniform distribution on the sphere. The talk illustrates the mentioned enumerative formula, provides some numerical results and addresses some structural results of the discrepancy as a function of the point set.

12:00
Finite Adaptability in Robust Optimization: Asymptotic Optimality and Tractability
PRESENTER: Anton Medvedev

ABSTRACT. Two-stage robust optimization problems address decision-making under uncertainty, where the first stage decisions must be made before the uncertainty is revealed, while the second stage decisions adapt based on the outcome. A key method for solving such problems, called finite adaptability, limits the range of second-stage decisions to a finite number of possibilities. In this paper, we challenge a previously proposed condition for the asymptotic optimality of finite adaptability and provide a corrected condition. Furthermore, we present novel tractability results by showing that certain cases of finite adaptability can be solved in polynomial time, using techniques from discrete geometry.
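For reference (textbook form, not the corrected condition of the paper), finite adaptability with $K$ candidate second-stage policies replaces the fully adjustable problem by

$$\min_{x,\, y_1, \dots, y_K} \ \max_{\xi \in \Xi} \ \min_{k \in \{1, \dots, K\}} \Big\{ c^\top x + d^\top y_k \ : \ T(\xi)\, x + W y_k \le h(\xi) \Big\},$$

i.e., once $\xi$ is revealed the second stage may only pick the best feasible decision among the $K$ pre-computed ones; the talk studies when letting $K$ grow recovers the fully adaptive optimum and when the resulting problems remain polynomially solvable.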

10:30-12:30 Session 11F: Optimal control
10:30
Sensitivity Analysis and Optimal Control for the Chemical Master Equation
PRESENTER: Arthur Bottois

ABSTRACT. We consider a chemical system composed of some given species and some given reactions. Each reaction is characterized by its stoichiometric coefficients and its reaction rate. Such a chemical system may be described by a continuous-time Markov chain and the law of this process satisfies a Kolmogorov equation called the Chemical Master Equation (CME) in this context. The CME is a countable collection of ordinary differential equations. This equation is used in biology to model chemical reactions taking place inside cells, in order to take into account the randomness of these reactions, in a context where the number of molecules involved is too low to make a continuous approximation of the state space.

In the simple case of bounded reaction rates, the Cauchy-Lipschitz theorem ensures the existence and uniqueness of the solution, while the implicit function theorem allows us to conduct a sensitivity analysis of this solution with respect to the data (initial condition, reaction rates, ...). However, practical applications in biology lead us to consider reaction rates that are proportional to the number of reactants and therefore unbounded. In this framework, the uniqueness of the solution is no longer guaranteed. One may nevertheless use the appropriate (in a physical sense) notion of minimal solution.

Using stability estimates for the minimal solution, we establish the differentiability of this solution with respect to the data. We then give an expression for the directional derivative involving an adjoint equation, for which we also define a concept of minimal solution. This allows us to derive optimality conditions (in the form of a Pontryagin's principle) for a class of optimization problems in which we control the reaction rates.

11:00
Optimal Control for Linear Systems with L1-norm cost

ABSTRACT. We study optimal control problems for finite dimensional linear systems, where control parameters take values in the Euclidean unit ball and the cost is the L1-norm of the control function. We study both finite and infinite horizon problems. Main results concern the existence, uniqueness and structure of optimal solutions, as well as continuous dependence of the optimal cost on the data.
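In symbols, a representative finite-horizon instance consistent with the abstract (boundary conditions are illustrative) reads

$$\min_{u} \int_0^T \|u(t)\|\, dt \quad \text{s.t.} \quad \dot x(t) = A x(t) + B u(t), \quad \|u(t)\| \le 1, \quad x(0) = x_0,$$

with a terminal condition or terminal cost on $x(T)$ depending on the variant, and $T = +\infty$ in the infinite-horizon case.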

11:30
Mean field optimal control with piecewise deterministic Markov processes

ABSTRACT. The presentation is about the theoretical resolution and numerical approximation of a mean field optimization problem. More precisely, a mean field optimal control problem of piecewise deterministic Markov processes is formulated, modeling the optimal charging of a large fleet of electric vehicles. Optimality conditions are obtained through a linearization procedure and are studied via a system of coupled partial differential equations, similar to those encountered in mean field games. The problem is numerically solved using the General Frank-Wolfe algorithm, a variant of the conditional gradient algorithm. This algorithm is analyzed, and it is proved that it has linear convergence and satisfies the mesh-independence property: its rate and the underlying convergence constants are independent of the discretization parameters. Finally, this problem is motivated by its applications in smart charging, and in particular by curtailment problems.
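For completeness, the conditional-gradient step underlying the method referenced above is, in generic form over a convex feasible set $C$ with smooth objective $J$,

$$v_k \in \arg\min_{v \in C} \langle \nabla J(x_k), v \rangle, \qquad x_{k+1} = (1 - \gamma_k)\, x_k + \gamma_k v_k, \quad \gamma_k \in [0,1],$$

so that each iteration only requires a linear minimization over $C$; the variant analyzed in the talk is the General Frank-Wolfe algorithm applied to the discretized mean field problem.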

12:00
Stationary Regimes of Piecewise Linear Dynamical Systems with Priorities
PRESENTER: Pascal Capetillo

ABSTRACT. Dynamical systems governed by priority rules appear in the modeling of, for instance, emergency response organizations and road traffic. These can be modeled by piecewise-linear discrete time-delay systems that count the cumulative number of events up to a certain time. These systems are parametrised by the resource allocation, the incoming demand, and structural considerations such as routing policies and priority rules.

One question is whether these systems reach a steady state, where the number of events grows at a constant rate over time. This allows for analysis such as mapping out the relationship between the resource allocation and the throughput of the system in a congestion phase diagram.

While steady states have been observed in specific cases, proving their existence more generally has remained open. We show that stationary regimes do exist under conditions involving the linear parts of the dynamics. These conditions hold in particular when the spectra of these parts are included in the unit disk with 1 a semisimple eigenvalue (or not an eigenvalue). This extends a theorem of Kohlberg on the existence of invariant half-lines of nonexpansive piecewise-linear dynamical systems to a class of dynamics that are not necessarily nonexpansive. The proof relies on topological degree theory, and it builds on the idea of "Blackwell optimality" from the theory of Markov decision processes.

10:30-12:30 Session 11G: Invited session: Quantum algorithms: graphs, groups, and walks

Proposed by Simonetto and Bourreau

10:30
Towards Graph Encoding for Quantum Computing with Neutral Atoms

ABSTRACT. Programmable quantum systems using Rydberg atom arrays are well-suited to running quantum algorithms efficiently. The maximum independent set (MIS), a key NP-hard problem, can be natively encoded on these systems, but only on Unit Disk Graphs (UDG). Recognizing UDGs and finding valid vertex positions (UD embedding) are NP-hard, and ensuring that these positions fit the physical grid introduces an additional challenge: grid embedding. This constraint can be addressed by adding gadgets and auxiliary qubits. A key goal is to minimize the number of qubits needed to encode graphs with arbitrary connectivity. Additionally, Rydberg atom arrays must place atoms on a grid, rather than freely in the plane. The main result presented by QuEra encodes all graphs but remains too costly.

We propose a different approach, the Local UD encoding. The idea is to use gadgets similar to those of QuEra but to transform only the minimal non-UD structures of the graph. Transforming local structures uses fewer auxiliary qubits but does not provide a grid embedding. We therefore propose to first apply the local transformation, then find a UD embedding with heuristics, and finally apply search algorithms to find an embedding that fits the grid of the machine. We also present a global approach to the problem. First, a preprocessing step is applied to reduce the size of the instances with recursive simplicial fixing, then the graph is partitioned to parallelize the quantum solving process. Finally, we use the previously described Local UD encoding on each part of the graph.

11:00
Quantum Optimization with Permutation Group-Structure Ansatze

ABSTRACT. We devise a novel quantum variational algorithm for the resolution of permutation problems. In particular, our focus is on quadratic binary optimization problems with specific constraints. The set of feasible solutions is constituted exclusively of vectors derived from a given binary vector wherein the coordinates have been permuted. Via a suitable amplitude encoding step, the algorithm uses only a number of qubits that scales logarithmically with the number of variables. By efficiently designing the parametrized permutation ansatz via group theory results inspired by those of Bataille, the number of spanned permutations scales exponentially with respect to the number of parameters in the variational ansatz. The latter is a free tuning knob and can be adjusted by the implementer. These features render the proposed algorithm suitable for implementation on a large scale.

This research benefited from the support of the FMJH Program PGMO, under project: P-2023-0009.

11:30
Hybrid Quantum Search on Complex Networks: Noisy Spatial Search with Quantum Stochastic Walks
PRESENTER: Ugo Nzongani

ABSTRACT. Continuous-time quantum walks are a model of transportation on complex networks modeled as graphs. Like their classical counterpart, they can be used to solve the spatial search problem, which corresponds to finding an element in a graph-modeled database. Spatial search on the complete graph is the analog equivalent of Grover's algorithm and its performance depends on the graph's topology and spectral properties. We use the Quantum Stochastic Walk framework to mix both quantum and classical random walks in order to design a hybrid algorithm that can perform quantum search, classical search or a linear combination of the two. Previous work has shown that mixing classical and quantum dynamics can improve transport efficiency for many graphs. In this work we investigate the effects of mixing coherent and incoherent dynamics for spatial search on different types of graphs, and show that under certain conditions the hybrid regime is more efficient than either quantum or classical search.

12:00
Moderate Exponential-time Quantum Dynamic Programming Across the Subsets for Scheduling Problems
PRESENTER: Eric Bourreau

ABSTRACT. Grover Search is currently one of the main quantum algorithms leading to hybrid quantum-classical methods that reduce the worst-case time complexity for some combinatorial optimization problems. Specifically, the combination of Quantum Minimum Finding (obtained from Grover Search) with dynamic programming has proved particularly efficient in improving the complexity of NP-hard problems currently solved by classical dynamic programming. For these problems, the classical dynamic programming complexity in O∗(c^n), where O∗ denotes that polynomial factors are ignored, can be reduced by a hybrid algorithm to O∗(cquant^n), with cquant < c. In this presentation, we provide a bounded-error hybrid algorithm that achieves such an improvement for a broad class of NP-hard single-machine scheduling problems for which we give a generic description. Moreover, we extend this algorithm to tackle the 3-machine flowshop problem. Our algorithm reduces the exponential-part complexity compared to the best-known classical algorithm, sometimes at the cost of an additional pseudo-polynomial factor.
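As background for the hybrid method above, the classical dynamic programming across the subsets for single-machine scheduling (written here for a generic sum of job costs, purely as an illustration) computes, for every subset $S$ of jobs scheduled first,

$$\mathrm{OPT}(S) = \min_{j \in S} \Big[\mathrm{OPT}(S \setminus \{j\}) + f_j\Big(\sum_{i \in S} p_i\Big)\Big], \qquad \mathrm{OPT}(\emptyset) = 0,$$

which costs $O^{*}(2^n)$ classically; roughly speaking, the quantum speed-up comes from evaluating the minima over subsets with Quantum Minimum Finding inside a suitable decomposition of the subset lattice.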

10:30-12:30 Session 11H: Invited session: Bi-level problems in energy management
10:30
An Introduction to Witsenhausen Model and Games in Product Form

ABSTRACT. This talk is the first one of a series of two (to be followed by Thomas Buchholtzer's talk).

Energy markets are becoming more complex, under the penetration of highly variable renewable energies (solar, wind) and of technology (telemeters). With uncertain load, producers expect consumers to be more flexible and to shift part of their demand when needed. As a corollary, the design of contracts between producers and consumers is made more complex by the stochasticity induced by renewable energies and consumption, but also by the variety of producer and consumer profiles (more or less flexibility).

We propose to use the so-called Witsenhausen intrinsic model (WIM), also called W-model, to represent strategic interactions between leaders (producers) and followers (consumers). For this purpose, we present the WIM --- with agents, Nature, a product set, a product sigma-field, information of an agent as a subfield of the product sigma-field, strategies \cite{Witsenhausen:1971a,Witsenhausen:1975,Carpentier-Chancelier-Cohen-DeLara:2015} --- and its extension into Games in Product Form (GPF), also called W-games --- with players, payoff functions, beliefs, mixed strategies "à la Aumann" \cite{Heymann-DeLara-Chancelier:2022}.

This talk paves the way for the following one, which will present examples of bilevel programs between different actors of the new energy markets, such as producers, consumers, aggregators.

11:00
Witsenhausen Model for Leader-Follower Problems in Energy Management

ABSTRACT. With the transition from centralized, controllable energy sources to decentralized, renewable options, managing energy consumption has become more complex. The unpredictability of renewable sources like solar and wind complicates balancing supply and demand. As a result, demand response has emerged as a key aspect of energy management, where the consumer (follower) adjusts his electricity consumption based on the prices set by the supplier (leader).

After briefly recalling the Witsenhausen intrinsic model (W-model) in its general form, we adapt it to analyze this leader-follower dynamic, where the follower makes decisions based on the actions of the leader, in the context of demand response. First, we focus on Nature, which includes both players' private knowledge, such as the supplier’s production costs and the consumer’s unwillingness to shift his consumption, as well as exogenous factors like weather conditions that impact heating demand and renewable energy production.

Next, we introduce W-games, also called Games in Product Form, derived from the W-model, where each player has personal data, that is, an objective function and a belief about Nature. We then apply game theory concepts, examining Nash and Stackelberg equilibria. This analysis delves into the players' decision-making processes and links them to bilevel optimization, highlighting the often overlooked information structure. After that, we explore how the W-model framework can be adapted to address more complex problems, notably multi-leader-multi-follower cases. By reformulating such problems from the literature, we aim to offer a clearer and more systematic approach to demand response problems in energy management.

11:30
Rochet-Choné Model, Bi-Level Optimization, and Quantization for Electricity Pricing

ABSTRACT. Pricing strategies in electricity markets are crucial for optimizing supply and demand dynamics, ensuring that both producers and clients make effective decisions. These strategies help manage grid stability, encourage efficient energy use, and promote competition among suppliers, ultimately benefiting clients through more competitive prices and better options.

In our presentation, we model the pricing problem using a Principal-Agent framework under adverse selection: the Rochet-Choné model. This model addresses pricing in a monopolistic market under reservation constraints and captures the interaction between the provider and consumers. The goal is to determine tariffs that balance profitability with social welfare. To solve this problem, we discretize the client space and solve the corresponding bi-level problem using the value function approach.

Additionally, we introduce quantization methods aiming to clarify and improve the understanding of the overall supply offered by the electricity provider. By utilizing a menu of pricing offers (contracts) derived from the Rochet-Choné model, we demonstrate how to identify the subset of contracts that best represents the initial menu. On one hand, we present this as a mixed-integer bi-level optimization problem, and on the other hand, we introduce greedy heuristics to compute the reduced menu, including the use of the Newton polygon and Fenchel conjugacy. To evaluate the methods' performance and practicality, we conducted experiments on instances generated from real-life data provided by EDF. Our findings show that reducing the contract menu size from 100 to 10 results in only a 4% decrease in the objective value, while reducing it to 20 contracts leads to just a 1% decrease in the objective value with our best method in dimensions 2 and 3.

12:00
How Difficult is it to Check if a Multilevel Optimization Problem is Unbounded?

ABSTRACT. Despite their widespread applications, bilevel problems are known to be challenging to solve in practice, as even in the linear case bilevel problems are strongly NP-hard. However, unboundedness in these problems is often overlooked, with research typically assuming boundedness of their feasible sets. This presentation addresses this gap by exploring unboundedness in linear bilevel optimization and its computational complexity. We introduce the notion of direction of unboundedness to linear bilevel programs, and show that the decision problem of knowing whether a linear bilevel program with linking upper-level constraints is unbounded is NP-complete. Additionally, we prove that determining unboundedness in a k-level problem is as difficult as the hardest problems found at the k−1 level of the polynomial hierarchy. Finally, we detail an algorithmic approach to detecting unboundedness in a bilevel setting.

10:30-12:30 Session 11I: Scheduling and logistics
10:30
Integrated crew management for rail freight
PRESENTER: Héloïse Gachet

ABSTRACT. The construction of yearly schedules for drivers is a problem met in multiple branches of transportation. Generally, the planning process is addressed sequentially: in a first step---the Crew Scheduling---, driving tasks (assumed to be the same every week) are combined together to form daily duties; in a second step---the Crew Rostering---, these duties are organised together in rosters, providing the yearly schedule. A common variation of the latter problem, the Cyclic Crew Rostering, is used for freight train drivers at the SNCF (the main French railway company), but also in other industries. For this problem, the drivers are first split into teams. Then the cyclic roster of each team is obtained by cyclically rolling out a weekly schedule of an individual driver to provide the yearly schedule. Such rosters are easy to memorize and ensure fairness and a similar level of proficiency among the drivers.

The decomposition of the planning process into Crew Scheduling and Crew Rostering is justified by the numerous operational rules on duties and rosters, as well as the combinatorial complexity of the two problems. However, this decomposition results in sub-optimal solutions. Lin and Tsai propose to solve the integrated version via column generation. The SNCF is faced with additional challenges making their approach irrelevant: some trains must be driven at night (due to priority of passenger over freight in France), leading to rosters with multiple duties on the same day, and the objective function is not linear in the duties.

We propose a column generation approach to the integrated Crew Scheduling and Cyclic Crew Rostering problems, in which columns are not rosters but ``long shifts,'' representing the working period between two days off, making the objective function linear. A non-trivial contribution is the modeling of the pricing sub-problem as a Resource Constrained Shortest Path Problem, within the setting of Parmentier. Our approach has already been able to provide a solution to an instance of 265 trains, with a cost 10% lower than the one achievable with the current sequential approach.

11:00
A heuristic for the Pickup and Delivery Problem with Cooperative Robots
PRESENTER: Chi Thao Nguyen

ABSTRACT. We introduce a novel neighborhood structure and an efficient algorithm for constructing initial solutions to the Pickup and Delivery Problem with Cooperative Robots (PDP-CR), a complex optimization problem in automated warehouses. In PDP-CR, we have a fleet M of m identical robots and a set N of n tasks. Every task i is characterized by its pick-up location pi, destination di, process time ci, and number of robots required ri. A task i cannot start until ri robots arrive at its pickup point. This means early-arriving robots must wait for all required robots before starting the task from the pick-up point to the destination. All the robots begin their paths at a starting depot and finish their paths at a returning depot. tij is the time for any robot to move from task i to task j on the graph, corresponding to the time a robot moves from di to pj. The objective of PDP-CR is to minimize the makespan, which is the time the last robot reaches the returning depot. The problem's solutions are represented using a network flow graph, where each unit of flow corresponds to a robot. The proposed neighborhood and heuristic are obtained by "cutting" the graph and redistributing the flows across the cut, with an enhanced version allowing for the insertion of tasks between the cuts. This neighborhood is integrated into a tabu search framework and also used for constructing initial solutions to the PDP-CR. We evaluate the performance of the proposed heuristic against an existing Mixed Integer Linear Programming (MILP) model on both small and large datasets. Experimental results show that the heuristic reduces computation time while delivering high-quality solutions, proving its effectiveness and scalability for large-scale applications.

11:30
Scheduling Autonomous Buses for On-Demand Transportation with Crossing Constraints

ABSTRACT. In order to reduce air pollution and improve the quality of life of their residents, many cities are choosing to develop public transportation, offering an alternative to private cars. This is the case in Nantes, where the SNCF is replacing disused rails with an autonomous bus lane, in order to set up an autonomous bus system. During off-peak periods, these buses would provide transportation on demand, in order to meet user needs as effectively as possible with a small fleet of vehicles. In this abstract, we propose a method for managing bus timetabling.

12:00
Online Inventory Problems: Theory and Practice
PRESENTER: Massil Hihat

ABSTRACT. This talk will present our work on multi-product online inventory control. This problem consists of a manager who makes sequential replenishment decisions based on partial historical information in order to minimize their cumulative losses. The literature on this problem has focused on the design of specialized algorithms that handle specific loss structures and dynamics while providing theoretical guarantees under an unrealistic i.i.d. demand assumption. Our motivation is to go beyond these models. Building on top of Online Convex Optimization (OCO), we start by providing a more general framework for inventory problems. Then, we propose MaxCOSD, an online algorithm that has provable optimal guarantees even for problems with non-i.i.d. demands and stateful dynamics, including for instance perishability. Finally, we argue for the necessity of non-degeneracy demand assumptions to allow learning.

10:30-12:30 Session 11J: Energy
10:30
Distributed Economic Dispatch in Power Networks Harnessing Data Center Flexibility

ABSTRACT. In this work, we focus on the interaction of Data Centers (DCs) with electric power networks in an economic dispatch problem, harnessing the ability of DCs to serve as flexible loads that can alter their power consumption to alleviate network congestion. Since DCs can transfer workload between each other, controlling their output via some demand response mechanism that respects power generation and network constraints, while also accounting for DC Quality of Service (QoS) can achieve system-wide benefits. From the DC perspective, we aim to explore the benefits of their incorporation in an economic dispatch problem, and demonstrate that they can achieve significant cost savings for the entire system (data and power networks), enabling the IT sector to contribute to societal sustainability efforts.

Our main contributions are as follows. First, we leverage results from queuing theory to model DCs and form QoS-based cost functions --- signifying how well a DC can carry out its workload given a number of active servers, for which we provide convexity guarantees under certain conditions. Second, we integrate DCs in a centralized economic dispatch problem that determines, apart from power generation, DC workload shifting and server utilization, while respecting power network constraints. Third, we provide a tractable decentralized formulation of the economic dispatch problem employing Lagrangian decomposition and a primal-dual algorithm, which can cater for both the power network constraints, and, most importantly, the DC workload shifting, in a distributed manner that scales for the ``coupled'' data and power networks. Fourth, we present experimental results on a standard power network that provide useful insights on the system-wide benefits from harnessing DC flexibility in the economic dispatch problem, emphasizing the trade-offs between the DC locations, their efficiencies, and QoS costs.
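Schematically (generic dual decomposition shown only to illustrate the distributed structure; the exact primal-dual scheme of the talk may differ), relaxing the coupling constraints $\sum_a g_a(x_a) \le 0$ with multipliers $\lambda \ge 0$ lets each agent $a$ (generator or data center) solve its own subproblem in parallel, followed by a multiplier update:

$$x_a^{k+1} \in \arg\min_{x_a \in X_a} \ f_a(x_a) + \langle \lambda^k, g_a(x_a) \rangle, \qquad \lambda^{k+1} = \big[\lambda^k + \rho \sum_a g_a(x_a^{k+1})\big]_+ .$$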

11:00
A Domain-Specific Modelling Language for a Multi-Energy Planning Tool
PRESENTER: Thomas Bittar

ABSTRACT. Energy systems evolve rapidly to meet the ambitions of carbon neutrality, with an increasing share of renewable energy sources and increasing couplings between sectors. Capacity expansion planning and long-term adequacy studies are essential to assess the security of supply, the techno-economic performance of energy systems, and build economically consistent prospective mixes.

In an expansion planning problem, there are two levels of uncertainties: at the planning level (structural hypothesis on the mix, political decisions, ...) and at the operational level (demand, failures of generation assets, ...).

At the planning level, structural evolutions of the mix can be represented with a scenario tree. At the operational level, we may consider different modellings to capture the appropriate dynamics and uncertainties. Most long-term power system modelling tools use parallel Monte-Carlo simulations to take the uncertainty into account. This implies that for every given scenario, the realization of uncertainties such as the load or the renewable generation is known over the whole optimization period. The operation of the system is then computed with perfect foresight. When there is little uncertainty on the generation or the load, this computation is relevant. However, as the variability in the system increases, this modelling may become inadequate. For instance, the decision to turn on a long-to-start thermal unit may be taken before knowing the exact value of wind production a few hours later. Then the level of thermal generation can be adjusted once the wind production is known.

To improve the efficiency in conducting studies, modelling tools must make it easy to change the core structure of the model depending on the users' needs: we want to easily configure a scenario tree or change the information structure of the operational-level problem without coding in the tool.

To do so, we derive a Domain-Specific Modelling Language (DSML) from a generic formulation of long-term adequacy and planning problems. This DSML provides flexibility in the definition of the time structure, the asset dynamics and the information structure. We can also easily define scenario trees and couplings between their nodes. We present use cases to show how the DSML improves the efficiency in conducting studies by sharing modelling efforts.

11:30
The Share of Decarbonization Efforts between French Regions: A Bilevel Optimization Model
PRESENTER: Prunelle Vogler

ABSTRACT. In 2019, the French Conseil Economique et Social (Economic and Social Council) raised the issue of the lack of coordination between regions, which risks (i) missing national emission targets and (ii) preventing the specific features of each region from being exploited. Most of the works on the share of decarbonization efforts operate at the global or European scale. To tackle smaller geographical areas like French regions, under the authority of a national level, we introduce a multi-follower Stackelberg game between a national decarbonization planner and regional planners. The equilibrium is reached by solving a bilevel optimization problem. The national planner wants to reduce the cost of decarbonization as much as possible, while meeting the European targets of a 50% reduction of greenhouse gas (GHG) emissions by 2030 and carbon neutrality by 2050. Firstly, she imposes emission reductions on regional planners and invests in subsidies or infrastructure to facilitate their decarbonization. Secondly, regional planners wish to maximize the utility of their region's inhabitants, while respecting the emissions limits set by the national planner. Our focus is on reducing energy-related GHG emissions in passenger transport (TP) and residential buildings (BR). Solving this problem requires a reformulation of the followers' (regional planners') problems using KKT conditions, and a linearization of the bilinear objective terms.

In this presentation, we will describe this Stackelberg game, and analyze its results on our case study: the sharing of BR and TP decarbonization efforts between French regions.
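In schematic form (standard bilevel machinery with generic symbols, not those of the case study), a leader problem $\min_{x,y} F(x,y)$ subject to $y \in \arg\min\{f(x,y') : g(x,y') \le 0\}$ with convex, regular follower problems is rewritten as the single-level program

$$\min_{x,\,y,\,\mu} \ F(x,y) \quad \text{s.t.} \quad \nabla_y f(x,y) + \sum_i \mu_i \nabla_y g_i(x,y) = 0, \qquad 0 \le \mu \ \perp\ -g(x,y) \ge 0,$$

whose complementarity conditions and bilinear terms are then handled by standard linearization (e.g., big-M) techniques, as mentioned above.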

12:00
Eddy current imaging of defects inside the steam generator with an inverse method
PRESENTER: Morgane Mathevet

ABSTRACT. For the non-destructive evaluation of steam generators in nuclear power plants, eddy current probes are used to inspect the metallic tubes within the generator. The inverse problem of interest involves reconstructing the shape of defects, such as clogging deposits, wear, or cracks, on these tubes from eddy current measurements.

First, we must solve the direct problem, which numerically simulates the eddy current data. This is achieved by solving the eddy current PDE model and computing the impedance measurements from the solution of the PDE.

Second, we solve a minimization problem aimed at optimizing the defect shape by minimizing the residual between the numerical simulation and real measurement data.

To achieve this, we employ a gradient descent algorithm coupled with the level-set method to describe the shape. Initially, we focus on deposits that exhibit changes in both conductivity and magnetic permeability, which introduces instabilities at the deposit boundary. This contrasts with the previous work of Zixian Jiang in 2014 and Hugo Girardon in 2020, who dealt with deposits characterized solely by changes in conductivity. We then extend our approach to crack reconstruction.

We explore two different methods for evaluating the derivative with respect to shape:

A surface representation of the shape gradient, the classical approach in shape optimization, based on the Hadamard structure theorem. This representation is supported on the defect boundary and is sensitive only to shape changes in the normal direction to the boundary.

And a domain representation of the shape gradient, which is more robust to instabilities at the defect boundary and allows for tangential changes along the defect boundary. This approach is especially important for crack reconstruction, where sensitivity to crack size is critical, as the tips of the crack can extend or contract.

12:30-14:10 Lunch Break
14:10-15:00 Session 12: Plenary IV: Paola Goatin
14:10
Traffic Flow Models for Current and Future Mobility Challenges

ABSTRACT. The mobility paradigms could undergo a significant transformation in the near future, as new technologies enable extended data collection and Vehicle-to-Vehicle (V2V) or Vehicle-to-Infrastructure (V2I) communication. This will offer novel means to control and optimize traffic flow. Within this context, mathematical models play an important role, allowing for the design and evaluation of new management approaches. In this talk, I will present applications to road traffic regulation using Connected Automated Vehicles (CAVs) [1] or dynamic routing [2]. Our results are based on the development of specific macroscopic models accounting for the interacting dynamics of the different classes of users. Numerical experiments show that controlling a small fraction of users is, in general, sufficient to consistently improve the global system performance.

References
[1] C. Daini, M. L. Delle Monache, P. Goatin and A. Ferrara, Traffic control via fleets of connected and automated vehicles, submitted. hal:04366870.
[2] A. Festa, P. Goatin and F. Vicini, Navigation system based routing strategies in traffic flows on networks, J. Optim. Theory Appl., 198 (2023), 930-957.

15:00-15:30 Coffee Break
15:30-17:00 Session 13A: Invited Session: Algorithmic optimization and online decision-making
15:30
Safe treatment of infeasible convex optimization problems via the augmented Lagrangian
PRESENTER: Roland Andrews

ABSTRACT. This work focuses on constrained convex optimisation problems. Various popular algorithmic strategies were designed to tackle such problems by solving sequences of unconstrained optimisation problems. That is, for instance, the case of interior point methods and augmented Lagrangian algorithms. The augmented Lagrangian in particular is practically efficient and has strong theoretical guarantees under minimal assumptions when the feasible set associated to the constraints is non-empty.

The setting where the constrained optimization problem is infeasible, however, only recently started to attract attention. This situation is particularly relevant for optimal control (e.g., Model Predictive Control) and machine learning (e.g., neural networks using convex optimization layers) where infeasible problems are commonly encountered. Hence, algorithms under consideration should remain stable also when dealing with such problems.

Recent literature studies this problem under various sets of assumptions. In this work, we analyze the general case with convexity as the sole assumption. Our approach is based on the classical relationship between the augmented Lagrangian algorithm and the dual proximal point algorithm.

16:00
Is maze-solving parallelizable?

ABSTRACT. Can you find a short path, without a map? Is maze-solving parallelizable? These questions can be cast formally using the toolbox of competitive analysis, a toolkit that has been developed since the 1990s to study online algorithms and decision-making. In this talk, I will present these problems, which are known as « layered graph traversal » (introduced by the literature on online algorithms) and « collective tree exploration » (introduced by the literature on distributed algorithms). We will focus on the connections between these problems and use them as a case study of challenges in decision-making such as: collective strategies, advice complexity and switching costs.

This talk is based on joint works with Laurent Massoulié and Laurent Viennot at Inria and Xingjian Bai and Christian Coester at Oxford University.

16:30
Optimization-based efficient algorithms for robot dynamics simulation and control

ABSTRACT. We present our recent progress on efficient algorithms for simulating rigid-body dynamics. With simulation being the primary computational bottleneck while solving robot optimal control problems and training reinforcement learning-based controllers, speeding it up is a key problem.

Equality-constrained dynamics problems can be equivalently formulated as quadratic programming (QP) problems using Gauss' principle of least constraint. We solve these QP problems for kinematic trees using the augmented Lagrangian method and by exploiting the tree structure, which yields the constrained articulated body algorithm (constrainedABA). ConstrainedABA makes no constraint-independence assumptions, has linear computational complexity in the robot's degrees of freedom and the constraint dimension, and leads to a speed-up of over two times for high-dimensional robots like humanoids compared to the cubic-complexity algorithms prevalent in existing simulators.
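As a rough sketch of the formulation referred to above (generic notation assumed here, not taken from the paper), Gauss' principle states that the constrained joint acceleration solves
\[
\min_{\ddot q}\ \tfrac12\,\bigl(\ddot q - \ddot q_{\mathrm{free}}\bigr)^{\!\top} M(q)\,\bigl(\ddot q - \ddot q_{\mathrm{free}}\bigr)
\quad\text{s.t.}\quad J(q)\,\ddot q + \dot J(q,\dot q)\,\dot q = 0,
\]
where $M(q)$ is the joint-space inertia matrix, $\ddot q_{\mathrm{free}}$ the unconstrained forward-dynamics acceleration and $J$ the constraint Jacobian; constrainedABA solves QPs of this form by exploiting the kinematic-tree structure.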

The talk will also present the latest research on extending constrainedABA to kinematic graphs with closed loops, and on efficiently solving the nonlinear complementarity problems associated with frictional contact simulation, using constrainedABA as the inner solver for both an ADMM-based and an interior-point-based approach.

15:30-17:00 Session 13B: Smart grid -- optimizing flexibilities
15:30
Smart Charging and Optimization of Personalized Flexibility Services for Electric Vehicle Users
PRESENTER: Rita Safi

ABSTRACT. The increasing number of EVs may cause an overload on the electrical infrastructure. To prevent this, we encourage EVs to be flexible by offering them a price menu: a set of options, each combining a charging duration needed to meet the EV's energy demand with a corresponding charging price. This allows each EV to choose the option that best suits its needs, and every EV receives a customized menu from the charging station. The goal of the station is to maximize its profit, increasing the revenue from charging the vehicles while minimizing the associated cost; to achieve this, we study the optimal way to share power between the EVs over the time horizon. On the other side, each EV chooses the option from its price menu that maximizes its utility. The resulting problem is a bi-level optimization problem that we transform into a single-level optimization problem. We study and implement three methods for this transformation: the classical KKT transformation, the optimal value transformation, and a reformulation from the literature, and we compare their results.
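Schematically, and in generic notation rather than the authors' model, the KKT transformation replaces the follower's best-response problem by its optimality conditions, turning the bi-level problem
\[
\max_{x,\,y}\ F(x,y)
\quad\text{s.t.}\quad
y \in \arg\min_{y'}\ \bigl\{\, f(x,y') \;:\; g(x,y') \le 0 \,\bigr\}
\]
into the single-level program (valid under convexity of the lower level and a constraint qualification)
\[
\max_{x,\,y,\,\lambda}\ F(x,y)
\quad\text{s.t.}\quad
\nabla_y f(x,y) + \nabla_y g(x,y)^{\!\top}\lambda = 0,\quad
g(x,y)\le 0,\quad \lambda \ge 0,\quad \lambda^{\!\top} g(x,y)=0,
\]
where, roughly, $x$ collects the station's decisions (menu and power sharing) and $y$ the EVs' responses.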

16:00
Multi-Agent Contextual Combinatorial Multi-Armed Bandits with Linear Structured Super Arm: application to energy management optimization in Smart Grids

ABSTRACT. The integration of electric vehicles (EVs) and renewable energy sources (RES) into future electrical grids presents both significant challenges and opportunities for energy management. While RES production is an incentive for increased demand at specific times of the day (e.g., at noon for photovoltaic production), the distribution system operator (DSO) should also prevent grid congestion. The optimal strategy for integrating each EV with regard to the DSO constraints in a manner that maximizes the local use of RES is an NP-hard problem, as it requires the resolution of a mixed-integer linear programming problem (MILP) [1]. Moreover, due to weather and human behaviors, this optimization should be done under uncertainty. In particular, learning algorithms from the family of Bandit algorithms have been studied in [2], [3] to handle uncertainty. In light of the decentralized algorithms developed in [2], we propose a Multi-Agent Contextual Combinatorial Multi-Armed Bandits (MA-CC-MAB) approach with linearly structured super arms. By leveraging contextual information such as time-varying renewable energy generation, grid load conditions and day of the week, each agent will selfishly and dynamically schedule its charging intervals while considering grid constraints.
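A minimal sketch of the bandit machinery under simplifying assumptions (a single agent, a cardinality budget as the only grid constraint, and a made-up linear reward model; all names and parameters below are illustrative, not the authors'):

```python
# LinUCB-style contextual combinatorial bandit: each round, pick the k charging
# intervals with the highest upper confidence bounds for the current context.
import numpy as np

rng = np.random.default_rng(4)
n_intervals, d, k, alpha = 24, 3, 4, 1.0          # hourly slots, context dim, budget, exploration
A = [np.eye(d) for _ in range(n_intervals)]       # per-interval design matrices
b = [np.zeros(d) for _ in range(n_intervals)]
theta_true = rng.normal(size=(n_intervals, d))    # hidden linear reward model (synthetic)

for t in range(2000):
    ctx = rng.normal(size=d)                      # e.g., PV forecast, grid load, day of week
    ucb = np.empty(n_intervals)
    for i in range(n_intervals):
        A_inv = np.linalg.inv(A[i])
        theta_hat = A_inv @ b[i]
        ucb[i] = theta_hat @ ctx + alpha * np.sqrt(ctx @ A_inv @ ctx)
    super_arm = np.argsort(ucb)[-k:]              # greedy super arm: the k best intervals
    for i in super_arm:                           # observe a noisy reward per chosen interval
        reward = theta_true[i] @ ctx + 0.1 * rng.normal()
        A[i] += np.outer(ctx, ctx)
        b[i] += reward * ctx

i = super_arm[-1]
print("interval", int(i), "estimate:", np.round(np.linalg.solve(A[i], b[i]), 2),
      "true:", np.round(theta_true[i], 2))
```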

16:30
Towards Effective Datasets for Training Data-driven Models for Smart Grid Security Assessment
PRESENTER: Juan J Cuenca

ABSTRACT. Due to the decentralisation of energy resources and the electrification of the heat and transport sectors, it is becoming increasingly difficult for network operators to supervise and manage smart grids. Research points to the development of decentralised energy management tools (e.g., based on artificial intelligence) that provide strategies for the individual decision-making of numerous agents (e.g., smart electric vehicle charging) which, in aggregate, respect the physical constraints of the grid. To train these tools it is necessary to foresee problematic operational states (e.g., under/overvoltage, exceeding the rated power of lines or transformers) which could damage equipment, trigger protections, and cause service disruptions. While this can be done through power flow simulations (i.e., solving a system of non-linear physical equations with a numerical solver), their high computational cost makes them slow. An alternative is to use a traditional power system simulator to generate labelled datasets in an off-line stage, and then train a data-driven model to act as a surrogate at a fraction of the computational cost of the conventional tools. In this manner, the computing time associated with the training of the AI-based energy management tools is greatly reduced. We compare three state-of-the-art data-driven approaches for classifying an operational point (i.e., the electricity demand at each consumption point) as "safe" or "unsafe" (i.e., absence/presence of congestion). To this end, we propose and evaluate novel data generation strategies and compare them to the standard random generation approach in the literature, highlighting the inadequacy of the latter when applied to low voltage networks. This is tested on the IEEE European Low-Voltage Test Feeder, with reports on computational times for training and inference, as well as significant improvements in classification performance using the alternative data generation strategies. This work is conducted as part of the ANR "EDEN4SG" project under grant agreement ANR-22-CE05-0023.
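A minimal sketch of the surrogate idea under stated assumptions: run_power_flow below is a hypothetical stand-in for a conventional simulator labelling operating points, and all sizes are illustrative rather than those of the IEEE test feeder.

```python
# Train a data-driven surrogate that classifies operating points as safe/unsafe.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_loads, n_samples = 55, 5000                     # one demand value per consumption point

def run_power_flow(demands):                      # placeholder label: congestion if demand is high
    return (demands.sum(axis=1) > 0.6 * n_loads).astype(int)

X = rng.uniform(0.0, 1.0, size=(n_samples, n_loads))   # random generation strategy (baseline)
y = run_power_flow(X)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```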

15:30-17:00 Session 13C: Pricing of energy
15:30
Dynamic Parking Pricing for Electric Vehicles at public Charging Stations
PRESENTER: Alix Dupont

ABSTRACT. The growing adoption of Electric Vehicles (EVs) can lead to congestion at the Charging Stations (CSs), particularly in terms of access to the charging points. Such congestion can be reduced through the implementation of parking pricing strategies.

In this work, the parking fee is applied only to EV users exceeding a predefined parking time limit, and it increases linearly with their parking duration. Both the time limit and the rate at which the parking fee increases are variables set by the CS operator. The occupancy level of the CS is modeled as a time-inhomogeneous Markov chain whose transition rates (i.e., the rates at which EV users join and leave the CS) are controlled through the parking pricing strategy. The design of an optimal parking pricing strategy is examined in terms of the profit the CS operator generates from charging the EVs.
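A minimal simulation sketch of this modelling idea (illustrative rates and a purely hypothetical fee response, not the paper's calibrated model):

```python
# Occupancy of a charging station as a birth-death chain whose departure rate
# increases with the parking fee rate chosen by the operator.
import numpy as np

rng = np.random.default_rng(2)
C, lam, mu0 = 10, 4.0, 0.5           # charging points, arrival rate, base departure rate

def mu(fee_rate):                    # hypothetical response: higher fees, earlier departures
    return mu0 * (1.0 + fee_rate)

def mean_occupancy(fee_rate, horizon=24.0):
    t, n, occupied_time = 0.0, 0, 0.0
    while t < horizon:
        rate_up = lam if n < C else 0.0
        rate_down = n * mu(fee_rate)
        total = rate_up + rate_down
        dt = rng.exponential(1.0 / total)
        occupied_time += n * min(dt, horizon - t)
        t += dt
        if t < horizon:
            n += 1 if rng.uniform() < rate_up / total else -1
    return occupied_time / horizon

for fee in (0.0, 0.5, 1.0):
    print(f"fee rate {fee}: mean occupancy {mean_occupancy(fee):.2f}")
```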

This analysis considers various dynamic pricing approaches, based on (i) the instantaneous occupancy level, (ii) the time of day, or (iii) a combination of both. The numerical results indicate that time-based dynamic pricing yields better performance.

16:00
Electricity dispatch and pricing using agent decision rules
PRESENTER: Andy Philpott

ABSTRACT. Models for computing economic dispatch and prices in wholesale electricity market pools are typically deterministic multiperiod mathematical programs that are solved in a rolling horizon fashion. In convex settings with perfect foresight these optimization problems yield dispatch outcomes and locational marginal prices that solve a competitive equilibrium problem. Growing investment in renewable energy has increased the uncertainty in net demand to be met by dispatchable generation. To accommodate this, stochastic programming models formulated using scenario trees have been proposed for economic dispatch. The use of these models in practice is challenging for several reasons. Market participants need to agree on the scenarios used for uncertain parameters in the model, and realizations of these parameters will be different from those in any modelled scenario. When updated in a rolling horizon fashion, stochastic models can misprice the option value of energy storage and the value of changing current dispatch to meet future ramping constraints. This leads to uplift payments that compensate participants for the fact that the system operator forecasts the future incorrectly. We present a class of new economic dispatch models that attempt to overcome these drawbacks, based on agent decision rules (ADRs). Forecasting future outcomes or scenarios passes from the system operator to market participants who implicitly make state-dependent offers of energy through these decision rules. We show how storage and ramping can be priced correctly in convex markets and illustrate the advantages of the new model through simple examples.

16:30
Optimal Operation and Valuation of Electricity Storages
PRESENTER: Teemu Pennanen

ABSTRACT. The increasing proportion of renewable energy increases the uncertainties and seasonalities in the supply, and thus in the price, of electricity. This creates strong incentives to store energy. This paper applies computational techniques of convex stochastic optimization to the optimal operation and valuation of electricity storages. Our valuations are based on indifference pricing, which finds optimal hedging strategies and calibrates to the user's initial position, market views and risk preferences. Indifference pricing has been extensively studied for pricing financial products in incomplete markets (see e.g. the collection [Car09] and the references therein). Applications of indifference pricing to energy have been studied, e.g., in [PTW09] and [CCGV17]. We solve the optimal storage management problem numerically using SDDP, which allows for the state constraints inherent to physical storages. Our approach is applicable to various specifications of storage. We illustrate the approach numerically by studying the sensitivities of indifference prices with respect to storage capacity, charging speed, risk aversion and volatility of spot prices.
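For reference (generic notation, not quoted from the paper), the buyer's indifference price $\pi$ of the storage is the solution of
\[
\sup_{x\in\mathcal{X}} \mathbb{E}\bigl[u\bigl(w_0 + R(x)\bigr)\bigr]
\;=\;
\sup_{x\in\mathcal{X},\ s\in\mathcal{S}} \mathbb{E}\bigl[u\bigl(w_0 - \pi + R(x) + C(s)\bigr)\bigr],
\]
where $u$ is the utility encoding risk preferences, $w_0$ the initial position, $\mathcal{X}$ the trading strategies with payoff $R(x)$, $\mathcal{S}$ the admissible storage operation policies (respecting capacity and charging-speed limits), and $C(s)$ the cash flow from buying and selling electricity under policy $s$.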

15:30-17:30 Session 13D: Quantum algorithms for combinatorial optimization
15:30
Comparative study of quantum methods in the resolution of track finding instances
PRESENTER: Lacomme Philippe

ABSTRACT. Track finding can be considered as a complex optimization problem, initially introduced in particle physics, involving the reconstruction of particle trajectories. A track is typically composed of several consecutive segments (track segments) forming a smooth curve without bifurcations. In this paper, various modeling approaches are explored in order to assess both their impact and their effectiveness when solved using quantum and classical methods. We present implementations of three classical models using CPLEX, two quantum models running on actual D-Wave quantum computers, and one quantum model on a D-Wave simulator. To facilitate a fair comparative study and encourage future research in this area, we introduce a new set of benchmark instances, categorized into small, medium, and large scales. Our evaluation of these methods on the benchmark instances indicates that D-Wave methods offer an excellent balance between computation time and result quality, outperforming CPLEX in numerous cases.
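For reference, the quantum models mentioned above are Quadratic Unconstrained Binary Optimization (QUBO) problems, i.e. they minimize an energy of the form
\[
E(x) \;=\; \sum_{i} Q_{ii}\,x_i \;+\; \sum_{i<j} Q_{ij}\,x_i x_j, \qquad x_i \in \{0,1\},
\]
with, typically, one binary variable per candidate track segment or segment pairing; negative diagonal terms reward selections that prolong a smooth track and positive couplings penalize incompatible pairs (the exact encodings used in the paper may differ).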

16:00
Comparative Analysis of Quantum Annealing and Quantum-Classical Hybrid Solver on Cut Problems
PRESENTER: Ali Abbassi

ABSTRACT. This work addresses a cybersecurity optimization problem, reduced to various cut problems, which were tackled using Quantum Annealing (QA) from D-Wave. Classical approaches such as Mixed Integer Linear Programming (MILP) were not considered, as they outperform quantum methods for the considered instances. Instead, the focus is on analyzing the scalability and performance of different Quadratic Unconstrained Binary Optimization (QUBO) formulations, and how their embedding on a quantum device impacts solution quality.

The study compares, on these problems, the performance of Simulated Annealing, Quantum Annealing alone, and the Quantum-Classical Hybrid Solver, which leverages a QPU together with a metaheuristic, examining the effect of QUBO structure, embedding strategies, and solver selection. Special attention is given to how the interaction between variables influences the physical qubit requirements and how different workflows within the Hybrid Solver perform as the problem size increases.

Our findings suggest that the formulation of the QUBO, including the balance between linear and quadratic terms, plays a crucial role in achieving optimal results. While Quantum Annealing alone shows promise, the Hybrid Solver consistently performs better for larger instances, providing a more scalable and efficient solution for industrial cut problems.
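For concreteness, a minimal sketch of a cut problem written as a QUBO (a tiny made-up graph checked by brute force; the industrial instances are of course far larger and are sent to the annealer or hybrid solver instead):

```python
# Max-cut as a QUBO: minimize x^T Q x over binary x, where maximizing the cut
# sum_ij w_ij (x_i + x_j - 2 x_i x_j) is rewritten as minimizing its negation.
import itertools
import numpy as np

edges = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 1.0, (2, 3): 3.0}   # weighted graph
n = 4

Q = np.zeros((n, n))
for (i, j), w in edges.items():
    Q[i, i] -= w
    Q[j, j] -= w
    Q[i, j] += 2 * w

energy, best_x = min((np.array(x) @ Q @ np.array(x), x)
                     for x in itertools.product((0, 1), repeat=n))
print("max cut value:", -energy, "partition:", best_x)
```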

16:30
Application of Quantum computing to settle a large number of financial transactions

ABSTRACT. A securities transaction is an exchange of securities versus payment in currency between two financial actors. The transaction is settled if, inter alia, there are sufficient amounts of resources on the corresponding cash and securities accounts. Each night, more than four hundred thousand transactions worth over a billion euros are settled in batches. Banque de France is in charge of the Mathematical Optimization Module (MOM), a component of a large European transaction settlement platform named T2S. Given a batch and the limited time, MOM looks for a subset of transactions to settle whose size is as large as possible, respecting all business constraints and taking advantage of some financial features that reduce the number of failed transactions. It is composed of preparatory, construction and improvement solution phases. Since the batch size is too large, no algorithm can find either an optimal or an approximate solution for the entire batch on a conventional computer. Thus, MOM's preparatory phase splits a batch of transactions into subsets in order to address smaller problems. The current splitting procedure has a long runtime, and its quality affects the final number of settled transactions. In this work, we present an approach that would improve the splitting results and the MOM's outcome. It is based on a Quadratic Unconstrained Binary Optimization problem solved by a Quantum Approximate Optimization Algorithm and implemented on the available quantum devices via the Qiskit platform.

15:30-17:30 Session 13E: Invited session: Polynomials, Optimization, and Sampling
15:30
Efficient sampling of constraint spaces in practice

ABSTRACT. Sampling from a given probability distribution constrained by a convex space is a fundamental problem that arises across various fields, from optimization to quantitative finance and systems biology. These problems are critical for modeling uncertainty, making decisions under constraints, and understanding complex systems. However, designing efficient sampling algorithms in high-dimensional spaces remains a significant challenge.

In this talk, we will introduce the relevant theoretical background, define the main problems, and delve into the core challenges associated with sampling in constrained spaces. We will explore how these challenges manifest in practical applications, briefly touching on real world examples from biology and finance. Furthermore, we will outline the current state-of-the-art methods available for some important constraint models, focusing on their computational efficiency, scalability, and accuracy.

From a practical viewpoint, we will demonstrate the implementation of these methods in open-source software, providing hands-on insights into their implementation and applicability. The discussed software is publicly available at https://github.com/GeomScale, enabling researchers and practitioners to integrate these advanced sampling techniques into their workflows.
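As a toy illustration of one such method (a plain hit-and-run walk on a box, assuming an interior starting point; the GeomScale libraries implement far more refined walks and diagnostics):

```python
# Hit-and-run sampling of the uniform distribution on a polytope {x : A x <= b}.
import numpy as np

rng = np.random.default_rng(3)

def hit_and_run(A, b, x, n_samples=2000):
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        Ad, slack = A @ d, b - A @ x          # A(x + t d) <= b  <=>  t * Ad <= slack
        t_hi = np.min(slack[Ad > 1e-12] / Ad[Ad > 1e-12])
        t_lo = np.max(slack[Ad < -1e-12] / Ad[Ad < -1e-12])
        x = x + rng.uniform(t_lo, t_hi) * d   # uniform point on the chord through x
        samples.append(x.copy())
    return np.array(samples)

# Example: the unit square [0, 1]^2 written as A x <= b
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
pts = hit_and_run(A, b, x=np.array([0.5, 0.5]))
print("sample mean:", pts.mean(axis=0))       # close to the centroid (0.5, 0.5)
```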

16:00
Minimal Euclidean Distance degree of Segre-Veronese varieties

ABSTRACT. The Euclidean Distance (ED) degree of a real algebraic variety X counts the number of (complex) critical points of the distance function from a generic point to X. It is a measure of the algebraic complexity of the corresponding distance minimization problem. By definition, the ED degree depends on the inner product on the ambient space. A natural problem is to understand for which inner product the ED degree achieves its minimal value. I will discuss this question for the Segre-Veronese variety, for which the above optimization problem is equivalent to the well-known problem of rank-1 approximation of a partially symmetric tensor. I will talk about a conjecture asserting that the Frobenius inner product realizes the minimum of the ED degree and discuss results around it.
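Concretely (the standard definition, stated here for orientation rather than taken from the talk), for a generic data point $u$,
\[
\mathrm{EDdegree}(X) \;=\; \#\,\bigl\{\, x \in X_{\mathrm{sm}} \;:\; u - x \,\perp\, T_x X \,\bigr\},
\]
where the critical points are counted over the complex numbers, $X_{\mathrm{sm}}$ denotes the smooth locus, and orthogonality is taken with respect to the chosen inner product, which is exactly why the count can change when the inner product does.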

16:30
Rational SOS certificates of any polynomial over its zero-dimensional gradient ideal
PRESENTER: Chaoping Zhu

ABSTRACT. Given a multivariate polynomial with rational coefficients, we present rational certificates of non-negativity based on a sum-of-squares decomposition, under the assumption that its gradient ideal is zero-dimensional. This is joint work with Matías Bender and Elias Tsigaridas.
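Roughly, the certificates in question take the form (a sketch; precise statements and degree bounds are in the work itself)
\[
f \;=\; \sum_{i}\sigma_i^{\,2} \;+\; \sum_{j=1}^{n} q_j\,\frac{\partial f}{\partial x_j},
\qquad \sigma_i,\ q_j \in \mathbb{Q}[x_1,\dots,x_n],
\]
so that, on the real zero set of the gradient ideal, $f$ equals a sum of squares of rational polynomials and its non-negativity there is certified with purely rational data.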


17:00
The Stochastic Central Path: Sampling the Optimal Solution of a Convex Program with Random Parameters

ABSTRACT. We present a novel method for sampling the optimal solution of unconstrained, strictly convex optimization problems with random parameters. The motivating applications are methods in two-stage stochastic programming, which often rely on computing (the expectation of) optimal dual variables for linear programs with random coefficients.

Conventional methods typically proceed by generating samples of the parameters and then solving the optimization problem for each individual realization. By contrast, we directly generate samples of the optimal solution by simulating an appropriately chosen diffusion process, whose steady state has the same probability distribution as the optimal solution.

We present preliminary numerical results for the case where the random optimization problem is a log-barrier regularization of a linear program, and the diffusion is simulated via Unadjusted Langevin Monte Carlo (ULMC).
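For orientation (a generic reminder, not the paper's specific construction), the unadjusted Langevin iteration for a smooth potential $V$ reads
\[
X_{k+1} \;=\; X_k \;-\; h\,\nabla V(X_k) \;+\; \sqrt{2h}\,\xi_k, \qquad \xi_k \sim \mathcal{N}(0, I),
\]
and for a small step size $h$ its iterates are approximately distributed according to the density proportional to $e^{-V}$; in the method above, the simulated diffusion is instead chosen so that its steady state matches the distribution of the optimal solution of the random log-barrier problem.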

15:30-17:00 Session 13F: Invited sessions: Mean-Field Games, Part II
15:30
A mean-field-game approach to overfishing
PRESENTER: Ziad Kobeissi

ABSTRACT. In this talk, we investigate an instance of the tragedy of the commons in spatially distributed harvesting games. The model we choose is that of a fish population governed by a parabolic bistable equation and harvested by fishermen. We assume that, when no fisherman is present, the fish population is invading (mathematically, there is an invading travelling front). Is it possible that fishermen, each acting selfishly in his or her own best interest, might cause a reversal of the travelling wave and, consequently, the extinction of the global fish population? To answer this question, we model the behaviour of individual fishermen using a Mean Field Game approach, and we show that the answer is yes. We then show that, at least in some cases, if the fishermen coordinated instead of acting selfishly, each of them could make a larger profit while still guaranteeing the survival of the fish population. Our study is illustrated by several numerical simulations.

16:00
On common noise in finite state space mean field games
PRESENTER: Charles Meynard

ABSTRACT. In this talk, we present well-posedness results for a master equation on a finite state space involving terms modelling common noise. In this setting, the solution of the master equation depends on an additional variable modelling the value of a stochastic process impacting all players. This is a natural modelling of common noise, which includes situations in which the dynamics of each player are impacted by a common environment variable.

We present sufficient conditions for the existence and uniqueness of a Lipschitz continuous solution to the master equation on any time interval. Under more restrictive structural assumptions, we also treat cases in which the dynamics of this stochastic process depend on the state of the game.

16:30
Strategic geometric graphs through mean field games

ABSTRACT. We exploit the structure of geometric graphs on Riemannian manifolds to analyze strategic dynamic graphs in the limit where the number of nodes tends to infinity. This framework makes it possible to preserve intrinsic geometric information about the limiting graph structure, such as the Ollivier curvature. After introducing the setting, we derive a mean field game system, which models a strategic equilibrium between the nodes. It has the usual structure, with the distinction of being set on a manifold. Finally, we establish existence and uniqueness of solutions to the system when the Hamiltonian is quadratic, for a class of not necessarily compact Riemannian manifolds referred to as manifolds of bounded geometry.

15:30-17:00 Session 13G: Large scale stochastic dynamic programming
15:30
State Abstraction Discovery for Infinite Horizon Dynamic Programming
PRESENTER: Orso Forghieri

ABSTRACT. A Markov Decision Process (MDP) is a framework used to solve stochastic dynamic control problems, where the environment evolves based on the actions of an agent. The high dimensionality of MDPs can be addressed using decomposition techniques such as state and action space abstractions in Hierarchical Reinforcement Learning. While state abstraction methods have been explored over the past few decades, explicit methods for constructing useful model abstractions are rarely provided.

We present a practical criterion for building meaningful abstractions. To achieve this, we estimate the error introduced by the abstraction and evaluate the abstract MDP using a bound on the optimality gap. Finally, we propose a method to solve MDPs with controlled error by progressively refining the abstractions, allowing for the discovery of explicit state abstractions. We prove the convergence of this method under both the discounted and total reward criteria. Recent implementations show that this approach reduces solving time relative to traditional methods and other aggregation algorithms across various models. It also reveals the underlying dynamics of MDPs without making assumptions about the problem's structure.

16:00
Relative Value Iteration for Infinite-Horizon SDDP: Application to Hydroelectric Problem
PRESENTER: Francis Durand

ABSTRACT. We study dynamic programming (DP) algorithms for long-term hydroelectric water management, focusing on the infinite-horizon extension of Stochastic Dual Dynamic Programming (SDDP). While SDDP is effective in decision-making with continuous state and control spaces, extending it to infinite-horizon problems presents challenges, especially as the discount factor approaches 1.

To address the slow convergence of cuts in this setting, we propose a variant based on relative value iteration (RVI), where cuts are additively shifted to keep them relevant and activate them faster. Numerical results demonstrate the benefits of this approach.

16:30
Dynamic two-stage programming for the stochastic lot-sizing problem with inventory bounds and lost sales
PRESENTER: Victor Spitzer

ABSTRACT. The presented work introduces a novel approach designed to solve the multi-stage stochastic lot-sizing problem with inventory bounds and lost sales. This optimization problem aims at minimizing the production costs of an item supplied to a customer under uncertain demand, while managing a limited-capacity storage and penalties in case of stock-outs.

It is motivated by use-cases in the green hydrogen production industry, where electricity from both uncertain renewable assets and a costly connection to the distribution grid is used to produce low-carbon hydrogen. A multi-stage stochastic problem has to be studied in order to correctly represent the successive production decisions and uncertainty realizations throughout the planning horizon. The missed opportunity to produce from the uncertain renewable sources can alternatively be represented as a loss induced by a failure to supply demand; hence the problem is similar to the stochastic lot-sizing problem with inventory bounds and lost sales.

We propose a novel approach relying on dynamic programming to decompose this lot-sizing model into smaller two-stage stochastic sub-problems, each corresponding to a single stage of the problem. This method distinguishes itself from more standard dynamic programming schemes by its ability to manage complex production models over a large planning horizon. Numerical results demonstrate the benefits of this approach, both on theoretical models and on use-cases from the green hydrogen production industry.

15:30-17:00 Session 13H: Invited Session : New algorithms for old problems
15:30
Explicit Convergence Rate of the Proximal Point Algorithm under R-Continuity
PRESENTER: Michel Thera

ABSTRACT. The paper provides a thorough comparison between R-continuity and other fundamental tools in optimization such as metric regularity, metric subregularity and calmness. We show that R-continuity has some advantages in the convergence rate analysis of algorithms for solving optimization problems. We also present some properties of R-continuity and derive the explicit convergence rate of the Proximal Point Algorithm (PPA) under R-continuity.
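For reference, the algorithm in question is, in its standard form, the iteration
\[
x_{k+1} \;=\; \bigl(I + \lambda_k T\bigr)^{-1}(x_k), \qquad \lambda_k > 0,
\]
for a maximally monotone operator $T$ (e.g., the subdifferential of a convex function); equivalently, $x_{k+1}$ is the unique point with $0 \in \lambda_k T(x_{k+1}) + x_{k+1} - x_k$, and the talk quantifies how fast this sequence converges under R-continuity.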

16:00
A Fresh Look at Algorithms for Solving Smooth Multiobjective Optimization Problems
PRESENTER: Sorin-Mihai Grad

ABSTRACT. We propose a new approach for constructing practical algorithms for solving smooth multiobjective optimization problems, based on determining decreasing directions via suitable linear programming problems and exploiting a characterization of descent directions provided by Illés and Lovics. The presented iterative method is specialized for unconstrained and linearly constrained multiobjective optimization problems. In all cases, the sequence of objective function values is decreasing with respect to the corresponding nonnegative orthant, and every accumulation point of the sequence generated by the algorithm is a substationary point of the considered multiobjective optimization problem, becoming, under convexity assumptions, a weakly Pareto efficient solution. Unlike similar algorithms from the literature, the ones proposed in this work involve decreasing directions that are easily computable in polynomial time.
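A sketch of the kind of auxiliary linear program involved (generic form, assumed here rather than quoted from the paper): at a point $x$ with smooth objectives $f_1,\dots,f_m$, a common descent direction can be sought by solving
\[
\min_{d,\;t}\ t
\quad\text{s.t.}\quad
\nabla f_i(x)^{\!\top} d \;\le\; t \quad (i=1,\dots,m), \qquad \|d\|_{\infty} \le 1,
\]
whose optimal value is negative exactly when a direction decreasing all objectives simultaneously exists.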

16:30
A Newton-type Method for Constrained Optimization

ABSTRACT. This work provides a Newton-type method for non-smooth equality- and inequality-constrained optimization. Feasibility problems can be seen as underdetermined equations, and thus Newton's method can be easily adapted, with the caveat that the output of each iteration is an affine subspace of R^n. Optimization with affine equality constraints simplifies to unconstrained optimization in a lower-dimensional subspace. The key insight of our algorithm consists in employing an operator splitting approach in order to combine the two methods, yielding a superlinearly convergent method for general equality-constrained optimization. In order to analyze this algorithm, the correct notion of smoothness is that of Newton differentiability. This notion is based on the work of Qi [1], and is strictly weaker than standard Fréchet differentiability. The benefit of our algorithm, as opposed to Riemannian Newton or projected Newton-type methods, is that we do not require any hard-to-compute geometric information, such as arbitrary projectors onto a set or Riemannian exponentials and logarithms. Compared with SQP, our method can similarly be formulated as a quadratic program, with the advantage that the solution of this program is obtained by solving a linear system, circumventing the expensive quadratic solver. We provide numerical results showcasing our algorithm against a suite of classic solvers implemented in the Julia programming language. Finally, for an inequality-constrained optimization problem, we adapt our algorithm by replacing equalities with inequalities, but this raises difficulties by converting linear systems into non-linear systems. In order to solve these problems, we use ideas similar to Riemannian geometry to define a notion of pseudo-invertibility, sufficient for showing superlinear convergence of our algorithm.
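In symbols, one Newton step for the underdetermined system $F(x)=0$, $F:\mathbb{R}^n\to\mathbb{R}^m$ with $m<n$, returns the affine subspace
\[
\mathcal{A}(x) \;=\; \bigl\{\, y \in \mathbb{R}^n \;:\; F(x) + F'(x)\,(y - x) = 0 \,\bigr\},
\]
on which the lower-dimensional unconstrained optimization step of the splitting can then be carried out (a schematic rendering of the description above, not the paper's precise statement).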

15:30-17:00 Session 13I: Learning and optimization
Chair:
15:30
Machine Learning-Driven Inflow Forecasting for Optimizing Hydropower Maintenance Scheduling
PRESENTER: Maissa Daadaa

ABSTRACT. The aim of this study is to evaluate the impact of different Hydropower Production Function (HPF) formulations on maintenance scheduling under uncertainty. Three approximation methods are used: a convex hull approximation, a piecewise linear approximation, and a polynomial function. Accurate scenario generation plays a crucial role in comparing these methods, as inflow uncertainty is one of the most important challenges in stochastic hydropower optimization. Therefore, in this study, three machine learning models, Long Short-Term Memory (LSTM), Kernel Ridge Regression (KRR), and XGBoost, are employed to forecast future inflows based on historical data. The predicted inflows from each model are compared using different strategies to identify which model provides the most accurate results. The results from each model are then used to generate scenarios for evaluating and comparing the three approximation methods, and to assess their impact on the solution.
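A minimal sketch of the forecasting step on synthetic data (lagged inflows fed to kernel ridge regression; LSTM or XGBoost models would be plugged in the same way, and all numbers below are illustrative):

```python
# Forecast the next inflow from the last `lags` observations with kernel ridge regression.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(5)
t = np.arange(2000)
inflow = 50 + 20 * np.sin(2 * np.pi * t / 365) + 5 * rng.normal(size=t.size)  # seasonal + noise

lags = 7
X = np.column_stack([inflow[i:len(inflow) - lags + i] for i in range(lags)])  # lag features
y = inflow[lags:]                                                              # next-day inflow

split = int(0.8 * len(y))
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", round(float(np.sqrt(np.mean((pred - y[split:]) ** 2))), 2))
```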

16:00
Generalized Kantorovich-Rubinstein Duality and Applications to Dictionary Learning
PRESENTER: Joel Garde

ABSTRACT. We study dictionary learning as a sparse optimization over the space of measures, generalizing the classical lasso problem. We offer a new perspective on this problem using a generalized Kantorovich-Rubinstein duality. Measures are put in duality with c-concave functions using Minkowski gauges. The introduction of transportation costs allows to study the “sliding” steps — iterations where the support of the measure is moved, as Wasserstein gradients. Our developments are done when the cost is the Bregman Divergence of an entropy h. In this setting the transport map is the gradient of a convex function, generalizing Brenier’s theorem, and the c-concave functions are the smooth functions relative to h.