PGMODAYS 2025
PROGRAM FOR WEDNESDAY, NOVEMBER 19TH

08:30-09:10 Coffee Break
10:00-10:30 Coffee Break
10:30-12:00 Session 11A: Invited session: Neural Network approximations and related techniques for high-dimensional PDEs and Control Problems, part II
10:30
Learning Value Functions with High-order Information

ABSTRACT. This talk is devoted to recent results on the approximation of high-dimensional HJB PDEs with supervised learning. We will discuss supervised learning of high-dimensional value functions using high-order data, a.k.a. Sobolev training. We expand our previous work on gradient-augmented learning for value functions to incorporate second-order (Hessian) information. Unlike gradient information, which is available directly from PMP solvers, the Hessian values are recovered by solving a non-autonomous Riccati equation along the optimal trajectory. Hence, it is crucial to understand the dimension/nonlinearity regime where the cost of obtaining second-order information is justified in terms of data efficiency.
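For orientation, a minimal numerical sketch of a value/gradient/Hessian-matching (Sobolev) loss is given below; the quadratic surrogate, the weights lam_g and lam_h, and the synthetic data are illustrative assumptions and not the speakers' implementation.

import numpy as np

def predict(theta, x):
    # Quadratic surrogate V_theta(x) = x^T Q x + q^T x + c with analytic derivatives.
    Q, q, c = theta
    return x @ Q @ x + q @ x + c, (Q + Q.T) @ x, Q + Q.T

def sobolev_loss(theta, xs, vs, gs, hs, lam_g=1.0, lam_h=0.1):
    # Value, gradient and Hessian mismatches are penalized jointly.
    loss = 0.0
    for x, v, g, h in zip(xs, vs, gs, hs):
        pv, pg, ph = predict(theta, x)
        loss += (pv - v) ** 2 + lam_g * np.sum((pg - g) ** 2) + lam_h * np.sum((ph - h) ** 2)
    return loss / len(xs)

# Synthetic data for the toy target V(x) = ||x||^2; in practice gradients would come
# from a PMP solver and Hessians from a Riccati sweep along optimal trajectories.
rng = np.random.default_rng(0)
d, n = 4, 32
xs = rng.normal(size=(n, d))
vs = np.sum(xs ** 2, axis=1)
gs = 2.0 * xs
hs = np.tile(2.0 * np.eye(d), (n, 1, 1))
theta = (np.eye(d), np.zeros(d), 0.0)
print(sobolev_loss(theta, xs, vs, gs, hs))   # ~0, since the surrogate matches the target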

11:00
Finite-Difference Least Square Method for Solving Hamilton-Jacobi Equations Using Neural Networks

ABSTRACT. In recent years, advancements in deep learning and new optimisation algorithms have motivated the use of artificial neural networks to solve non-linear problems in high-dimensional setups. One of the crucial steps during the implementation of any deep learning method is the choice of the loss functional, which is used to train the neural network parameters, typically through a gradient-based method. In this talk, I will consider the approximation of the viscosity solution for Hamilton-Jacobi equations by means of an artificial neural network. I will discuss the choice of the loss functional, which should be such that any critical point approximates the viscosity solution. I will present some recent results concerning loss functionals involving a consistent and monotone numerical Hamiltonian of Lax-Friedrichs type. Using the numerical diffusion built in the numerical Hamiltonian, we are able to prove that any critical point solves the associated finite-difference problem and, therefore, approximates the viscosity solution. I will also present a method in which the numerical diffusion of the numerical scheme is decreased during the training, allowing for approximations with less numerical diffusion.
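For concreteness, one standard one-dimensional Lax-Friedrichs numerical Hamiltonian and the corresponding least-squares loss, written here for a model stationary equation $H(u_x)(x)=f(x)$, could read (an illustrative form; the talk's exact functional may differ)
\[
\hat H_{\mathrm{LF}}(p^-,p^+)\;=\;H\!\Big(\tfrac{p^-+p^+}{2}\Big)\;-\;\tfrac{\sigma}{2}\,(p^+-p^-),
\qquad
\mathcal{L}(\theta)\;=\;\sum_{i}\Big(\hat H_{\mathrm{LF}}\big(D^-u_\theta(x_i),\,D^+u_\theta(x_i)\big)-f(x_i)\Big)^{2},
\]
where $D^{\pm}u_\theta(x_i)$ are backward/forward difference quotients of the network $u_\theta$ and $\sigma>0$ is the built-in numerical diffusion that the last part of the talk proposes to decrease during training.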

11:30
Separable Representations of Optimal Value Functions via Neural Networks
PRESENTER: Mario Sperl

ABSTRACT. In this talk, we discuss how separable structures provide an effective approach to approximating high-dimensional optimal value functions. The key structural property that enables such approximations is a decaying sensitivity between subsystems, meaning that the influence of one state variable on another diminishes with their graph-based spatial distance. This property makes it possible to construct separable approximations of the optimal value function as a sum of localized contributions. We further demonstrate that these separable approximations admit efficient neural network representations, where the number of parameters grows only polynomially with the state space dimension. These results highlight how structural properties of the problem can be leveraged to obtain scalable neural network representations, thereby mitigating the curse of dimensionality in optimal control.

10:30-12:00 Session 11B: Invited session: Quantum computing and combinatorial optimization
10:30
Sampled-Based Guided Quantum Walk: Non-variational quantum algorithm for combinatorial optimization
PRESENTER: Ugo Nzongani

ABSTRACT. We introduce SamBa-GQW, a novel quantum algorithm for solving binary combinatorial optimization problems of arbitrary degree without using any classical optimizer. The algorithm is based on a continuous-time quantum walk on the solution space represented as a graph. The walker explores the solution space to find its way to vertices that minimize the cost function of the optimization problem. The key novelty of our algorithm is an offline classical sampling protocol that gives information about the spectrum of the problem Hamiltonian. The extracted information is then used to guide the walker to high-quality solutions via a quantum walk with a time-dependent hopping rate. We investigate the performance of SamBa-GQW on several quadratic problems, namely MaxCut, maximum independent set, and portfolio optimization, and on higher-order polynomial problems such as LABS, MAX-k-SAT, and a quartic reformulation of the travelling salesperson problem. We empirically demonstrate that SamBa-GQW finds high-quality approximate solutions on problems up to a size of n=20 qubits by sampling only n^2 states among the 2^n possible decisions. SamBa-GQW also compares very well with other guided quantum walks and with QAOA.
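A toy numerical sketch of the underlying ingredient, a continuous-time quantum walk on the hypercube with a time-dependent hopping rate, is given below; the 3-qubit cost values and the linear schedule are made-up placeholders and do not reproduce the SamBa-GQW sampling protocol.

import numpy as np
from scipy.linalg import expm

n = 3                                               # qubits -> 2^n basis states
costs = np.array([3, 1, 2, 0, 2, 3, 1, 2], float)   # made-up cost function values
H_P = np.diag(costs)                                # problem Hamiltonian

# Hypercube adjacency: basis states differing in one bit are connected.
A = np.zeros((2**n, 2**n))
for i in range(2**n):
    for b in range(n):
        A[i, i ^ (1 << b)] = 1.0

psi = np.full(2**n, 1.0 / np.sqrt(2**n), dtype=complex)   # uniform superposition
T, steps = 10.0, 200
dt = T / steps
for k in range(steps):
    gamma = 1.0 - k / steps                         # illustrative schedule for the hopping rate
    H = -gamma * A + H_P
    psi = expm(-1j * dt * H) @ psi

probs = np.abs(psi) ** 2
print("most probable state:", np.argmax(probs), "with probability", np.round(probs.max(), 3))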

11:00
Guaranteed performance in short-time quantum annealing
PRESENTER: Arthur Braida

ABSTRACT. Quantum annealing (QA) is an analogue paradigm for quantum optimization that remains a leading candidate for demonstrating quantum advantage on combinatorial problems. Unlike gate-based methods such as the Quantum Approximate Optimization Algorithm (QAOA), QA evolves a system continuously under a time-dependent Hamiltonian, a feature that makes rigorous analysis challenging and has limited provable performance guarantees to date. Numerical comparisons between QA and QAOA are informative but constrained by the classical cost of simulating Schrödinger dynamics and by current annealer sizes, so analytical tools are needed to assess worst-case behavior.

Here we introduce a parametrized (tunable) variant of QA and use it to perform a precise 1-local analysis on regular graphs. The analysis rests on a tight Lieb-Robinson bound derivation tailored to QA dynamics by exploiting the commutativity graph construction, together with a simple prefactor on the initial (driver) Hamiltonian that improves analytical tightness without altering the algorithmic procedure. Using MaxCut on cubic graphs as a benchmark, we show that a linear-schedule, constant-time QA, when analyzed as a 1-local algorithm, achieves a worst-case approximation ratio exceeding 0.7020. This guarantee surpasses one-layer QAOA (≈0.6925) and the best known classical 1-local algorithm (≈0.6980).

Our results demonstrate that refined locality bounds and modest, analytically motivated Hamiltonian adjustments can materially tighten provable performance guarantees for QA, narrowing the gap between numerical evidence and rigorous theory.

11:30
Average case analysis of Branch and Bound algorithms: Is there potential for quantum advantage?

ABSTRACT. Combinatorial optimization problems are ubiquitous across machine learning, finance, and operational research. Although worst-case analysis classifies these NP-hard problems as exponentially complex, Branch-and-Bound (BnB) algorithms, particularly those leveraging relaxation heuristics, often achieve practical efficiency. This work addresses the critical gap in understanding the average-case complexity of BnB in this setting. We develop an analytical framework based on the statistical properties of problem costs and their relaxations. Focusing on Quadratic Unconstrained Binary Optimization (QUBO) as a prototypical NP-hard problem for our analysis, we formally prove that its subproblems' costs are asymptotically normally distributed under specific spectral conditions on the objective function matrix.

Leveraging this, our analysis demonstrates a critical transition in the expected size of the BnB tree: it transitions from exponential growth to a more constrained, near-polynomial growth regime. This transition is accurately predicted by a critical pruning depth which we derive analytically. Numerical experiments validate the normality premise across various QUBO instance types, including those derived from real-world NP-complete problems like Max-Cut, TSP, and Portfolio Optimization. Our experimental analysis on an SDP-based BnB solver demonstrates that empirical performance significantly outperforms theoretical worst-case predictions, with the observed peak-complexity depth aligning with our model's predictions. Furthermore, by comparatively analyzing Knapsack, Vertex Cover, and Binary TSP, we show that our model accurately predicts the vast differences in their practical tractability, which are not captured by worst-case NP-completeness equivalence alone.

These findings offer a novel, quantitative lens for understanding the performance of Branch-and-Bound algorithms, providing concrete guidance for developing improved classical, hybrid, and quantum optimization methods. This includes formalizing the strategic advantage of root-node integrality gap strengthening and of the average integrality gap, and establishing new dimensions for robust benchmarking of optimization algorithms. Our work thus advances the understanding of what makes hard problems tractable.

10:30-12:00 Session 11C: Invited session: Discrete Optimization and Energy II
10:30
Enhancing the Economic Performance and Resilience of Telecommunications Networks through Integrated Multi-Battery System Solutions
PRESENTER: Hadhbi Youssouf

ABSTRACT. Today, telecom networks face increasing economic pressures due to rising energy costs and the demand for higher operational efficiency. Flexible energy storage systems, especially batteries, are essential for balancing supply and demand fluctuations, maintaining network stability during peak loads, and providing reliable backup power during outages or emergencies. This work offers key insights into battery management strategies that enhance energy efficiency and profitability in multisite telecom networks within retail and curtailment markets. Building on Silva et al., we aim to develop an efficient optimization framework that incorporates additional safety rules and market constraints governing battery usage and energy trading. To address this, we first formulate the multi-battery management problem as a combinatorial optimization problem that ensures regulatory and safety compliance, while aiming to minimize energy costs and maximize profits from curtailment. We then introduce a novel ILP formulation for the problem and demonstrate its NP-hardness. Using data from Orange's radio access sites, results show that the problem becomes more challenging and that optimal solutions are out of reach for large-scale instances. To deal with this, we develop a greedy heuristic that quickly produces good-quality solutions. These serve as warm starts for the ILP solver, improving convergence and solution quality. We enhance this approach by incorporating metaheuristics and multiple neighborhood strategies to escape local optima and explore the solution space more effectively. Reinforcement learning is used to adaptively select the most promising moves based on past feedback, further boosting search efficiency and solution quality. This hybrid method combines heuristics, metaheuristics, machine learning, and ILP to deliver high-quality solutions suitable for real-world instances.

11:00
Dominance Properties for Electricity Distribution Planning with Battery Cycling Considerations
PRESENTER: Natalia Jorquera

ABSTRACT. In this work, we study collective self-consumption energy communities formed within residential buildings, where houses jointly invest in photovoltaic (PV) panels and a shared electricity storage system. Each house can satisfy its electricity demand using PV generation, the shared battery, or the main grid. Any surplus PV energy can either be stored in the battery or sold back to the grid; however, direct energy exchanges between houses are not permitted. Each house is equipped with a smart meter that monitors electricity consumption over time.

A central community manager is responsible for allocating the PV-generated electricity at each time step. The operational plan must satisfy key technical constraints: (i) a house cannot charge and discharge the battery simultaneously, and (ii) a house cannot buy and sell electricity to the grid within the same period.

We incorporate battery health considerations by explicitly tracking the number of charging cycles. A charging cycle is defined as the period between the transition from discharging to charging and the subsequent transition from charging to discharging. This feature allows us to better capture the long-term operational impact of storage usage.

We formulate this problem as a mixed-integer linear programming (MILP) model to determine an optimal electricity distribution plan for the community, accounting for battery cycling dynamics. Moreover, we exploit structural properties of the problem to show that it can be solved efficiently using dynamic programming techniques inspired by classical lot-sizing algorithms. Finally, we test our algorithms on real-life instances to validate their performance and practical applicability.

11:30
An iterative deal approach to increase self-consumption for a high-voltage energy community
PRESENTER: Baptiste Ferrato

ABSTRACT. Within an energy community, the electricity consumed is generally not correlated with the energy produced. Thus, two types of periods can occur: (1) a period when consumption exceeds production, meaning consumers must purchase the shortfall from a supplier, and (2) a period when consumption is lower than production, meaning producers must sell their surplus back to the grid. Limiting these periods of deficit and surplus would maximize self-consumption within the community, thereby reducing energy costs. The objective of this work is to establish a strategy to encourage companies to modify their consumption in order to make the best use of the energy produced within the community.

10:30-12:00 Session 11D: Invited session: Beyond Euclidean Convexity: Methods and Algorithms I
10:30
Gradient Descent and Flows with General Costs

ABSTRACT. How to go beyond the square distance $d^2$ in optimization algorithms and flows in metric spaces? Replacing it with a general cost function $c(x,y)$ and using a majorize-minimize framework, I will detail a generic class of algorithms encompassing Newton/mirror/natural/Riemannian gradient descent, Sinkhorn, and EM by reframing them as alternating minimizations, each for a different cost $c(x,y)$.

Rooted in cross-differences, the convergence theory to the infimum and to the continuous flow is based on a (discrete) evolution variational inequality (EVI), which enjoys properties similar to those of the EVI with a $d^2$ regularizer. This provides a theoretical framework for studying splitting schemes beyond the usual implicit Euler in gradient flows on metric spaces (like the JKO scheme). This talk is based on the works \cite{ref2} with Flavien Léger (INRIA Paris), and \cite{ref} with Giacomo Sodini and Ulisse Stefanelli (Uni Vienna).
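Schematically, and with notation that may differ from the talk, the alternating-minimization viewpoint can be sketched as
\[
f(x)=\min_{y}\phi(x,y),\qquad y_{k+1}\in\arg\min_{y}\phi(x_k,y),\qquad x_{k+1}\in\arg\min_{x}\phi(x,y_{k+1}),
\]
where, for instance, choosing $\phi(x,y)=f(y)+\langle\nabla f(y),x-y\rangle+c(x,y)$ with $c(x,y)=\tfrac{1}{2\tau}\|x-y\|^{2}$ and $f$ being $\tfrac{1}{\tau}$-smooth recovers explicit gradient descent, while Bregman costs recover mirror descent.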

11:00
Synthetic Nonnegative Cross-Curvature and Spaces of Measures

ABSTRACT. Nonnegative cross-curvature is a geometric condition related to a pseudo-Riemannian metric defined on a space $X\times Y$ endowed with a cost $c:X\times Y\to \mathbb{R}$. Originally introduced in relation to the regularity theory of optimal transport, it has recently found a surprising link with optimization, providing a convenient framework to generalize classical optimization algorithms. In order to work with infinite-dimensional spaces and non-smooth costs, we introduced a synthetic formulation of nonnegative cross-curvature and generalized several known results. This condition is similar in spirit (and actually tightly related) to nonnegative curvature bounds à la Alexandrov. In particular, our main interest lies in spaces of (probability) measures. We showed that several distances and divergences satisfy this condition, such as the Hellinger and Fisher-Rao distances or the Kullback-Leibler divergence. Our main result, however, concerns optimal transport costs: the space $(\mathcal{P}(X)\times \mathcal{P}(Y),\mathcal{T}_c)$ has nonnegative cross-curvature if and only if $(X\times Y,c)$ does as well, where $\mathcal{T}_c$ is the optimal transport cost related to $c$. Notably, this implies that the celebrated Wasserstein distance $W_2$ has nonnegative cross-curvature, which opens up interesting perspectives related to optimization in the Wasserstein space.

11:30
Mirror and Preconditioned Gradient Descent in Wasserstein Space

ABSTRACT. As the problem of minimizing functionals on the Wasserstein space encompasses many applications in machine learning, different optimization algorithms on $\mathbb{R}^d$ have received their counterparts on the Wasserstein space. We focus here on lifting two explicit algorithms: mirror descent and preconditioned gradient descent. These algorithms have been introduced to better capture the geometry of the function to minimize and are provably convergent under appropriate (namely relative) smoothness and convexity conditions. Adapting these notions to the Wasserstein space, we prove convergence guarantees for some Wasserstein-gradient-based discrete-time schemes for new pairings of objective functionals and regularizers. The difficulty here is to carefully select along which curves the functionals should be smooth and convex. We illustrate the advantages of adapting the geometry induced by the regularizer on ill-conditioned optimization tasks, and showcase the improvement of choosing different discrepancies and geometries in a computational biology task of aligning single cells.

10:30-12:30 Session 11E: Invited session: Recent advances in Multi Objective Optimization: Theory meets Practice
10:30
Asymptotic behavior of penalty dynamics for constrained variational inequalities
PRESENTER: Siqi Qu

ABSTRACT. We propose a comprehensive framework for solving constrained variational inequalities via various classes of evolution equations displaying multi-scale aspects. We work on a three-operator formulation of a general class of variational problems, where a constrained equilibrium of the sum of two maximally monotone operators is sought over a domain which admits a representation as the set of zeros of the single-valued monotone operator. This method has many applications in optimal control and optimization, in particular those of a hierarchical nature. Specifically, convex simple bilevel problems belong to the above setting, a challenging class of optimization problems which has received a lot of attention recently.

This talk consists of two parts. First, in an infinite-dimensional Hilbertian framework, we introduce a class of dynamical systems that combine Tikhonov regularization and exterior penalization terms in order to induce strong convergence of trajectories to least-norm solutions in the constrained domain. Our construction thus unifies the literature on regularization methods and penalty-based dynamical systems. Later on, we present new ideas on how to approach the resolution of inclusion problems in the presence of stochastic data. We present a class of stochastic differential inclusions with semimartingale noise and study its asymptotic convergence.

11:00
Tikhonov Regularized Exterior Penalty Methods For Hierarchical Variational Inequalities
PRESENTER: Meggie Marschner

ABSTRACT. We consider hierarchical variational inequalities consisting of an (upper-level) variational inequality whose feasible set is given by the solution set of a (lower-level) variational inequality. Purely hierarchical convex bilevel optimization problems and certain multi-follower games are particular instances of nested variational inequalities. Working within a real Hilbert space setting, we combine Tikhonov and proximal regularization terms to develop a double-loop algorithm with convergence guarantees towards a solution of the nested VI problem. A Krasnoselskii-Mann (KM) iteration serves as the inner-loop scheme, involving a set of user-provided parameters that are updated over time. The involved fixed-point encoding map is user-defined, where the Forward-Backward splitting and the Three Operator Splitting scheme are possible instances for the KM map. Defining two gap functions, one measuring optimality and one measuring feasibility, we are able to derive ergodic convergence rates for the inner and the upper level. We present various applications that fit into our framework and some preliminary numerical results.

11:30
Flow of Gas Mixtures in Networks: Modeling and Multi-Objective Optimization
PRESENTER: Alena Ulke

ABSTRACT. As climate change necessitates a transition away from fossil fuels, hydrogen emerges as a promising alternative. Additionally, hydrogen-natural gas mixtures provide a viable interim strategy for reducing greenhouse gas emissions. As a result, there is a growing interest in modeling and optimizing the flow of these mixtures in networks. Operating these networks requires satisfying physical and safety constraints while balancing multiple objectives, such as maximizing profit, minimizing costs, increasing the hydrogen share, or reducing emissions, which often conflict with each other.

To address these challenges, we present a mathematical model, the so-called mixture model, that describes the flow of gas mixtures in networks. The model is based on the physical principles of the isothermal Euler equations and the mixing of incoming flow at nodes. Building on this model, we formulate a framework for multi-objective optimization problems that reflect the competing objectives occurring in gas network operation. This framework allows us to compute optimal compromises between the objectives and we show that for a certain class of objectives such optimal compromises exist. We also provide a numerical example to showcase the effectiveness of the proposed idea.

12:00
Probabilistic Robustness for Compressor Controls in Transient Pipeline Networks
PRESENTER: Hendrik Wilka

ABSTRACT. Uncertainty quantification plays a central role in the modeling and optimization of complex processes such as gas transport simulations. We present a novel strategy to measure the probabilistic robustness of deterministically computed controls in a pipeline network with uncertain gas demand.

Our approach starts by computing optimal controls for a deterministic gas transport problem, where we minimize the total control cost subject to box constraints on the pressure. We then evaluate the probability that the a priori calculated deterministic optimal control satisfies the uncertain pressure bounds in a scenario with randomly perturbed gas demand. To address this, we introduce buffer zones that tighten the pressure bounds in the deterministic scenario. The effect of these stricter bounds is then analyzed in the uncertain setting.

For the computation of the probability, we utilize kernel density estimation on random samples of the uncertain pressure. To reduce the computational effort, we combine this technique with sparse-grid-based stochastic collocation, where only a limited number of gas transport simulations are needed at selected points in the stochastic space.

We present numerical results of our approach applied to a gas network from a public gas library and discuss the effect of buffer zones.

10:30-12:00 Session 11F: Optimization and Geometrical considerations
10:30
Strong Total Cover and Mixed Dominating Set Problems: Polyhedra and Algorithms
PRESENTER: Luca Ferrarini

ABSTRACT. The Strong Total Cover Problem (STCP) is a novel covering problem defined on an undirected graph G = (V, E), where the goal is to find a minimum subset D ⊆ V ∪ E such that every element of V ∪ E is covered: a vertex covers itself and all incident edges, while an edge covers itself and its two endpoints. This model generalizes the classical Vertex Cover and Edge Cover problems, but the stricter covering rules make it fundamentally different from the well-studied Mixed Dominating Set Problem (MDSP), where a vertex dominates its neighbors and incident edges, while an edge dominates its endpoints and adjacent edges. In this talk, we provide an overview of some results on STCP from polyhedral and algorithmic perspectives, while highlighting its connections to MDSP. We introduce the associated polytope, namely the Strong Total Cover Polytope, and derive several families of valid inequalities, some of which are shown to be facet-defining. In particular, we obtain complete linear descriptions for specific graph classes such as triangulated cactus graphs and cycles. From a computational perspective, we establish complexity results on general graphs and identify classes of graphs for which the problem can be solved in polynomial time. Finally, we investigate approximation aspects of both STCP and MDSP through the study of the maximum subdeterminant of the corresponding constraint matrices.
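As a point of reference, a natural integer-programming formulation consistent with the covering rules described above (our reading; the talk's exact model may differ) is
\[
\min \sum_{v\in V} x_v+\sum_{e\in E} y_e
\quad\text{s.t.}\quad
x_v+\sum_{e\in\delta(v)} y_e\ge 1\ \ \forall v\in V,\qquad
y_{uv}+x_u+x_v\ge 1\ \ \forall uv\in E,\qquad
x\in\{0,1\}^{V},\ y\in\{0,1\}^{E},
\]
where $\delta(v)$ denotes the edges incident to $v$; the convex hull of its feasible 0-1 points corresponds to the Strong Total Cover Polytope mentioned above.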

11:00
Topology Optimization with Local Perimeter Constraints and Non-Linear Filtering for Minimum Length and Overhang Control
PRESENTER: Jose Torres

ABSTRACT. A major challenge in the transport industry is the design of lightweight structures that contribute to lowering fuel consumption and reducing environmental impact. Topology optimization provides an effective approach to achieve such weight reduction while also minimizing design time and cost. However, the resulting geometries are often highly complex and typically require additive manufacturing for production.

Therefore, it is crucial to incorporate manufacturing constraints directly into the optimization process, such as enforcing minimum length scale and controlling overhangs, to avoid impractical shapes and volumes. The present work explores the integration of these constraints into topology optimization. Specifically, we propose the use of non-linear filtering techniques to penalize overhangs in the additive manufacturing context, while simultaneously enforcing a minimum length scale. In addition, this filtering approach is combined with a local perimeter measure, enabling the constraints to be applied in a more spatially localized manner.

Numerical results indicate that the proposed method successfully removes small-scale bar-like features and achieves vertically aligned structural elements, thereby enhancing manufacturability.

11:30
A Theoretical Framework for Grokking: Interpolation Followed by Riemannian Norm Minimisation
PRESENTER: Etienne Boursier

ABSTRACT. We study the dynamics of gradient flow with small weight decay on general training losses F. Under mild regularity assumptions and assuming convergence of the unregularised gradient flow, we show that the trajectory with weight decay exhibits a two-phase behaviour as the weight-decay parameter goes to 0. During the initial fast phase, the trajectory follows the unregularised gradient flow and converges to a manifold of critical points of F. Then, the trajectory enters a slow drift phase and follows a Riemannian gradient flow minimising the norm of the parameters. This purely optimisation-based phenomenon offers a natural explanation for the grokking effect observed in deep learning, where the training loss rapidly reaches zero while the test loss plateaus for an extended period before suddenly improving. We argue that this generalisation jump can be attributed to the slow norm reduction induced by weight decay, as explained by our analysis. We validate this mechanism empirically on several synthetic regression tasks.

10:30-12:00 Session 11G: Optimal control and biological models
10:30
Bang-Bang Light Control in Optogenetic Bioprocess Models: Optimal Protein Production in Yeast

ABSTRACT. In this talk, we revisit the optogenetic control framework introduced in [1], where light was used to regulate the unfolded protein response in Saccharomyces cerevisiae and proportional-integral-derivative (PID) feedback was shown to improve the yield of folded amylase. Building on this foundation, our recent study [2] investigated the corresponding optimal control problem in a batch culture setting. By reformulating the model and applying the Pontryagin Maximum Principle, we established that the optimal light input is of bang-bang type, a structure confirmed through numerical simulations and of potential practical interest. We will first summarize the key insights of this analysis, and then discuss ongoing efforts to extend the framework to more general bioprocess scenarios, including continuous cultures. Finally, we will see how the structure of the optimal solutions naturally raises the question of whether feedback-inspired strategies may be derived, opening perspectives for experimentally feasible implementations.

11:00
Optimal Control of the Coagulation-Fragmentation Equation

ABSTRACT. We introduce a novel class of controlled coagulation–fragmentation equations in which the control modulates the coagulation rate. To establish the existence of optimal controls, we adapt classical techniques from the theory of coagulation–fragmentation equations to account for the weak convergence of admissible controls. We then linearize the system, compute the gradient of the cost functional, and formulate a Pontryagin-type maximum principle via the associated adjoint equation. Moreover, we prove that the gradient is Lipschitz continuous, a property ensuring the convergence of gradient-based algorithms. Numerical experiments on (i) mitigating fine aerosol particles and (ii) concentrating mass within a prescribed size window demonstrate that a single low-dimensional actuator can effectively reshape an infinite-dimensional particle-size distribution.

11:30
Stability and Optimization of Perturbed Chemostat Systems

ABSTRACT. The chemostat system is a classical model for microbial evolution in controlled environments, famously predicting the competitive exclusion principle where only one species survives when competing for a single nutrient. Yet, real-world observations often defy this principle, showing coexistence of multiple species (though at low concentrations). First, we demonstrate that this discrepancy can be explained by introducing a perturbation term into the system dynamics. Under biologically reasonable assumptions on the perturbation, we prove that the system remains globally asymptotically stable around a coexistence steady state. This result is based on the application of the Malkin-Gorshin Theorem and on results of Smith and Waltman on perturbed steady states. In this setting, we also address the problem of simultaneously optimizing two objective functions: the productivity of species, which has potential applications in industrial contexts, and biodiversity, quantified through the Simpson index. Our numerical results highlight how optimizing species productivity while maintaining a predefined threshold for biodiversity alters optimal strategies compared to those designed for productivity only, as in previous works.

10:30-12:30 Session 11H: Stochastic optimization and control - part II
10:30
Approximately optimal distributed controls for high-dimensional stochastic systems with pairwise interaction through controls

ABSTRACT. We investigate high-dimensional stochastic optimal control problems involving many cooperating agents who aim to minimize a convex cost functional. We analyze two settings: a full-information model, where each agent observes the states of all others, and a distributed model, where agents only observe their own states. J. Jackson and D. Lacker already established a theoretical connection between these two models when agents interact through their state. In this context, they provide a sharp non-asymptotic bound on the gap between the value functions associated with these two problems. In other words, they quantify to what extent an optimal solution of the distributed control problem can be a good approximation for the optimal control of the full-information problem. Our paper builds on their work and provides similar results in the context of interaction through the controls. To derive these results, we follow the approach of J. Jackson and D. Lacker, employing a parallel theory of distributed stochastic control alongside the classical framework, characterizing optimizers through a Hamilton-Jacobi-type equation. The main difficulty of studying interaction through controls instead of interaction through states is that the optimal controls have a less tractable structure, as we lose the decoupling property of the N Hamiltonians. So far, we have to restrict our study to pairwise interaction through controls. The primary motivation for this study is the application to the management of the flexibility of the electrical system. Our model involves equipping each node of the electrical grid with small-scale local electricity storage, thereby easing the constraint of supply-demand balance. The goal is to optimize how much electricity each storage unit draws from or injects into the grid. The power flow in each transmission line is bounded by a maximum capacity, and this constraint can be modeled as a pairwise heterogeneous function of all control variables. As a result, the optimal control problem naturally falls within the framework developed in this work.

11:00
SAA Consistency for Optimal Control Problems with Almost Sure State Constraints

ABSTRACT. This talk is concerned with a class of risk-neutral stochastic optimization problems defined on a Banach space with almost sure conic-type constraints. This kind of problem appears in the context of optimal control with random differential equation constraints where the state of the system is further constrained almost surely. For this class of problems, we investigate the consistency of optimal values and solutions corresponding to sample average approximation (SAA) as the sample size is taken to infinity. Consistency is also shown in the case where a Moreau-Yosida-type regularization of the constraint is used. The existence of Lagrange multipliers can be guaranteed under Robinson's constraint qualification with an appropriate choice of function space for the constraint. Our assumptions allow us to also show consistency of SAA Karush-Kuhn-Tucker conditions. This work provides theoretical justification for the numerical computation of solutions frequently used in the literature and in experimentation.

11:30
Optimal Control under Uncertainty with Joint Chance State Constraints: Almost-Everywhere Bounds, Variance Reduction, and Application to (Bi)linear Elliptic PDEs

ABSTRACT. We study optimal control of partial differential equations (PDEs) under uncertainty with the state variable subject to joint chance constraints. The controls are deterministic, but the states are probabilistic due to random variables in the governing equation. Joint chance constraints ensure that the random state variable meets pointwise bounds with high probability. For linear governing PDEs and elliptically distributed random parameters, we prove existence and uniqueness results for almost-everywhere state bounds. Using the spherical-radial decomposition (SRD) of the uncertain variable, we prove that when the probability is very large or small, the resulting Monte Carlo estimator for the chance constraint probability exhibits substantially reduced variance compared to the standard Monte Carlo estimator. We further illustrate how the SRD can be leveraged to efficiently compute derivatives of the probability function, and we discuss different expansions of the uncertain variable in the governing equation. Numerical examples for linear and bilinear PDEs compare the performance of Monte Carlo and quasi-Monte Carlo sampling methods, examining probability estimation convergence as the number of samples increases. We also study how the accuracy of the probabilities depends on the truncation of the random variable expansion, and we numerically illustrate the variance reduction of the SRD.
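To illustrate the variance-reduction mechanism on a deliberately simple example (a hypothetical linear constraint on a standard Gaussian vector, not the paper's PDE setting), a spherical-radial decomposition estimator can be compared with crude Monte Carlo as follows:

import numpy as np
from scipy.stats import chi

d, b = 10, 3.0                        # estimate P(a^T xi <= b) with xi ~ N(0, I_d)
a = np.ones(d) / np.sqrt(d)
rng = np.random.default_rng(1)

# Crude Monte Carlo on the indicator function.
xi = rng.normal(size=(100_000, d))
ind = (xi @ a <= b).astype(float)

# Spherical-radial decomposition: xi = R * theta, R ~ chi(d), theta uniform on the sphere.
theta = rng.normal(size=(100_000, d))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)
s = theta @ a
# Along each ray the constraint holds for r in [0, b/s] if s > 0, and for all r >= 0 otherwise.
radial = np.where(s > 0, chi(df=d).cdf(b / np.maximum(s, 1e-12)), 1.0)

print(f"crude MC : {ind.mean():.5f} (per-sample variance {ind.var():.2e})")
print(f"SRD      : {radial.mean():.5f} (per-sample variance {radial.var():.2e})")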

12:00
Probabilistic Groundwater Control

ABSTRACT. In this talk we consider probabilistic control of the groundwater level in the soil of an area between two parallel channels (Dirichlet boundary conditions). The groundwater level is influenced by a meteorological source term involving the net difference between time-dependent precipitation and evaporation over ground \cite{mor1}. The aim is to keep the groundwater level in the middle of the area within a given tolerance of a nominal level by applying a suitable control. Since the source term is uncertain, we consider a worst-case approach by requiring that the state constraint (evaluated at one specific point) be kept in a certain region of tolerance for all times in the considered horizon and for every realization of the source term within a specified set. The big challenge in dealing with probabilistic constraints is that no analytical formula for the value and gradient of the probability function is available in general. A successful tool to achieve this goal in the context of random vectors with elliptically symmetric (in particular Gaussian) and related distributions (e.g., log-normal, truncated Gaussian, mixtures of Gaussians) is the so-called spherical-radial decomposition (SRD), which allows the simultaneous computation of values and gradients by means of spherical integrals \cite{ack1, ack2, shaker, hantoute}. Moreover, we will discuss the derivation of an analytic formula for the control-times-uncertainty-to-state operator and possibly nonsmooth necessary optimality conditions.

10:30-12:00 Session 11I: Multilevel optimization
10:30
A spatial branch-and-price-and-cut algorithm for the continuous network design problem
PRESENTER: David Rey

ABSTRACT. Transportation network design, or the problem of optimizing infrastructure for a societal goal, subject to individual travelers optimizing their behavior for their own preferences arises frequently in many contexts. However, it is also an NP-hard problem due to the leader-follower or bi-level structure involving a follower objective that is different from yet significantly affects the leader objective. Creating exact methods has been particularly difficult for the continuous network design problem (CNDP), in which leader variables are continuous, because of the many nonlinearities arising therein. We present an exact algorithm for the CNDP based on using the high-point relaxation, i.e., the system optimal CNDP, and value function cuts to find lower bounds and solving the traffic assignment follower problem to find upper bounds. We introduce a convex relaxation and a spatial branching scheme to handle the non-convexity of value function cuts. This leads to a spatial branch-and-cut algorithm that gradually cuts out bi-level infeasible points from the feasible region of the CNDP. To enhance computational performance, we outer-approximate nonlinear convex functions and use column generation to obtain a sequence of linear programs that can be solved relatively quickly. We show that, for a predefined epsilon, our spatial branch-and-price-and-cut algorithm converges to epsilon-optimality. Compared to prior work on exact methods for CNDP, we can find epsilon-optimal solutions for the same small test networks in much less time, or solve CNDP on problem instances based on networks that are two orders of magnitude larger than those used in the literature.

11:00
A Bilevel Optimization Model to Share Decarbonization Efforts between French Regions: Some Qualitative Results
PRESENTER: Antoine Marques

ABSTRACT. Since 2015, French regions have been responsible for reducing their own greenhouse gas (GHG) emissions. However, both a lack of coordination between the regions and a mismatch between the sum of the regional objectives and the national objective are observed. This calls for a tool that considers the different objectives of the State and the regions, their characteristics, global techno-economic efficiency, and social acceptance.

Our work uses a bilevel optimization model that provides decarbonization trajectories to 2050 by region and by sector. It relies on the model developed by P. Vogler et al. [1] but focuses on the residential sector. The model defines one leader - the State - and multiple followers - the regions - each with a different utility function. The State chooses its strategy first and anticipates the reaction of the regions, resulting in a Stackelberg equilibrium. To solve the problem, a KKT reformulation, linearization with the big-M method, and a branch-and-bound algorithm (CPLEX) are used. We add Tikhonov regularization, refine the "quality of service" term, and build a systematic verification of the KKT conditions.

The model gives the evolution of the political levers (thermal insulation, heating systems, sobriety) for each region, with regular trajectories. Several test cases show the realistic sensitivity of the model to constraints such as GHG emission quotas. They also show how differences in regional wealth can influence the transition. In our simulations, the global cost of the transition for the residential sector is lower, and the quality of service higher, than with a GDP-per-inhabitant criterion. The political levers must therefore be adapted to the characteristics of each region, such as their energy intensity.

A final conclusion: the quicker the investments, the cheaper the transition.

References

[1] Prunelle Vogler, Anatole Desthieux, and Quentin Petit. The Share of Decarbonization Efforts between French Regions: A Bilevel Optimization Model. PGMO Days 2024.

11:30
Trilevel Optimization for Demand Response in Electric Vehicle Charging Networks
PRESENTER: Rita Safi

ABSTRACT. This work presents a hierarchical decision-making framework for Demand Response (DR) involving three agents: the Electric Vehicle Aggregator (EVA), the Charging Station Operator (CSO), and the Electric Vehicle (EV). The EVA incentivizes the CSOs to participate in DR, i.e., to adjust their consumption during specific periods in response to an incentive payment, while the EVA ensures that the aggregated contribution of all CSOs in DR meets the requirement imposed by the distribution system operator. To adjust their consumption, CSOs design a price menu that encourages EVs to remain longer at the charging station at lower charging prices, thereby creating flexibility for load adjustment.

The problem is formulated as a trilevel optimization framework with a coupling constraint among the CSOs. The lower and intermediate levels are reformulated into a single Mixed-Integer Quadratic Program (MIQP), transforming the trilevel into a single-leader multi-follower game. Finally, solution approaches based on Stackelberg game algorithms are explored to address this problem.

10:30-12:30 Session 11J: Energy Systems
10:30
Estimating Maintenance Cost of Offshore Electrical Substations

ABSTRACT. France aims to deploy 45 GW of offshore wind capacity by 2050. Both ownership and maintenance of the offshore substations linking these farms to the grid are the responsibility of the French Transmission System Operator (TSO). In the event of an unscheduled substation shutdown, the TSO must pay significant penalties to producers. Failures when weather conditions prevent access to the substation can quickly snowball into huge losses.

Although the optimization of maintenance for onshore and offshore wind turbines has previously been studied, the specific challenges of offshore substations highlighted above have not been dealt with. The problem is modeled as a Markov Decision Process, where each state reflects the asset's degradation level and ongoing maintenance, and actions correspond to maintenance decisions. The objective is to optimize maintenance planning to minimize penalties, which are proportional to curtailed energy when capacity is limited. Weather scenarios are incorporated to account for their impact on both power production and maintenance feasibility. A multihorizon stochastic optimization model is introduced, featuring a bimonthly strategic horizon for advance planning and a daily operational horizon to capture the penalty dynamics under uncertain weather conditions. A significant challenge arises from the exponential growth of the state space with the number of components, making it impossible to solve the optimization problem exactly for real-life substation models. To address this, we propose an approximate solution approach that yields maintenance policies with strong performance.
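As a deliberately small illustration of the modeling ingredients (hypothetical costs and transition probabilities, a single component, and no weather process), a degradation MDP of this flavor can be solved by value iteration as follows:

import numpy as np

n_states = 4                      # 0 = new, ..., 3 = failed
actions = ["wait", "repair"]
p_degrade = 0.2                   # chance of degrading one level per period
repair_cost, penalty = 50.0, 200.0
gamma = 0.95

def step_cost(s, a):
    # Maintenance cost plus a penalty (curtailed-energy surrogate) in the failed state.
    return (repair_cost if a == "repair" else 0.0) + (penalty if s == 3 else 0.0)

def transition(s, a):
    # Returns a dict {next_state: probability}.
    if a == "repair":
        return {0: 1.0}
    if s == 3:
        return {3: 1.0}
    return {s: 1 - p_degrade, s + 1: p_degrade}

V = np.zeros(n_states)
for _ in range(500):              # value iteration on the expected discounted cost
    V = np.array([min(step_cost(s, a)
                      + gamma * sum(p * V[s2] for s2, p in transition(s, a).items())
                      for a in actions)
                  for s in range(n_states)])

policy = [min(actions, key=lambda a: step_cost(s, a)
              + gamma * sum(p * V[s2] for s2, p in transition(s, a).items()))
          for s in range(n_states)]
print(dict(zip(range(n_states), policy)), V.round(1))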

11:00
Bridging classical optimization and reinforcement learning in power systems: a value-based approach

ABSTRACT. Classical hydrothermal dispatch models determine optimal operation policies through Stochastic Dynamic Programming (SDP), where the Bellman value function represents the long-term opportunity cost of storable resources such as water or energy. The marginal value of storage is typically estimated via numerical differentiation of the value function with respect to storage levels, which requires explicit discretization of the state space. As power systems expand, adding reservoirs, batteries, and multi-area interactions, this discretization becomes increasingly burdensome and limits model scalability, particularly under realistic technical constraints and temporal coupling.

An alternative formulation based on Reinforcement Learning (RL) is presented, in which the agent is not trained to learn optimal dispatch actions directly. Instead, the agent is trained to estimate the value of storable resources, such as the opportunity cost of water in hydro reservoirs or energy in batteries. These learned value functions are subsequently used as input in deterministic optimization problems solved at each decision stage, ensuring feasibility and consistency with system constraints. Gradient-based policy algorithms, specifically Actor-Critic and Proximal Policy Optimization (PPO), are employed and adapted to focus on state value estimation rather than direct policy learning.

This hybrid approach retains the interpretability and robustness of classical optimization while incorporating the scalability and adaptability of modern learning techniques. Numerical experiments in medium-scale power systems confirm the feasibility of this value learning strategy and demonstrate its potential to significantly reduce computational requirements while maintaining high-quality operational outcomes.

This methodology is particularly suitable for systems of intermediate scale, such as the Uruguayan power system, where validation against exact methods remains feasible. As a key contribution, the proposed approach has been implemented and compared to traditional SDP-based methods using a realistic model of the Uruguayan system. The results highlight its capacity to replicate the economic signals of classical models while offering a more scalable and flexible framework. Furthermore, the proposed methodology provides a foundation for extensions in long-term planning, investment analysis that involves storage technologies, and operational strategies in non-deterministic renewable generation patterns.

11:30
Airports Energy Optimization with Optimal Control and AI

ABSTRACT. We address the problem of energy-efficient control of Heating, Ventilation, and Air Conditioning (HVAC) systems in airports, where they account for more than 40% of overall energy consumption. For this task, classical PID (Proportional Integral Derivative) controllers are generally used, but they do not explicitly account for energy consumption and lack predictive capability regarding future disturbances. As airports represent an especially demanding case due to large-scale nonlinear dynamics and exogenous factors such as weather and passenger flows, there is a need for new control algorithms dedicated to HVAC.

To do so, we formulate the task as a Constrained Rolling Horizon Optimal Control problem where the objective is to reduce energy use while ensuring comfort and respecting operational constraints. We focus on a direct multiple-shooting transcription to obtain a Nonlinear Program, which can then be solved via Sequential Quadratic Programming or Interior-Point methods, as they naturally handle constraints and scale to large systems. This provides a solid baseline compared with PID controllers, and our preliminary results confirm measurable gains in efficiency while maintaining comfort.

The main challenges lie in obtaining accurate information about the system dynamics and handling uncertainty. First, we investigate parametric estimation of equipment consumption in order to construct a cost function directly from operational data. Secondly, we consider grey-box approaches such as Autoregressive Models for interpretability and black-box parameterizations such as recurrent and graph-based Neural Networks to capture nonlinear spatio-temporal couplings between airport zones. In parallel, we study both statistical and learning-based approaches for uncertainty estimation. Among these, Variational Autoencoders (VAEs) appear as a promising direction to capture latent variability and generate probabilistic forecasts to be integrated into predictive control. We also plan to explore robust and stochastic model predictive control to compare worst-case guarantees with probabilistic estimation performance.

This work aims to contribute both methodologically and practically by establishing scalable strategies for embedding data-driven models into Optimal Control. We hope that the developed methods can generalize across airports with diverse topologies and operating conditions.

12:00
Implementation of a Multi-year Generation and Transmission Expansion Planning Problem in an Industrial Context
PRESENTER: Thomas Bittar

ABSTRACT. As part of its public service mission, RTE (Réseau de Transport d'Électricité), the French transmission system operator, carries out prospective studies to assess the security of supply of electricity and to enlighten national strategic orientations for the evolution of the power system.

Because prospective studies aim at defining an economically efficient and low-carbon generation mix together with appropriate network development, they involve solving generation and transmission expansion planning problems. For this task, RTE relies on an internally developed open-source software, Antares-Xpansion. Currently, Antares-Xpansion addresses a two-stage optimization problem over a one-year horizon, where an investment decision (first stage) is made at the outset, followed by system operation (second stage) on an hourly time step, across several scenarios that capture operational uncertainties (demand, hydro inflows, generator availability).

In practice, investment decisions are path-dependent: building the power system of 2050 requires decisions to be made and implemented already in 2030, 2040, and so forth, each of which impacts system operations at those horizons. Such trajectory constraints stem primarily from financial limitations and from industrial realities - Rome was not built in a day! Moreover, these decisions are made amid structural uncertainties concerning key parameters (electricity consumption, location of new renewable power plants), for which only ranges can be anticipated.

Currently, Antares-Xpansion can only address single-year problems. Thus, investment trajectories must be constructed manually by solving the model sequentially for different individual years. This procedure leads to a myopic representation, unable to anticipate investment needs.

To improve the process, we implement a multi-year investment model in Antares-Xpansion, where investment decisions are taken on a scenario tree. The focus is to show how an efficient implementation has been carried out, taking into account constraints from the industrial context. How can the Antares-Xpansion code that already handles the investment problem with a single first stage be reused efficiently? How were user interaction and data gathering extended from a single-horizon study to a multi-horizon study?

We also demonstrate how this implementation was validated using a realistic case study involving the resolution of an optimization problem with hundreds of millions of variables.

12:00-12:30 Session 12A: Kriging methods for PDEs
12:00
Optimal Linear Interpolation under Differential Information: application to prediction of perfect flows

ABSTRACT. Approximation of functions satisfying partial differential equations is paramount for simulation of physical fluid flows and other problems in physics. Recent studies related to physics-informed machine learning [1] have proven useful as a data-driven complement to numerical models for Partial Differential Equations (PDEs). However, their efficiency and convergence depend on the availability of vast training datasets. For sparse observations, Gaussian process regression or Kriging [2, 3] has emerged as a powerful interpolation model, offering principled estimates and uncertainty quantification. Several attempts have been made to adapt Gaussian processes conditioned on linear PDEs via artificial or collocation observations [4] and kernel design. These methods suffer from scalability issues in higher dimensions and limited generalizability. The aim of our study is to explore the extension of the Kriging predictor - the best linear unbiased predictor - in the presence of linear PDE information at finite collocation points. We propose two approaches: 1) A collocated co-Kriging [3, §24.2] with primary observations of the physical field and auxiliary differential observations; 2) A constrained Kriging optimization problem strongly satisfying linear PDE constraints at the points of prediction through a Lagrangian formulation. This work highlights a trade-off between the computational efficiency of the Lagrange multipliers approach and the strict interpolation of observations.
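The co-Kriging route in approach 1) rests on the standard fact that a Gaussian process remains jointly Gaussian with its image under a linear differential operator $\mathcal{L}$: for $Z\sim\mathrm{GP}(0,k)$,
\[
\mathrm{Cov}\big(Z(x),\,\mathcal{L}Z(y)\big)=\mathcal{L}_{y}\,k(x,y),
\qquad
\mathrm{Cov}\big(\mathcal{L}Z(x),\,\mathcal{L}Z(y)\big)=\mathcal{L}_{x}\mathcal{L}_{y}\,k(x,y),
\]
so auxiliary observations $\mathcal{L}Z(y_j)=f(y_j)$ at collocation points enter the usual co-Kriging system through these cross-covariances (a textbook identity recalled here for orientation; the precise estimators are those of the talk).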

12:00-12:30 Session 12B: More on Quantum computing
12:00
From Cables to Qubits: A Decomposed Variational Quantum Optimization Pipeline

ABSTRACT. The Cable Routing Optimization Problem (CROP) is a multi-flow routing task central to industrial layouts and smart manufacturing installations. We formulate CROP as a cable-wise separable, block-diagonal Quadratic Unconstrained Binary Optimization problem (QUBO) and derive conservative penalty bounds that preserve feasibility. Exploiting this structure, we introduce a decomposition pipeline that builds one QUBO per cable, transforms each QUBO into a Hamiltonian, and solves the subproblems with the Variational Quantum Eigensolver (VQE). Finally, the per-cable solutions are merged into a global routing assignment. This procedure reduces the per-run qubit count from the full problem size to that of a single cable subproblem. We test our approach on cable routing optimization problems of varying size using Qiskit's SamplingVQE. Our findings indicate that the decomposed VQE approach attains feasible, and optimal, layouts over a range of cable-routing problems.
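To make the decomposition step concrete, here is a minimal sketch on made-up data; an exhaustive search stands in for the VQE subroutine, and no Qiskit-specific code is shown.

import itertools
import numpy as np

def solve_qubo_bruteforce(Q):
    # Minimize x^T Q x over x in {0,1}^n (placeholder for a VQE subroutine).
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

rng = np.random.default_rng(0)
# Block-diagonal structure: one independent QUBO block per cable.
cable_blocks = [rng.normal(size=(4, 4)) for _ in range(3)]
cable_blocks = [(B + B.T) / 2 for B in cable_blocks]

solution = {}
for cable_id, Q in enumerate(cable_blocks):
    x, val = solve_qubo_bruteforce(Q)
    solution[cable_id] = x        # merge per-cable routings into a global assignment
    print(f"cable {cable_id}: x = {x}, cost = {val:.3f}")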

12:00-12:30 Session 12C: Lyapunov functions
12:00
A Parametric Optimization Approach To Comparison Functions

ABSTRACT. In this presentation, we consider an extension of Hahn's construction of comparison functions. Comparison functions serve to prove stability in the theory of dynamical systems and aim to compare positive definite functions with norms. Hahn established that continuous positive definite functions are equivalent (from the comparison-function point of view) to norms. The key idea of Hahn's proof is the construction of two functions written as the optimal values of parametric optimization problems. The parametric constraint set for the first function is a closed ball (centered at 0) with variable radius; the second parametric constraint set is the complement of the open ball (centered at 0) with variable radius. We propose to study the comparison-function properties of a generalization of this construction, considering a general function instead of the norm in the parametric constraint sets. We also discuss the potential benefits of this approach for approximating optimization problems.
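For orientation, Hahn's classical construction (with the norm, which the talk proposes to replace by a general function) reads
\[
\underline{\alpha}(r)=\inf_{\|x\|\ge r}V(x),
\qquad
\overline{\alpha}(r)=\sup_{\|x\|\le r}V(x),
\qquad\text{so that}\qquad
\underline{\alpha}(\|x\|)\le V(x)\le\overline{\alpha}(\|x\|)
\]
for a continuous positive definite $V$, with $\underline{\alpha}$ and $\overline{\alpha}$ the optimal values of the two parametric problems mentioned above.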

12:00-12:30 Session 12D: Uncertainty quantification for Energy Applications
12:00
A probabilistic approach for a local day-ahead network congestion signal
PRESENTER: Alexandre Marié

ABSTRACT. In order to accelerate the connection of renewable energies and flexibility resources while limiting costs for the community, RTE--the French transmission system operator--has adopted a strategy of optimal grid sizing, which consists of operating the grid as close as possible to its limits. To this end, RTE aims to better anticipate congestion risks at the local level on D-1.

Short-term network security analysis studies generally involve considering the best forecasts for consumption and renewable energy production (wind and solar) on each bus and simulating the network, with load flow calculations and sensitivity analysis, in the states N ("pre-contingency") and N-1 or N-k ("post-contingency"). The result is a deterministic value for the flows on each line, for each state, which can be compared with the Permanent Admissible Transmission Loading (PATL). In this paper, we propose to construct a D-1 congestion signal using a new security analysis approach consisting of calculating a probability distribution of flows for each line and each network state, taking into account historical forecast errors (consumption, wind and solar).

The proposed approach consists in identifying, for each monitored branch, the influential load or renewable generation assets, on which we apply an uncertainty model based on historical forecast errors and centered on our best forecast. The resulting probability distributions are then combined, with a probabilistic DC load flow approach using Monte Carlo simulations and the Schaake Shuffle technique, to estimate a probability distribution of the flow. The position of the branch PATL relative to this distribution is then used to construct the congestion signal.

This approach was evaluated using historical data from an area in Picardie (France) where 46 MW of wind energy is connected. By comparing our congestion signal calculated based on the D-1 grid forecast with actual flows, we were able to assess the relevance of our approach for this use case.
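A schematic numerical sketch of the probabilistic flow computation described above (all names, sensitivities and error distributions are illustrative placeholders, not RTE data):

import numpy as np

rng = np.random.default_rng(42)

# Best D-1 forecasts for the influential assets on a monitored branch [MW]
# and their DC sensitivities (PTDF-like coefficients) for a given network state.
sensitivity = np.array([0.45, 0.30, -0.20])    # hypothetical sensitivities
flow_forecast = 12.0                            # deterministic base flow [MW]
patl = 40.0                                     # Permanent Admissible Transmission Loading [MW]

# Uncertainty model centred on the best forecast, calibrated on historical
# forecast errors (a plain Gaussian stands in for the empirical model here;
# the Schaake Shuffle would be used to restore spatio-temporal dependence).
error_std = np.array([6.0, 3.5, 4.0])
n_samples = 10_000
errors = rng.normal(0.0, error_std, size=(n_samples, 3))

# Probabilistic DC load flow: propagate sampled errors through the sensitivities.
flow_samples = flow_forecast + errors @ sensitivity

# Congestion signal: probability that the flow exceeds the PATL.
p_congestion = np.mean(np.abs(flow_samples) > patl)
print(f"P(|flow| > PATL) = {p_congestion:.3f}")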

12:00-12:30 Session 12E: Mixed integer non-linear optimization and application
12:00
On Constructing Optimal Risk Scores via Mathematical Optimization

ABSTRACT. The widespread adoption of Machine Learning (ML) in sensitive domains such as healthcare, criminal justice, and policy-making has emphasized the importance of interpretability for fostering trust [Rudin et al. 2022] and meeting regulatory requirements [Goodman and Flaxman 2017]. Mathematical Optimization has proven to be a powerful approach to designing interpretable ML models, as is the case for Risk Scores [Ustun and Rudin 2019, Liu et al. 2022]. Risk Scores—simple linear classifiers based on logistic regression—stand out for their transparency since users can easily compute predictions with integer additions.

Recent work [Molero-Río and D'Ambrosio 2025] proposes a convex Mixed-Integer Non-Linear Optimization (MINLO) formulation to handle the construction of risk scores. However, the resulting MINLO problems are computationally demanding, due to their large number of integer variables and constraints, thus posing a major challenge for existing solvers even for small-sized instances. In this talk, we will revisit the work in [Molero-Río and D'Ambrosio 2025] and propose alternative solution strategies. In particular, we will leverage the convexity of the original problem and explore an outer-approximation method. The practical relevance of our approach is tested in both synthetic and real-world datasets.
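As a reminder of the outer-approximation principle mentioned above (generic form for a convex, differentiable objective $f$ over a mixed-integer feasible set $X$; the risk-score-specific constraints are omitted):

\[
\min_{x \in X,\; \eta \in \mathbb{R}} \;\; \eta
\quad \text{s.t.} \quad
\eta \;\ge\; f(x^{(k)}) + \nabla f(x^{(k)})^{\top}\,(x - x^{(k)}),
\qquad k = 1,\dots,K,
\]

where each iterate $x^{(k)}$ adds a new gradient cut, so the piecewise-linear master underestimates $f$ and the alternation between the mixed-integer master and the continuous subproblem converges for convex MINLO.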

12:00-12:30 Session 12F: Linear Matrix Inequalities
12:00
Solving Parametric Linear Matrix Inequalities and Convergence Analysis in Optimization
PRESENTER: Weijia Wang

ABSTRACT. We consider linear matrix inequalities (LMIs) A = A₀ + x₁A₁ + ... + xₙAₙ ≥ 0 with the Aᵢ's being m×m symmetric matrices, with entries in a ring R. When R = ℝ, the feasibility problem consists in deciding whether the xᵢ's can be instantiated to obtain a positive semidefinite matrix. When R = ℚ[y₁, ..., yₜ], the problem asks for a formula on the parameters y₁, ..., yₜ, which describes the values of the parameters for which the specialized LMI is feasible. This problem can be solved using general quantifier elimination algorithms, with a complexity that is exponential in n. In our paper, we leverage the LMI structure of the problem to design an algorithm that computes a formula Φ describing a dense subset of the feasible region of parameters, under genericity assumptions. The complexity of this algorithm is exponential in n, m and t but becomes polynomial in n when m and t are fixed. We apply the algorithm to a parametric sum-of-squares problem and to the convergence analyses of certain first-order optimization methods, which are both known to be equivalent to the feasibility of certain parametric LMIs, hence demonstrating its practical interest.
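A toy illustration of the parametric setting (not taken from the paper): with one unknown $x_1$ and one parameter $y_1$,

\[
A(x_1; y_1) \;=\;
\begin{pmatrix}
x_1 & y_1\\
y_1 & 1
\end{pmatrix} \succeq 0
\quad\Longleftrightarrow\quad
x_1 \ge y_1^2 ,
\]

so in this example the output formula $\Phi$ would state that the specialized LMI is feasible for every value of the parameter $y_1$.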

12:00-12:30 Session 12G: Complementarity Problems
Chair:
12:00
A robust method for complementarity problems using a geometric viewpoint

ABSTRACT. This talk deals with algorithms to solve complementarity problems (CPs) of the form $0 \le F(x) \perp G(x) \ge 0$ (1), where F and G are smooth. The reformulation by C-functions transforms (1) into a system H(x) = 0. These approaches benefit from good local convergence properties (one linear system per iteration). Global convergence has been the topic of many contributions, particularly those using the Fischer C-function. Indeed, the associated merit functions are often differentiable, which allows the algorithms to use their gradient if/when needed. Another C-function is the minimum, though its nonsmoothness hinders its globalization. However, its properties (finite termination for LCPs, smaller differential, good performance) motivate its study. In general, simply solving a linear system may not yield a descent direction of the least-square merit function when far from a solution. To ensure this, a recent method proposes to find a direction in a polyhedron instead of solving a linear system. This method can only converge to points verifying some regularity conditions, as the standard Newton method. To prevent this drawback, we propose a weighted Levenberg-Marquardt approach which reaches a weakly stationary point. Naturally, since the considered problem is nonsmooth nonconvex and without assumptions, such a result requires some combinatorial operations, generally of small size – thus they should not be a major difficulty in practice. These are related to the combinatorial geometry of zonotopes.
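To fix notation, a sketch of the minimum reformulation and of the type of step discussed above (standard forms, with symbols chosen here for illustration):

\[
H(x) \;=\; \min\big(F(x),\,G(x)\big) \ \text{(componentwise)},
\qquad
\theta(x) \;=\; \tfrac12\,\|H(x)\|^2 ,
\]
\[
\big(J(x)^{\top} J(x) + \lambda\, W(x)\big)\, d \;=\; -\,J(x)^{\top} H(x),
\]

where $J(x)$ is an element of the generalized Jacobian of $H$ at $x$, $\lambda>0$ is the Levenberg-Marquardt parameter, and $W(x)$ is a weighting matrix; a solution of (1) is exactly a zero of $H$, hence a global minimizer of the least-squares merit function $\theta$.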

12:30-14:00Lunch Break
15:00-15:30Coffee Break
15:30-17:30 Session 14A: Invited session: Optimal geometric control
15:30
A Counterexample to Smoothness of Energy-Minimizers in Sub-Riemannian Geometry

ABSTRACT. In this talk, we present some recent results concerning the regularity of the curves minimizing the sub-Riemannian energy, also called sub-Riemannian geodesics (they are also length-minimizing curves parameterized by arc-length). In particular, we discuss the first example of a non-smooth geodesic, which disproves a longstanding open conjecture. The example we exhibit is a length-minimizer of class $C^2$ but not $C^3$ for a real-analytic (even polynomial) sub-Riemannian structure. Moreover, we can prove that the sub-Riemannian structure of the counterexample belongs to a class of polynomial sub-Riemannian structures for which geodesics are of class $C^2$.

16:00
Finite-Time Stabilization via Optimal Control Techniques
PRESENTER: Weihao Weng

ABSTRACT. Given a linear control system $\dot x = Ax+Bu$, every asymptotically stabilizing feedback controller $u=Kx$ may be obtained by solving an optimal control problem with quadratic integral cost, i.e., the matrix $K$ is such that $A+BK$ is Hurwitz if and only if the feedback $u=Kx$ solves a linear-quadratic problem associated with the linear control system.

On the other hand, (nonlinear) finite-time stabilizing feedback controllers for linear control systems have been obtained in the framework of sliding mode control. The expression of these feedbacks relies on scalar gains which are not explicitly known.

The present work is an attempt to determine these gains through an inverse optimal control approach, in analogy with the linear-quadratic case. For this purpose, we consider a specific class of optimal control problems (OCP) whose data satisfy some general assumptions. Existence (and uniqueness) of a minimizer $u^*$ for (OCP) leads to the characterization of the associated optimal trajectory $x^*$ using the maximum principle.

We next leverage the homogeneity of the cost function and, by extension, of the associated value function to establish the finite-time convergence of $x^*$ toward the origin. Finally, after showing regularity properties of the value function, we prove that it satisfies a Hamilton-Jacobi-Bellman equation on $\mathbb{R}^n$.
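The forward direction of the linear-quadratic fact recalled in the first paragraph can be checked numerically; a small sketch with arbitrary matrices, using SciPy's continuous algebraic Riccati equation solver:

import numpy as np
from scipy.linalg import solve_continuous_are

# A toy controllable pair (A, B) and quadratic weights Q, R.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve A'P + PA - P B R^{-1} B' P + Q = 0 and form the LQ feedback u = Kx.
P = solve_continuous_are(A, B, Q, R)
K = -np.linalg.solve(R, B.T @ P)

# A + BK is Hurwitz: all closed-loop eigenvalues have negative real part.
eig = np.linalg.eigvals(A + B @ K)
print(K, eig.real)
assert np.all(eig.real < 0)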

16:30
Control of Collision Orbits
PRESENTER: Riccardo Daluiso

ABSTRACT. In the n-body problem, the forces acting between particles approach infinity when the mutual distances approach zero. Therefore, at collision the dynamics has singularities. Since Levi-Civita and Sundman, the double collision has been regularized, i.e. the singularity has been made to disappear by means of algebraic transformations. Levi-Civita first obtained a regularizing transformation of the Kepler problem based on the inverse of the map z → z² of the complex plane. This conformal map sends the orbits of the harmonic oscillator onto those of the Kepler problem. The coordinate transformation is coupled with a slowing down of the motion by means of a time change defined by the differential relation $d\tau = dt/|z|$. Based on these classical results, the purpose of this work is the application of regularization theory to optimal control. We consider the control of a spacecraft under the attraction of a celestial body, where the control is the thrust, with the goal of minimizing the physical time. This system exhibits singularities at zero, and thus the theory fails in the study of collision orbits. By applying the Levi-Civita regularization to the controlled system, we obtain an affine system in the new time τ whose vector fields are polynomial, hence C^ω. In the reparametrized system, we can broaden the concept of controllability at collision and apply Pontryagin's principle by allowing orbits that collide or come arbitrarily close to the center of attraction. The extension of the procedure to three dimensions is not straightforward, as it encounters algebraic and topological obstructions. To extend the study to the spatial case, we need to make use of the so-called KS regularization. However, the KS-map is based on the Hopf fibration, so it is a transformation R^4 → R^3, and to each point of the Keplerian orbit corresponds an S^1-orbit in the reparametrized system. This entails the addition of one dimension to the phase space, which is now seven-dimensional.
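For reference, the planar Levi-Civita regularization in the uncontrolled case (classical statement; the controlled, polynomial system of the talk is obtained by applying the same substitution to the thrusted dynamics):

\[
\ddot z = -\,\frac{z}{|z|^{3}}, \qquad
z = w^{2}, \qquad
\frac{dt}{d\tau} = |z| = |w|^{2}
\qquad\Longrightarrow\qquad
w'' = \frac{h}{2}\, w,
\]

where $'$ denotes $d/d\tau$ and $h=\tfrac12|\dot z|^{2}-1/|z|$ is the (conserved) Kepler energy, so that for $h<0$ the regularized motion is that of a harmonic oscillator.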

17:00
Chattering in Homogeneous Linearly Controlled Systems
PRESENTER: Nicola Paddeu

ABSTRACT. The phenomenon of chattering appears in various areas of applied optimal control, as for example quantum control, medicine, and aerospace. We say that an optimal trajectory chatters if it is the unique optimal trajectory joining two points and its control has infinitely many points of discontinuity. We investigate the problem of chattering in homogeneous linearly controlled systems, with control taking values in a polyhedron. For such systems, we give a quantitative bound on the number of times that an optimal control can switch faces of the polyhedron. As a consequence, we show that the control of an optimal trajectory cannot switch infinitely many times between two different faces. Our argument is based on sub-differential calculus and Lie group theory.

15:30-17:00 Session 14B: Invited session: Beyond Euclidean Convexity: Methods and Algorithms II
15:30
Conditional Infimum, Hidden Convexity and One-Sided Linear Couplings
PRESENTER: Michel De Lara

ABSTRACT. This talk is part of the invited session "Beyond Euclidean Convexity: Methods and Algorithms" (organizers: Pierre-Cyril Aubin and Michel De Lara).

Detecting hidden convexity is one of the tools to address nonconvex minimization problems, and find global minimizers. We introduce the notion of conditional infimum, develop the theory, and establish a tower property, relevant for minimization problems. We connect the conditional infimum with one-sided linear and subadditive couplings. Then, we illustrate how the conditional infimum is instrumental in revealing hidden convexity. Thus equipped, we provide a new sufficient condition for hidden convexity in nonconvex quadratic minimization problems, that encompasses and goes beyond known results (with the notion of blocked-sign pair matrix-vector). We also show how the conditional infimum is especially adapted to tackle the so-called S-procedure.

16:00
A Characterization of Capra-Convex Sets
PRESENTER: Adrien Le Franc

ABSTRACT. We focus on a specific form of abstract convexity known as Capra-convexity, where a constant along primal rays (Capra) coupling replaces the scalar product used in standard convex analysis to define generalized Fenchel-Moreau conjugacies. A key motivating result is that the l0 pseudonorm --- which counts the number of nonzero components in a vector --- is equal to its Capra-biconjugate. This implies that l0 is a Capra-convex function, highlighting potential applications in statistics and machine learning, particularly for enforcing sparsity in models. Building on prior work characterizing the Capra-subdifferential of l0 and the role of source norms in defining the Capra-coupling, we present in this talk a characterization of Capra-convex sets.
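As we recall it (with the Euclidean norm as source norm), the Capra coupling and the motivating biconjugacy result read as follows:

\[
c(x, y) \;=\;
\begin{cases}
\dfrac{\langle x, y\rangle}{\|x\|}, & x \neq 0,\\[1ex]
0, & x = 0,
\end{cases}
\qquad\qquad
\ell_0 \;=\; \big(\ell_0\big)^{cc'} ,
\]

i.e. the $\ell_0$ pseudonorm coincides with its Capra-biconjugate, which is the sense in which it is Capra-convex.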

16:30
One-Sided Linear Couplings with Application to Sparse Optimization

ABSTRACT. When a one-sided linear (OSL) coupling is substituted for the usual duality product in the Fenchel-Moreau conjugacy, OSL-convex functions are substituted for the (closed) convex functions. OSL-convexity comes with generalizations of the usual subdifferential and of the associated minimization algorithms from usual convex optimization (Fermat's rule, proximal methods, cutting plane methods, ...). In fact, it has been proven that OSL-convex functions are convex composite. Algorithms and optimality conditions for the minimization of convex composite functions have been substantially studied in the literature. Here, we present OSL-convexity and study the difficulties raised by the OSL-minimization of the notorious sparsity-inducing l0-pseudonorm, which happens to be OSL-convex for the so-called Capra coupling.

15:30-17:00 Session 14C: Invited session: Dynamic approaches to saddle point problems
15:30
A Dynamical Approach to Solving Smooth Saddle Point Problems
PRESENTER: Zepeng Wang

ABSTRACT. We develop two inertial primal dual dynamical systems to solve smooth saddle point problems with bilinear coupling. The systems are developed based on Nesterov's method, and include both time scaling and Hessian-driven damping. We derive a fast sublinear convergence rate for convex-concave functions, and an accelerated linear convergence rate for strongly convex-strongly concave functions, in terms of the primal-dual gap. We also apply our results to a constrained convex optimization problem with linear equality constraints. These results may provide insights into the design of inertial primal dual algorithms.

16:00
Preconditioned Primal Dual Algorithms in Convex Optimization
PRESENTER: Huiyuan Guo

ABSTRACT. We study a family of preconditioned primal-dual algorithms for convex-concave saddle point problems derived from the dynamics introduced in [Apidopoulos et al., 2025]. For algorithms of proximal type, which take a simple form in the case of linearly constrained problems, we establish a non-ergodic linear convergence rate. These algorithms differ from the primal-dual algorithms of [Chambolle and Pock, 2011] and [Luo, 2022]. For algorithms of gradient type, or with a gradient-type subiteration, we establish a non-ergodic convergence rate of $\mathcal{O}(1/k)$ and show that the sequence of iterates converges weakly to a primal-dual optimal solution. Finally, we present numerical experiments illustrating the good performance of our preconditioned primal-dual algorithms.

16:30
Preconditioned primal-dual dynamics in convex optimization: non-ergodic convergence rates
PRESENTER: Cesare Molinari

ABSTRACT. We introduce and analyze a continuous primal-dual dynamical system in the context of the minimization problem f(x) + g(Ax), where f and g are convex functions and A is a linear operator. In this setting, the trajectories of the Arrow-Hurwicz continuous flow may not converge, accumulating at points that are not solutions. Our proposal is inspired by the primal-dual algorithm, where convergence and splitting on the primal-dual variable are ensured by adequately preconditioning the proximal-point algorithm. We consider a family of preconditioners, which are allowed to depend on time and on the operator A, but not on the functions f and g, and analyze asymptotic properties of the corresponding preconditioned flow. Fast convergence rates for the primal-dual gap and optimality of its (weak) limit points are obtained, in the general case, for asymptotically antisymmetric preconditioners, and, in the case of linearly constrained optimization problems, under milder hypotheses. Numerical examples support our theoretical findings, especially in favor of the antisymmetric preconditioners.
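For the linearly constrained special case mentioned above, $\min_x f(x)$ subject to $Ax=b$, the unpreconditioned Arrow-Hurwicz flow referred to in the abstract reads (standard form):

\[
\dot x(t) \;=\; -\,\nabla_x L\big(x(t),\lambda(t)\big) \;=\; -\,\nabla f\big(x(t)\big) - A^{\top}\lambda(t),
\qquad
\dot\lambda(t) \;=\; \nabla_\lambda L\big(x(t),\lambda(t)\big) \;=\; A\,x(t) - b,
\]

with $L(x,\lambda)=f(x)+\langle \lambda, Ax-b\rangle$; the preconditioned dynamics of the talk modify this flow through a (possibly time-dependent) preconditioner acting on the primal-dual variable.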

15:30-17:30 Session 14D: Unit Commitment problems
15:30
QUBO for the Min-Up/Min-Down UCP: Penalty Tuning and Decomposition Approach
PRESENTER: Enzo Soler

ABSTRACT. In the context of the energy transition and the growing complexity of power systems, the Unit Commitment Problem (UCP) remains a central challenge in operational planning. It involves determining the optimal schedule for activating power generation units to meet the electricity demand while minimizing operational costs. Each unit must satisfy minimum uptime (resp. downtime) constraints. The Min-up/min-down UCP (MUCP) is strongly NP-hard. Some more technical constraints also have to be taken into account, e.g. ramp constraints. In industrial practice, the UCP is typically addressed using Lagrangian relaxation techniques to decompose the problem into independent subproblems, thus significantly improving computational performance and enabling parallel computing. This work investigates the potential of a quantum variational algorithm to address the MUCP through decomposition techniques, leveraging Quadratic Unconstrained Binary Optimization (QUBO) formulations to solve subproblems efficiently. First, we conduct a comparative analysis of penalty coefficient determination methods from the literature. Then we propose a novel hierarchical and normalized penalization strategy tailored to MUCP instances, thus extending existing methods to handle multiple types of constraints. Column generation is considered as an alternative decomposition strategy for solving the MUCP. Integrating quantum computing into this framework raises several interesting questions. On the one hand, quantum solvers may prematurely terminate the column generation process due to suboptimal column selection. On the other hand, their Hamiltonian-based design inherently promotes solution diversity, thus yielding a pool of columns that may still benefit the master problem. A sensitivity analysis of quantum annealing to pricing signals reveals threshold behaviors that characterize the solver's response to variations in dual values. Understanding these behaviors is crucial for designing efficient hybrid decomposition strategies.
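A minimal sketch of the penalization step that the tuning strategies above operate on (illustrative only: it converts a linear equality constraint Ax = b into a quadratic penalty added to a QUBO, with a single coefficient rho playing the role of the penalty to be calibrated):

import numpy as np

def add_equality_penalty(Q, c, A, b, rho):
    """Fold the constraint Ax = b into the QUBO min x'Qx + c'x by adding
    rho * ||Ax - b||^2, expanded into quadratic and linear binary terms."""
    Q_pen = Q + rho * (A.T @ A)
    c_pen = c - 2.0 * rho * (A.T @ b)
    offset = rho * float(b @ b)        # constant term, irrelevant to the argmin
    return Q_pen, c_pen, offset

# Toy example: two binary variables, constraint x0 + x1 = 1 (choose exactly one).
Q = np.array([[-1.0, 0.0],
              [ 0.0, -2.0]])           # original objective (to minimize)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

Q_pen, c_pen, offset = add_equality_penalty(Q, c, A, b, rho=5.0)

# Check by enumeration that the penalized minimizer is feasible.
best = min(((x0, x1) for x0 in (0, 1) for x1 in (0, 1)),
           key=lambda x: np.array(x) @ Q_pen @ np.array(x) + c_pen @ np.array(x))
print(best)   # expected: (0, 1), which satisfies x0 + x1 = 1

Choosing rho too small leaves the unconstrained minimizer infeasible, while choosing it too large distorts the energy landscape seen by the quantum solver, which is precisely the trade-off that the penalty tuning methods compared in the talk address.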

16:00
Enhanced Benders' Decomposition for Stochastic Unit Commitment Using Interval Variables
PRESENTER: Mathis Azema

ABSTRACT. The Unit Commitment (UC) problem is a fundamental challenge for energy producers and network operators aiming to determine an optimal generation schedule for electricity production units, such as thermal and hydro units, while adhering to technical and economic constraints. A major source of complexity in UC arises from uncertainty, particularly in forecasting energy demand and wind power generation. To address this, decomposition strategies such as Lagrangian relaxation and Benders' decomposition have been studied to efficiently handle many scenarios.

Recently, a new formulation of the deterministic UC problem based on interval variables has been proposed to better model unit production decisions. Building on this approach, we introduce a novel Benders' decomposition that leverages interval variables to generate stronger cuts, enabling convergence in a small number of iterations. Through computational experiments, we demonstrate that our method significantly improves solution efficiency, enabling the resolution of large-scale instances. In particular, we show that our approach scales effectively for two-stage stochastic UC with many scenarios and performs efficiently under both a risk-neutral framework and an average value-at-risk measure.

16:30
Modelling Dynamical Constraints and Maintenance within Aggregated Unit Commitment Problem using Mixed Integer Linear Programming
PRESENTER: Hugo Thomas

ABSTRACT. In this presentation, a long-term dispatch problem is formulated as a unit commitment model that explicitly embeds the dynamic constraints of thermal power units - minimum operating power, minimum up/down times, start-up costs and ramp limits - which determine short-term flexibility. To contain the combinatorial growth of binary decisions over a week at hourly resolution and on a European spatial scale, similar units are grouped into thermal aggregates and the problem is expressed with aggregated variables. Assuming homogeneity and the absence of maintenance windows, the aggregated schedules can be disaggregated to unit-level solutions using the integer decomposability property. Our analysis pinpoints the main sources of non-disaggregability in realistic models - ramp interactions, maintenance windows and heterogeneity - and suggests tractable approximation strategies.

A numerical study is proposed, which uses the European Resource Adequacy Assessment (ERAA) dataset (ENTSO-E harmonised projections). This publicly available dataset provides 10-year (typically 2025-2035 for current editions) projections of demand, installed capacities, renewable capacity factors and interconnection capacities, alongside multiple historical climate scenarios. It makes it possible to assess the future adequacy between the electricity supply and demand, to carry out studies on the flexibility of the European electricity system under meteorological variability, and to size the interconnections between European countries.

We built a Python platform implementing multi-zone aggregated unit commitment over one year. The platform is designed to serve as a compact test bench to evaluate aggregation/disaggregation schemes and could be used to address other research questions.

17:00
An Efficient and Tight Model for Nuclear Power Plant Constraints in Flexible Operation
PRESENTER: Francisco Labora

ABSTRACT. Nuclear power plants have historically been inflexible generators, designed for baseload operation, generating a steady output of high power at low variable cost. This conflicts with the flexibility demands that high renewable penetration imposes on power systems. However, it is possible, as some countries like France demonstrate daily, to operate nuclear power plants in a more flexible manner to perform load-following operations. Due to the unique characteristics of nuclear reactions as heat sources, reactors are subject to certain constraints, such as Xenon transients, which require specific modeling to represent their behavior. We present a tight formulation for this constraint, which enhances existing literature formulations. We also present novel formulations for modeling the minimum stable time problem with more nuance. We finally provide a case study based on the RTS-GMLC to show the increase in computational efficiency of our improved formulation and the advantages presented by our novel formulations.

15:30-17:30 Session 14E: Non-convex continuous optimization
15:30
Delayed Feedback in Online Non-Convex Optimization: A Non-Stationary Approach with Applications
PRESENTER: Cristian Vega

ABSTRACT. We study non-convex online optimization problems with delay and noise by evaluating dynamic regret in the non-stationary setting when the loss functions are quasar-convex. In particular, we consider scenarios involving quasar-convex functions either with Lipschitz gradients or with weak smoothness, and in each case, we establish bounded dynamic regret in terms of cumulative path variation, achieving sub-linear rates. Furthermore, we illustrate the flexibility of our framework by applying it to both theoretical settings, such as zeroth-order (bandit), and practical applications with quadratic fractional functions. Moreover, we provide new examples of non-convex functions that are quasar-convex by proving that the class of differentiable strongly quasiconvex functions is strongly quasar-convex on convex compact sets. Finally, several numerical experiments validate our theoretical findings, illustrating the effectiveness of our approach.

16:00
Strongly Quasiconvex Functions: What We Know (so far)
PRESENTER: Sorin-Mihai Grad

ABSTRACT. Introduced by Polyak in 1966, the class of strongly quasiconvex functions includes some interesting nonconvex members, like the square root of the Euclidean norm or ratios with a nonnegative strongly convex numerator and a concave and positive denominator. In this talk we survey the most relevant examples of strongly quasiconvex functions and the results involving them available in the literature at the moment. In particular, we recall some recent algorithms for minimizing such functions, and point toward some directions where additional investigations would be welcome.
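For completeness, one common formulation of Polyak's definition (constants may be normalized differently in the literature): $f$ is strongly quasiconvex with modulus $\beta>0$ on a convex set $K$ if

\[
f\big(\lambda x + (1-\lambda) y\big) \;\le\; \max\{f(x), f(y)\} \;-\; \lambda(1-\lambda)\,\frac{\beta}{2}\,\|x - y\|^{2}
\qquad \forall\, x, y \in K,\;\; \lambda \in [0,1].
\]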

16:30
Toward L0 solution path via L0 Bregman relaxations
PRESENTER: Mhamed Essafri

ABSTRACT. We study optimization problems with L0 regularization, which are challenging due to their non-convex and discontinuous nature, making them NP-hard. Building on recent advances in L0 Bregman relaxations (B-rex), we propose L0PathBrex, an algorithm to efficiently compute solution paths across different sparsity levels. The method exploits B-rex to design effective warm-start strategies and progressively refine solutions. Experiments and comparisons with state-of-the-art methods, such as L0Learn, demonstrate the accuracy and efficiency of the proposed approach.

17:00
Optimization landscape of L0-Bregman relaxations

ABSTRACT. In this talk, we study (noisy) linear systems, and their L0-regularized optimization problems, coupled with non-quadratic fidelity terms. Recent successful approaches for solving this class of problems have proposed to consider non-convex exact continuous relaxations that preserve global minimizers while reducing the number of local minimizers. Within this framework, we consider the class of L0-Bregman relaxations, and establish sufficient conditions under which a critical point is isolated in terms of sparsity, meaning that any other critical point must have strictly larger cardinality, ensuring a form of uniqueness in the solution structure. Furthermore, we analyze the exact recovery properties of such exact relaxations. To that end, we derive conditions under which the oracle solution (i.e., the one sharing the same support as the ground-truth) is the unique global minimizer of the relaxed problem, and is isolated in terms of sparsity. Our analysis is primarily built upon a novel property we introduce, termed Bregman Restricted Strong Convexity. Finally, we specialize our general results to both sparse linear (least-squares) and Poisson ((generalized) Kullback–Leibler divergence) regression problems. In particular, we show that our general analysis sharpens existing bounds for the LS setting, while providing an entirely new result for the KL case.

15:30-17:30 Session 14F: Evolutionary algorithms and black-box optimization - part II
15:30
Improved Runtime Guarantees for the SPEA2 Multi-Objective Optimizer
PRESENTER: Milan Stanković

ABSTRACT. Together with the NSGA-II, the SPEA2 is one of the most widely used domination-based multi-objective evolutionary algorithms. For both algorithms, the known runtime guarantees are linear in the population size; for the NSGA-II, matching lower bounds exist. With a careful study of the more complex selection mechanism of the SPEA2, we show that it has very different population dynamics. From these, we prove runtime guarantees for the OneMinMax, LeadingOnesTrailingZeros, and OneJumpZeroJump (OJZJ) benchmarks that depend less on the population size. For example, we show that the SPEA2 with parent population size $\mu \ge n - 2k + 1$ and offspring population size $\lambda$ computes the Pareto front of the OJZJ benchmark with gap size $k$ in an expected number of $O( (\lambda+\mu)n + n^{k+1})$ function evaluations. This shows that the best runtime guarantee of $O(n^{k+1})$ is not only achieved for $\mu = \Theta(n)$ and $\lambda = O(n)$, but for arbitrary $\mu, \lambda = O(n^k)$. Thus, choosing suitable parameters -- a key challenge in using heuristic algorithms -- is much easier for the SPEA2 than for the NSGA-II.

16:00
Dynamic Algorithm Configuration: Challenges and Opportunities

ABSTRACT. When faced with an optimization problem, we want to utilize the most suitable optimizer for this particular problem. However, in the black-box scenario, this choice can be difficult to make as no or little information can be assumed about the problem at hand. State-of-the-art algorithm selection approaches sacrifice a fraction of their evaluation budget to collect information on the problem landscape, which is then used to select the appropriate solver for the given instance.

In per-run algorithm selection, this cost is reduced by running an initial (default) algorithm for a short evaluation budget, and then using its trajectory to calculate landscape features. This way, the traditional random sampling step of feature-based algorithm selection is replaced by an optimization algorithm.

Extending this per-run principle to a method that performs the selection at any stage of the optimization process, we can create a fully dynamic algorithm selection (DynAS) procedure.

In this talk, we will outline the general principles behind DynAS, starting from a data-driven motivation of its potential. From there, we will showcase how efforts to achieve this potential have had mixed results and outline several challenges towards achieving robust DynAS pipelines, among them (1) the choice of appropriate features to characterize the problem landscapes and (2) the design of effective warm-starting methods to transfer information from one algorithm to the next.

16:30
A Knowledge Management Open Science Approach for Black-Box Optimization Benchmarking

ABSTRACT. The black-box optimization research community has recently identified and made explicit in scientific fora (e.g. BeMCO 2024 [1], ROAR-NET COST Action [2]) the need for systematic, formal and open data (knowledge) management practices in this research field. Benchmarking data are generally not well structured, defined, or harmonized; for example, they suffer from the use of different taxonomies, nomenclatures, data representations, etc. This renders performance comparison and validation difficult or impossible, hindering the development and progress of this scientific area, as well as the corresponding potential for innovation and creation of economic value. Initiatives such as IOHprofiler [3], OPTION [4] and the Optimisation Problem Library [5] address some dimensions of this problem. At a different scale and abstraction level, EU Data Strategy policies, initiatives, infrastructures, services and tools, such as the Research and Innovation Common European Data Space [6] and the European Open Science Cloud - OpenAIRE [7], are actively promoting research data management best practices. We propose a roadmap for black-box optimization research data management to integrate with the EU Data Strategy agenda, benefiting from and contributing to the EU open science large-scale infrastructures, services, tools, standardization processes, and best practices. Our key goal is to take a step forward towards FAIR (findable, accessible, interoperable and reusable) research results in the black-box optimization research domain.

17:00
Dynamic Algorithm Configuration for Next-Generation Optimization

ABSTRACT. The process of tuning parameters is called algorithm configuration (AC), and has been widely studied. In recent years, algorithm configuration has become a critical component in advancing the performance of optimization, machine learning, and AI systems. Traditionally, algorithms rely on static, one-size-fits-all configurations, which often fail to adapt to diverse problem landscapes or dynamic environments. This limitation has motivated research into dynamic algorithm configuration (DAC), where parameters, heuristics, or even whole algorithms are adapted online based on problem-specific features, search progress, or environmental feedback. Many works have shown that adjusting parameters during search can greatly improve performance [2, 1]. One application of DAC is my ongoing work, which consists in applying hyperconfigurable algorithm search to the combinatorial multi-port continuous berth allocation problem (MPCBAP). In this work, the following contributions are made: (1) we propose a dynamically configured ALNS method that we call hyper-configurable ALNS (HCALNS) and (2) we apply and evaluate the HCALNS algorithm on the MPCBAP and present its performance on a benchmark of instances from the literature. The results show that the HCALNS outperformed the ALNS algorithm and was able to define a new state-of-the-art for the large instances of MPCBAP. This demonstrates the effectiveness of using self-adjusted parameters to solve the problem. This adaptability leads to better performance, especially in complex or evolving scenarios, as the algorithm fine-tunes its behavior in real time.

15:30-17:30 Session 14G: Practical OR problems
15:30
A technician routing and scheduling problem with a lexicographic objective
PRESENTER: Elise Bangerter

ABSTRACT. The operation and maintenance of electricity distribution networks involve numerous technical interventions that must be carried out daily, while minimizing the operational costs associated with their execution. The problem we address originates from a technician routing and scheduling problem at Électricité de France (EDF). Specifically, we consider a set of teams of technicians and a set of interventions. Each intervention is associated with a specific time window and a set of required skills, while each team has an individual skill profile. The problem consists in assigning teams of technicians to interventions and dispatching them on routes that fulfill skill requirements, respect time windows, and satisfy additional operational constraints such as mandatory lunch breaks and workload limits. The problem involves two objectives to be optimized in lexicographic order: the primary objective is to maximize the total duration of interventions covered; the secondary objective is to minimize the total operational cost, comprising both personnel and travel expenses. We propose two mathematical formulations for the problem under study: a compact formulation and a set packing formulation. For the latter, we design column generation-based algorithms that incorporate different strategies to address the lexicographic objective function. These algorithms are applied to benchmark instances provided by EDF, and their performance is evaluated through a comparative analysis to assess both solution quality and computational efficiency.

16:00
Optimizing a Worldwide-Scale Shipper Transportation Planning in a Carmaker Inbound Supply Chain
PRESENTER: Mathis Brichet

ABSTRACT. We study the shipper-side design of large-scale inbound transportation networks, motivated by Renault's global supply chain. We introduce the Shipper Transportation Design Problem, which integrates consolidation, routing, and regularity constraints. The resulting problem is a rich version of a multicommodity network design problem. We propose a tailored Iterated Local Search (ILS) metaheuristic to solve it. The algorithm combines large-neighborhood search with MILP-based perturbations and exploits bundle-specific decompositions and giant container bounds to obtain scalable lower bounds and effective benchmarks. Computational experiments on real industrial data show that the ILS achieves an average gap of 7.9% to the best available lower bound on world-scale instances with more than 700,000 commodities and 1,200,000 arcs, delivering solutions showing a potential of 23.2% cost reduction compared to the Renault-based benchmark. To our knowledge, this is the first approach to solve shipper-side transportation design problems at such scale, being orders of magnitude bigger than what usual resolution methods from the literature can solve. Our analysis further yields managerial insights: accurate bin-packing models are essential for realistic consolidation, highly regular plans offer the best balance between cost and operational stability, and outsourcing is only attractive in low-volume contexts, while large-scale networks benefit from in-house planning. The final section explores the network design perspectives emerging from our findings.

16:30
Integrating Revenue Management into the Supply Chain for New Vehicles’ Distribution at Renault Group
PRESENTER: Nicolas Corvol

ABSTRACT. Car dealers must daily decide which vehicles to purchase from the manufacturer to replenish their stock. This is an integrated sales and operations planning problem. On the sales side, dealers must anticipate customer choices. Operational costs include inventory and lot-sizing costs. We introduce a Markov Decision Process (MDP) that models this car dealer inventory replenishment problem.

Four aspects make this problem particularly challenging. First, the problem has a combinatorial nature. Any vehicle that can be produced can be ordered, leading to a catalog of over $10^3$ configurations from different option combinations. Second, the uncertainty is endogenous. We do not know what customers would have purchased if different vehicles had been offered. Third, the historical data for demand estimation is censored, which further complicates the problem. Finally, the problem is an MDP with combinatorial state and action spaces, a class of problems known to be difficult to solve.

The core contribution is a novel policy based on Combinatorial-Optimization Augmented Machine Learning (COAML). Numerical experiments show the performance of the approach.

17:00
MINimum CYcle bases with maximum INtersection in TEmporal GRaphs: ALgorithms and complexITY
PRESENTER: Dimitri Watel

ABSTRACT. This work is supported by the FMJH through the PGMO project MINCY-INTEGRALITY. A common problem in chemo- and bio-informatics consists in studying the positions of atoms constituting a molecule, which form a sequence of molecular graphs derived from these positions at discrete time steps. These graphs all share the same vertex set (which represents the set of atoms), but may vary in some edges, namely the ones associated with hydrogen bonds (which may appear or disappear over time), while others (such as covalent bonds) are persistent. Thus, such graphs may be represented by temporal graphs. As the structure of a molecule is essentially related to the interactions among elementary cycles within its associated graph, such a structure is often characterized by a minimum cycle basis of the graph, which is a concise representation of cycles. So, in order to evaluate the conservation of the molecular structure over time, a natural problem is to look for minimum cycle bases (one for each time step) that have as many cycles in common as possible [1]. Some preliminary results concerning the approximability and parameterized complexity of this problem have already been obtained [2], and the goal of this project is to complement them. In this presentation, we introduce the problem, the existing results and the questions we aim to answer.

15:30-17:30 Session 14H: Optimization under uncertainty
15:30
Lot-sizing Problem Under Decision-Dependent Uncertainty: A Probing-Enhanced Stochastic Programming Approach

ABSTRACT. We address the Multi-Item Capacitated Lot-Sizing Problem (MCLSP) under decision-dependent uncertainty through a new probing-enhanced stochastic programming framework. In this setting, the demand is strongly correlated with another random vector, and the decision-maker can strategically acquire partial information about uncertain demand by selecting a component of the correlated random vector to probe, conditioning production decisions on observed covariates. This approach generalizes classical stochastic models by embedding endogenous information acquisition within a three-stage optimization framework. We introduce a compact reformulation that eliminates traditional non-anticipativity constraints, resulting in a stronger linear relaxation and improved computational tractability. We extend classical (k,U) inequalities to the decision-dependent setting and introduce a new class of value-function-based inequalities that strengthen the LP relaxation by capturing the structural relationship between probing and expected recourse costs. These enhancements are embedded within a branch-and-cut algorithm, which also integrates a primal heuristic to accelerate convergence. Computational experiments demonstrate the effectiveness of our proposed methodology. The new formulation, together with value-function inequalities, consistently outperforms classical approaches, reducing optimality gaps by up to 85%. The full-featured branch-and-cut algorithm achieves optimality gaps below 1.5% on average, even under tight capacity constraints and large scenario sets. These results highlight the critical role of structured reformulation, valid inequalities, and heuristic guidance in solving complex decision-dependent stochastic programs. Our findings display the value of proactive information acquisition and offer scalable tools for production planning under uncertainty.

16:00
Stochastic Revelation in Sequential Decision-Making with a Pharmaceutical Application
PRESENTER: Morteza Davari

ABSTRACT. In many real-life situations, such as medical product launches, energy investments, or the rollout of new policies, decision-makers must act before knowing exactly when critical information will become available. We develop new mathematical models that incorporate uncertainty about what will happen and when that uncertainty will resolve. Traditional decision-making tools assume fixed timelines for information revelation; instead, we address more realistic scenarios where information arrives at unpredictable moments, making planning more complex and costly if mishandled.

In this work, we focus on a pharmaceutical application, in particular, the planning and production of medical devices, where firms must be mindful of regulatory approvals. This context has long been recognized as a challenge due to uncertain approval dates, which complicate inventory and production planning decisions (Hill and Sawaya, 2004). The change in demand resulting from regulatory uncertainty is closely connected to that of optimal product rollover, where the timing of phasing out old products and introducing new ones significantly affects demand and, as such, the firm's performance (Lim and Tang, 2006).

Our first contribution is to formulate the problem as a multistage stochastic programming problem (Birge and Louveaux, 2011; Shapiro et al., 2014). We can solve moderate instances using the extensive form of the problem, which provides us with insights into both the solution and how to develop efficient algorithms. We then propose decomposition methods that enable us to solve larger instances, such as those involving more countries and extended time periods, and explore the managerial insights that can be derived from the solutions.

16:30
Learning Data-Driven Uncertainty Set Partitions for Robust and Adaptive Forecasting with Missing Data

ABSTRACT. Short-term forecasting models typically assume the availability of input data (features) when they are deployed and in use. However, equipment failures, disruptions, cyberattacks, may lead to missing features when such models are used operationally, which could negatively affect forecast accuracy, and result in suboptimal operational decisions. In this work, we use adaptive robust optimization and adversarial machine learning to develop forecasting models that seamlessly handle missing data operationally. We propose linear- and neural network-based forecasting models with parameters that adapt to available features, combining linear adaptation with a novel algorithm for learning data-driven uncertainty set partitions. The proposed adaptive models do not rely on the identification of historical missing data patterns and are suitable for real-time operations under stringent time constraints. Extensive numerical experiments on an application of short-term wind power forecasting considering horizons from 15 minutes to 4 hours ahead illustrate that our proposed adaptive models are on par with imputation when data are missing for very short periods (e.g., when only the latest measurement is missing), whereas they significantly outperform imputation when data are missing for longer periods. We further provide insights by showcasing how linear adaptation and data-driven partitions (even with a few subsets) approach the performance of the optimal, yet impractical, method of retraining for every possible realization of missing data.

17:00
Optimal Operation of a Battery System Considering Uncertainties on Energy Prices: a Least-Square Monte Carlo Approach
PRESENTER: Asja Alic

ABSTRACT. Battery Energy Storage Systems (BESS) are expected to play a key role in the fight against climate change. Nevertheless, despite their flexibility and decreasing investment costs, BESS still face high revenue risks. On the one hand, these assets are characterized by inevitable capacity degradation phenomena due to frequent cycling. If not properly considered, this loss in energy capacity may prevent the assets from operating effectively in the energy markets and ultimately affect their financial viability. On the other hand, BESS that rely on merchant revenues (e.g. arbitrage in the Day-Ahead energy markets) are also subject to market uncertainties, which introduce further challenges to their optimal operation and scheduling. Within this context, this work extends a deterministic scheduling model -- developed by EDF researchers -- to a stochastic framework in order to account for uncertainties on energy prices. The original model, formulated with a Dynamic Programming-Mixed Integer Linear Programming approach, associates a "cost" $c \in C$ with each optimal charging/discharging cycle to maximize the revenues while accounting for the non-linear cyclic degradation.

When accounting for uncertainties on the Day-Ahead energy prices, the Bellman equation of the optimization problem at a generic time-step $t$, for a generic value of the energy capacity $E_t$ and energy price $\lambda$ [€/MWh], can be formulated as in (1):

\[
V(t, E_t, \lambda) \;=\; \max_{c \in C} \Big\{ R\big(t, E_t, p_t^*(c,\lambda), \lambda\big) + \mathbb{E}\big[ V\big(t+1, f(t, E_t, p_t^*(c)), \lambda_{t+1}\big) \,\big|\, \lambda_t = \lambda \big] \Big\} \tag{1}
\]

subject to:

\[
f(t, E_t, p_t^*(c)) \;=\; E_t \cdot e^{-k \cdot N_t^{eq}} \tag{2}
\]
\[
\lambda_{t+1} \;\sim\; \mathrm{Pr}_{t+1}(\,\cdot \mid \lambda_t) \tag{3}
\]
\[
p_t^*(c) \;=\; \arg\max_{p_t \in P} \big\{ \lambda_t \cdot (p_t^d - p_t^c) \cdot \Delta t \;-\; c \cdot N_t^{eq} \big\} \tag{4}
\]
\[
N_t^{eq} \;=\; \frac{(p_t^d + p_t^c) \cdot \Delta t}{2 \cdot \hat{E}} \tag{5}
\]

In (1), $R(\cdot)$ [€] indicates the immediate arbitrage revenues. Notably, $\mathbb{E}[\,\cdot \mid \lambda_t=\lambda]$ denotes the conditional expected value, where $\lambda_t$ is a stochastic process modelling the energy price, while $f(t,E_t,p_t^*(c))$ indicates the degraded energy capacity $E_{t+1}$ at the next stage pursuant to cyclic actions. Finally, $\mathrm{Pr}_{t+1}(\,\cdot \mid \lambda_t)$ in (3) is the conditional probability measure of the random variable $\lambda_{t+1}$, while the variable $p_t^*(c)$ [MW] in (1) is the optimal control variable obtained from the resolution of problem (4), subject to standard operational constraints (e.g. energy balance, power limits) and to (5).

In order to compute $\mathbb{E}[\,\cdot\,]$, the Longstaff-Schwartz method (also known as the Least-Squares Monte Carlo approach) [1] is adopted. Specifically, $\mathbb{E}[\,\cdot\,]$ can be approximated by a linear function of $R[t](\lambda)$, which indicates the maximum revenue that a BESS without degradation can obtain at time-step $t$ for a given price $\lambda$.

Simulations have been conducted over a representative planning horizon of ten years with monthly stages and an hourly resolution. The energy price profiles have been obtained from a One-Factor model implemented by researchers at EDF and consider the Italian Day-Ahead energy prices.
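A schematic sketch of the Least-Squares Monte Carlo regression step described above (toy numbers; the basis here is an affine function of a price-dependent feature standing in for $R[t](\lambda)$, and the continuation values are synthetic rather than coming from the One-Factor model and the backward recursion):

import numpy as np

rng = np.random.default_rng(7)

# Simulated Day-Ahead price scenarios at stage t and realized continuation
# values V_{t+1} along each scenario (toy values).
n_scenarios = 500
price_t = rng.lognormal(mean=4.0, sigma=0.3, size=n_scenarios)      # lambda_t
feature = np.maximum(price_t - 50.0, 0.0)   # proxy for R[t](lambda): revenues without degradation
v_next = 2.0 * feature + rng.normal(0.0, 5.0, size=n_scenarios)     # noisy continuation values

# Longstaff-Schwartz: regress continuation values on basis functions of the
# current price to approximate E[V_{t+1} | lambda_t = lambda].
basis = np.column_stack([np.ones_like(feature), feature])
coef, *_ = np.linalg.lstsq(basis, v_next, rcond=None)

def expected_continuation(lam):
    """Approximate conditional expectation used inside the Bellman equation (1)."""
    f = max(lam - 50.0, 0.0)
    return coef[0] + coef[1] * f

print(expected_continuation(60.0))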

15:30-17:30 Session 14I: Time-varying optimization
15:30
A Sequential Bayesian Approach to Gaussian Process Quantile Regression for Optimization
PRESENTER: Hugo Nicolas

ABSTRACT. Quantile regression extends the classical least-squares regression to the estimation of conditional quantiles of a response variable. In the frequentist approach, the quantile regression problem is cast as the minimization of a loss function, possibly complemented with regularization terms. The Bayesian counterpart formulates the problem as posterior inference over a function space. Of particular interest is Gaussian process quantile regression, which formulates the regression problem as posterior inference over the latent conditional quantiles, where prior knowledge is encoded in the form of a Gaussian process.

Existing approaches to Gaussian process quantile regression either perform the regression directly on observed data or resort to sparse approximations to mitigate computational costs. However, in the latter case, the approximation is typically defined over a small set of latent auxiliary variables that act as compact representations of the quantile, but whose locations are fixed in advance, ultimately resulting in suboptimal predictive performance. In this work, we introduce an adaptive strategy that exploits the Gaussian process predictive variance to infill the set of auxiliary variable locations. Inference of the posterior distribution over these auxiliary variables is recast as its Laplace approximation. The impact of finite training data on the auxiliary variables is estimated through bootstrap resampling. Building on this, we introduce an active learning strategy that acquires new observations of the response variable via rejection sampling, with the sampling density guided by the uncertainties in the auxiliary estimates. Our algorithm combines adaptive auxiliary variable allocation and active learning, leading to a sequential approach that ensures a rich and well-balanced representation of the quantile function.

Finally, we extend our quantile regression method and its enrichment criteria to the quantile minimization problem within a Bayesian optimization framework.
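For reference, the frequentist loss underlying quantile regression, whose population minimizer is the conditional $\tau$-quantile (standard definition):

\[
\rho_\tau(u) \;=\; u\,\big(\tau - \mathbf{1}\{u < 0\}\big) \;=\; \max\{\tau u,\;(\tau-1)u\},
\qquad
q_\tau(x) \;\in\; \arg\min_{q}\; \mathbb{E}\big[\rho_\tau(Y - q)\,\big|\, X = x\big].
\]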

16:00
No-Regret Gaussian Process Optimization of Time-Varying Functions

ABSTRACT. We study sequential time-varying optimization, where the underlying reward function evolves over time and is assumed to lie in a Reproducing Kernel Hilbert Space (RKHS). While classical Gaussian Process Upper Confidence Bound (GP-UCB) algorithms achieve sublinear regret in stationary settings, such guarantees typically fail when the reward function drifts unpredictably, creating both theoretical and algorithmic challenges.

To address this, we propose SparQ-GP-UCB, a novel framework that handles time variations via uncertainty injection (UI), which increases the noise variance of past observations to account for drift. To mitigate the resulting information loss, we introduce a sparsification and querying mechanism that strategically updates a limited set of past observations, effectively restoring no-regret guarantees. This leads to a sparse inference problem under heteroscedastic noise, which we solve by extending sparse GP methods to handle time-varying uncertainty.

We prove that no-regret is achievable under this framework, provided a logarithmic number of side queries per time step. This offers a concrete solution to the open problem of achieving no-regret in time-varying settings. Extensive experiments on synthetic and real-world datasets demonstrate that SparQ-GP-UCB consistently outperforms existing baselines, achieving substantially lower cumulative regret.

Building on this, we introduce W-SparQ-GP-UCB, which employs a windowing strategy to reduce the number of queries per iteration from O(log(T)) to o(1). We further establish near-optimality by providing a lower bound on the queries required for no-regret.

These results highlight both the theoretical rigor and practical effectiveness of our approach. By requiring minimal expert input, our method bridges the gap between idealized static optimization and real-world time-varying systems, making it broadly applicable to scenarios where continual adaptation is essential.
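For context, the stationary GP-UCB acquisition rule that the time-varying variants above build upon (standard form):

\[
x_t \;=\; \arg\max_{x \in \mathcal{X}} \;\; \mu_{t-1}(x) \;+\; \beta_t^{1/2}\,\sigma_{t-1}(x),
\]

where $\mu_{t-1}$ and $\sigma_{t-1}$ are the posterior mean and standard deviation of the Gaussian process after $t-1$ observations and $\beta_t$ trades off exploration against exploitation; in SparQ-GP-UCB the posterior is additionally modified by uncertainty injection and sparsification.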

16:30
Control-Based Design of Online Optimization Algorithms

ABSTRACT. We propose a control theory-based approach to the design of online optimization algorithms, with the goal of improving the tracking of the solution trajectory (trajectories) w.r.t. state-of-the-art methods. We focus first on quadratic problems with a time-varying linear term, and use digital control tools (a robust internal model principle) to propose a novel online algorithm that can achieve zero tracking error by modeling the cost with a dynamical system. We prove the convergence of the algorithm for both strongly convex and convex problems. We further discuss the sensitivity of the proposed method to model uncertainties and quantify its performance. We discuss how the proposed algorithm can be applied to general (nonquadratic) problems using an approximate model of the cost, and analyze the convergence leveraging the small gain theorem. We present numerical results that showcase the superior performance of the proposed algorithms over previous methods for both quadratic and nonquadratic problems.

17:00
A kite-based generator for airborne wind energy: modelling and optimisation

ABSTRACT. Using kites to collect wind power and generate energy has been intensively studied in the last decade, see e.g. the survey by M. Diehl et al in [1]. In the framework of the KEEP (Kite Electrical Energy Power) funded by CNRS and gathering researchers from ENSTA Bretagne (well acquainted with the topic after previous studies on kites [2], most notably for boats [3]) and Université Côte d’Azur, we are interested in the analysis of a simple device composed of a kite attached to an arm; having the kite running along a well chosen curve will move the arm and generate electric power. We first build a simple point-mass mechanical model where the kite motion is prescribed to a conical surface modelled on an eight curve. The resulting differential equation can be expressed either as (i) a 5-dimensional second order DAE, or (ii) a dimension 2 second order ODE. For well chosen initial conditions, numerical integration of these two equivalent descriptions exhibit a limit cycle. We report on the optimization of the parameters of the device to maximize power on the limit cycle, as well as a validation of our approach using a third party trusted library

15:30-17:30 Session 14J: Machine Learning for Optimization
15:30
A Learning-Based Approach for Traffic State Reconstruction from Limited Data
PRESENTER: Nail Baloul

ABSTRACT. We propose an efficient method for reconstructing traffic density with low penetration rate of probe vehicles, relying solely on the initial and final positions of a small subset of cars. Due to the scarcity of collected data, we generate artificial trajectories using microscopic dynamical systems and design a machine learning model to approximate the traffic density. Unlike PINN approaches that rely on real-time measurements, our method reduces data needs thus simplifying the collection process. The learning process is framed as a constrained optimization problem, combining observed positions with traffic flow principles. We show that, when trained only on synthetic data, the approximation converges to the LWR macroscopic model as vehicle count grows.
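The macroscopic model targeted by the convergence result is the LWR conservation law (standard form, with a Greenshields-type flux shown as an example):

\[
\partial_t \rho(t,x) \;+\; \partial_x \big(\rho(t,x)\, v(\rho(t,x))\big) \;=\; 0,
\qquad
v(\rho) \;=\; v_{\max}\Big(1 - \frac{\rho}{\rho_{\max}}\Big),
\]

where $\rho$ is the traffic density and $v(\rho)$ the speed-density relation.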

16:00
Code Evolution Graphs: Understanding Large Language Model Driven Design of Algorithms

ABSTRACT. Large Language Models (LLMs) have demonstrated great promise in generating code, especially when used inside an evolutionary computation framework to iteratively optimize the generated algorithms. However, in some cases they fail to generate competitive algorithms or the code optimization stalls, and we are left with no recourse because of a lack of understanding of the generation process and generated codes. We present a novel approach to mitigate this problem by enabling users to analyze the generated codes inside the evolutionary process and how they evolve over repeated prompting of the LLM. We show results for three benchmark problem classes and demonstrate novel insights. In particular, LLMs tend to generate more complex code with repeated prompting, but additional complexity can hurt algorithmic performance in some cases. Different LLMs have different coding "styles" and generated code tends to be dissimilar to other LLMs. These two findings suggest that using different LLMs inside the code evolution frameworks might produce higher performing code than using only one LLM.

16:30
Deep Active Learning-driven Acceleration of Smart Grids Security Assessment
PRESENTER: Alban Marie

ABSTRACT. The reliable and efficient operation of smart grids is contingent on fast and accurate security assessment. Performing such an assessment is increasingly difficult because of the growing complexity and uncertainty of modern power grids. As traditional power flow simulations based on iterative methods are computationally intensive, machine learning (ML)-based approaches have attracted attention. However, such models typically require large amounts of labelled data, which are time-consuming to acquire from power flow oracles. To mitigate this, we propose a deep active learning (DAL)-driven framework [1] that actively selects the most informative operational points for labelling, thereby reducing reliance on exhaustive iterative power flow computations. To the best of the authors' knowledge, deep active learning has never been applied to this type of problem, although a few recent works have emerged on the application of this method to power systems problems [2]. We evaluate multiple DAL query strategies - including Monte Carlo dropout and batch active learning by diverse gradient embeddings (BADGE) - on a binary classification task to detect congestion in a low voltage network, namely the IEEE European low voltage test network (ELVTN). Results show that DAL methods significantly reduce the number of labelled training samples required over a random baseline on the considered security assessment dataset. Our findings suggest that deep active learning is a promising avenue for accelerating the training of ML models for smart grid applications, by reducing dependence on costly labelling that requires power flow results from iterative methods [3]. An illustrative sketch of an uncertainty-based query step is given after the references below.

References

[1] P. Ren et al., “A Survey of Deep Active Learning”, ACM Computing Surveys, 2021.

[2] Z. Shi et al., “Intelligent Power System Stability Assessment and Dominant Instability Mode Identification Using Integrated Active Deep Learning”, IEEE Transactions on Neural Networks and Learning Systems, 2024.

[3] A. Marie et al., “Deep Active Learning-driven Acceleration of Smart Grids Security Assessment”, IEEE ISGT Europe Int. Conf., Malta, Oct. 2024.
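
As referenced at the end of the abstract above, here is a minimal sketch of an MC-dropout query step (hypothetical model architecture and data sizes, not the framework of [1]): the pool points on which stochastic forward passes disagree most are the ones sent to the power flow oracle for labelling.

import torch
import torch.nn as nn

# Toy binary classifier with dropout layers (all sizes hypothetical)
class Classifier(nn.Module):
    def __init__(self, d_in=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Dropout(0.2),
                                 nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2),
                                 nn.Linear(64, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x))

def mc_dropout_query(model, pool, n_query=16, n_samples=20):
    """Return indices of the pool points with the highest predictive variance."""
    model.train()                        # keep dropout active at inference time
    with torch.no_grad():
        preds = torch.stack([model(pool).squeeze(-1) for _ in range(n_samples)])
    uncertainty = preds.var(dim=0)       # disagreement across stochastic passes
    return torch.topk(uncertainty, n_query).indices

model = Classifier()
pool = torch.randn(1000, 32)             # unlabelled operating points (hypothetical)
to_label = mc_dropout_query(model, pool)
print(to_label.shape)                    # these points go to the power flow oracle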

17:00
Model-Based Reinforcement Learning for Exact Combinatorial Optimization
PRESENTER: Paul Strang

ABSTRACT. Modern mixed-integer linear programming (MILP) solvers are built upon the branch-and-bound (B&B) paradigm. Since the 1980s, considerable research and engineering effort has gone into refining these solvers, resulting in highly optimized systems driven by expert-designed heuristics tuned over large benchmarks. However, in operational settings where structurally similar problems are solved repeatedly, adapting solver heuristics to the distribution of encountered MILPs can lead to substantial gains in efficiency, beyond what static, hand-crafted heuristics can offer. Recent research has thus turned to machine learning (ML) to design efficient, data-driven B&B heuristics tailored to specific instance distributions. The variable selection heuristic, or branching heuristic, plays a particularly critical role in the overall computational efficiency of B&B, as it governs the selection of the variables along which the search space is recursively split. A key milestone was achieved by Gasse et al., who first managed to outperform human-expert branching heuristics by learning to replicate the behaviour of a greedy branching expert at lower computational cost. While subsequent works succeeded in learning efficient branching strategies by reinforcement, none have yet matched the performance achieved by imitation learning (IL) approaches. This trend extends beyond MILPs to combinatorial optimization (CO) problems at large, as reinforcement learning (RL) baselines consistently underperform both handcrafted heuristics and IL methods trained to replicate expert strategies across various CO benchmarks. Yet, while the performance of IL heuristics is capped by that of the experts they learn from, the performance of RL agents is, in theory, only bounded by the maximum achievable score. So far, RL agents have only managed to surpass human-expert heuristics in a restricted subset of combinatorial problems: board games. Indeed, AlphaZero (Silver et al.) reached superhuman performance in go, chess and shogi by leveraging environment simulators to perform model-based planning via Monte Carlo Tree Search (MCTS). Inspired by the work of Schrittwieser et al., we seek to extend the use of AlphaZero-like agents beyond board games to exact combinatorial optimization. To that end, we introduce Plan-and-Branch-and-Bound (PlanB&B), a model-based reinforcement learning (MBRL) agent that leverages an internal model of the B&B dynamics to learn improved variable selection strategies. Our computational results suggest that the branching dynamics in B&B can be approximated with sufficient fidelity to enable policy improvement through planning over a learned model, opening the door to broader applications of MBRL to mixed-integer linear programming.
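
A toy illustration of planning over a learned model of the B&B dynamics (not PlanB&B itself; the model and node state below are hypothetical stand-ins): candidate branching variables are scored by querying a learned predictor of the resulting child subtree sizes, with one-step lookahead standing in for the deeper MCTS planning used by AlphaZero-style agents.

import numpy as np

def select_branching_variable(state, candidates, model):
    """One-step lookahead: branch on the variable with the smallest predicted total
    subtree size, where `model(state, var)` returns the predicted sizes of the two
    child subtrees created by branching on `var` at the current node."""
    scores = []
    for var in candidates:
        left_size, right_size = model(state, var)
        scores.append(left_size + right_size)
    return candidates[int(np.argmin(scores))]

# Hypothetical stand-ins for the learned model and the B&B node state
rng = np.random.default_rng(0)
fake_model = lambda state, var: rng.integers(1, 100, size=2)
state, candidates = {"lp_solution": None}, [3, 7, 11, 42]
print("branch on variable", select_branching_variable(state, candidates, fake_model))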