PGMODAYS 2023
PROGRAM FOR TUESDAY, NOVEMBER 28TH

08:30-09:00 Welcome coffee
09:20-10:10 Session 2: Plenary 1: Philippe Drobinski
Location: Auditorium
09:20
Integration of climate variability and climate change in renewable energy planning

ABSTRACT. The trajectory outlined in the Paris Agreement to keep global warming below 2 degrees C dictates not only the timing but also the speed at which the transformation of our energy system must take place to decarbonize energy production. Complying with the Paris Agreement requires reducing the carbon content of energy by about 75% and therefore making a rapid transition from fossil production to production based on low-carbon technologies. Among these technologies are those based on renewable energies. The variability of the climate itself induces a fluctuating or even an intermittent production of variable renewable energy (solar, wind, marine), challenging the balance of the electricity grid. In this context, to speak of energy transition is to face the problem of increasing the penetration of low-carbon energy production while limiting its variability so as to ensure socio-technical feasibility and economic viability. The problem is not simple, and the delicate balance between urgency (drastically reducing emissions) and utopia (choosing a strategy for low-carbon energies and analyzing opportunities and obstacles) needs to be clearly defined.

10:50-11:20 Coffee Break
11:20-12:50 Session 5A: Invited session: Hierarchical Optimization in Energy Systems, organized by Mathias Staudigl
Location: Amphi I
11:20
A regularized variance-reduced modified extragradient method for stochastic hierarchical games
PRESENTER: Mathias Staudigl

ABSTRACT. The theory of learning in games has so far focused mainly on games with simultaneous moves. Recently, researchers in machine learning have started investigating learning dynamics in games involving hierarchical decision-making. We consider an N-player hierarchical game in which the i-th player’s objective comprises an expectation-valued term, parametrized by rival decisions, and a hierarchical term. Such a framework allows for capturing a broad range of stochastic hierarchical optimization problems, Stackelberg equilibrium problems, and leader-follower games. We develop an iteratively regularized and smoothed variance-reduced modified extragradient framework for learning hierarchical equilibria in a stochastic setting. We equip our analysis with rate statements, complexity guarantees, and almost-sure convergence claims. We then extend these statements to settings where the lower-level problem is solved inexactly and provide the corresponding rate and complexity statements.
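For reference (notation assumed here, not the authors' exact scheme), the classical deterministic extragradient update that such frameworks modify is, for the game map $F(x) = (\nabla_{x_i} f_i(x))_{i=1}^N$ and a feasible set $X$ with projection $\Pi_X$,
\[
x_{k+1/2} = \Pi_X\big(x_k - \gamma\,F(x_k)\big), \qquad
x_{k+1} = \Pi_X\big(x_k - \gamma\,F(x_{k+1/2})\big),
\]
to which the talk adds iterative regularization, smoothing of the hierarchical terms, and variance-reduced stochastic estimates of $F$.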

11:50
Control Strategies for Transport Networks under Demand Uncertainty with extensions to Bilevel Optimization

ABSTRACT. We consider transport networks with uncertain demands at the consumer nodes. Network dynamics are given by hyperbolic partial differential equations with linear flux functions and (non-)linear damping terms. At the network junctions suitable coupling conditions are discussed. Demands can be incorporated deterministically or stochastically. One way to model demand uncertainty is to use solutions to suitable stochastic differential equations, as for example Jacobi or Ornstein-Uhlenbeck processes. For the demand satisfaction, we solve a constrained optimal control problem. Controls in terms of network inputs are then calculated explicitly for different assumptions. Extending the model, these explicit inflow controls can be used to set up a bilevel optimization framework. The inner problem consists of solving the aforementioned control problem, whereas the outer problem is concerned with reconstructing desired demand levels from past data. Existence of controls in Lebesgue spaces is ensured when interpreting the hyperbolic conservation law in the weak sense. Some numerical experiments are presented to visualize various features of the model including potential updating strategies in the control problem and strategies of how to observe the network in order to obtain good reconstructions of the desired demand in the bilevel framework.
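As an illustration of the stochastic demand models mentioned above, the following minimal Python sketch simulates an Ornstein-Uhlenbeck demand path with an Euler-Maruyama scheme; the function name and all parameter values are hypothetical and not taken from the talk.

import numpy as np

def ou_demand(d0, mean, theta, sigma, horizon, n_steps, rng):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck demand path
    dD_t = theta * (mean - D_t) dt + sigma dW_t."""
    dt = horizon / n_steps
    d = np.empty(n_steps + 1)
    d[0] = d0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt
        d[k + 1] = d[k] + theta * (mean - d[k]) * dt + sigma * dw
    return d

rng = np.random.default_rng(42)
path = ou_demand(d0=1.0, mean=1.0, theta=2.0, sigma=0.3, horizon=24.0, n_steps=240, rng=rng)
print(path[:5])

A Jacobi process, also mentioned in the abstract, would additionally keep the simulated demand inside a bounded interval.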

12:20
Advanced Optimization Techniques for Grid Design and Real-Time Operations in Power Networks
PRESENTER: Alessandro Zocca

ABSTRACT. In transmission power networks, power flows and network topology are deeply intertwined due to power flow physics. With increasing energy demands and the rapid integration of volatile renewable energy sources like wind and solar, ensuring safe and reliable operation of these networks has grown increasingly complex. A crucial yet underexplored avenue for enhancing network robustness lies in the strategic usage of line-switching and bus-switching actions. These actions enable remote activation or deactivation of transmission lines and local adjustments to network topology but come with the challenge of identifying optimal configurations within an exponentially large search space.

To navigate this complex landscape, advanced mathematical tools are needed to assist decision-makers in the real-time identification of robust and resilient network configurations. In this presentation, I will outline two state-of-the-art approaches: one geared towards grid design and planning, and the other aimed at real-time grid operations and control. The first approach utilizes a mixed-integer linear programming (MILP) framework, drawing insights from recent tree partition literature. Its twofold objective is to alleviate network congestion while at the same time creating a more hierarchical structure that enhances the system’s resilience against cascading failures. The second approach focuses on scalability and employs multi-agent reinforcement learning (MARL) techniques to effectively navigate the combinatorial action space. This new model-free approach is inspired by the 2019 “Learning to Run a Power Network” challenge, initiated by the French network operator RTE to get researchers from different fields involved with the task of using reinforcement learning in the context of power grid control. Together, these approaches offer complementary pathways for exploiting the intrinsic flexibility of modern electrical infrastructures, thereby facilitating a transition towards more sustainable and resilient power systems.

11:20-12:50 Session 5B: Invited session: Algorithms in Generalized Convexity. Applications to Machine Learning and Sparse Optimization, part 1, organized by Michel De Lara
Location: Amphi II
11:20
Algorithms in Generalized Convexity
PRESENTER: Michel De Lara

ABSTRACT. In generalized convexity, the duality product is replaced by a coupling c, the Fenchel conjugate is replaced by the conjugation associated to the coupling, and closed convex functions are replaced by c-convex functions (defined as functions that are equal to their biconjugates). This field has been mostly the subject of theoretical works and, more rarely, of algorithmic ones. Recent work on the l0 pseudonorm has highlighted a so-called E-Capra coupling and conjugation which could lead to the application of generalized convex algorithms to sparse optimization.
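For reference (standard definitions, notation assumed here): given a coupling $c(x,y)$, the $c$-conjugate and $c$-biconjugate of a function $f$ are
\[
f^{c}(y) = \sup_{x}\big( c(x,y) - f(x) \big), \qquad
f^{cc'}(x) = \sup_{y}\big( c(x,y) - f^{c}(y) \big),
\]
and $f$ is $c$-convex when $f = f^{cc'}$; with the bilinear coupling $c(x,y) = \langle x, y\rangle$ one recovers the Fenchel conjugate and the closed convex functions.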

In this context, we intend to develop new algorithms in generalized convexity from both theoretical and numerical standpoints. Here, we focus on the theoretical perspective and we present: how to define, in generalized convexity, Bregman distance and proximal operator; how to construct and analyze dual problems; how to design sparsity-inducing regularizers.

11:50
Mirror Descent Algorithm in Generalized Convexity
PRESENTER: Adrien Le Franc

ABSTRACT. In this talk, we present algorithmic perspectives based on mathematical tools of generalized Fenchel-Moreau conjugacies induced by one-sided linear (OSL) couplings.

We start with a generalization of the notion of Bregman divergence to OSL couplings, a first step before we also extend the mirror descent algorithm to OSL couplings. This contribution is an original step in the algorithmic toolbox of generalized convexity, dominated so far by the cutting angle algorithm.
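For comparison, the classical objects being generalized are the Bregman divergence of a strictly convex function $\varphi$ and the associated mirror descent step with subgradient $g_k$ and step size $\gamma_k$ (standard forms, notation assumed here):
\[
D_{\varphi}(x,y) = \varphi(x) - \varphi(y) - \langle \nabla\varphi(y), x - y\rangle, \qquad
x_{k+1} \in \operatorname*{arg\,min}_{x}\Big\{ \langle g_k, x\rangle + \tfrac{1}{\gamma_k}\, D_{\varphi}(x, x_k) \Big\};
\]
the talk replaces the bilinear pairing underlying these formulas by a one-sided linear coupling.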

We also comment on the possibility to apply our generalized mirror descent algorithm to sparse optimization problems that display Capra-convex properties. In particular, we consider problems displaying the l0 sparse pseudonorm as an objective function. Numerical perspectives are encouraged by the knowledge of a closed-form expression for the Capra-subdifferential of l0, which is required for both the cutting angle and the mirror descent algorithm.

12:20
Abstract Cutting Plane Method applied to Sparse Optimization

ABSTRACT. In usual convexity, a function is closed convex if and only if it is the supremum of its affine minorants. In abstract convexity, affine functions are replaced by other elementary functions belonging to some set H. By definition, a function is abstract H-convex if it is the supremum of its H-minorants. Few abstract H-convex minimization algorithms have been studied, such as the abstract cutting plane method and the abstract branch-and-bound method. Recent work on the l0 pseudonorm has highlighted elementary functions that make the l0 pseudonorm abstract H-convex — the so-called E-CAPRA affine functions. As explicit formulas for the E-CAPRA subdifferential of the l0 pseudonorm have been calculated, it has become possible to apply abstract H-convex minimization algorithms to the special case of the l0 pseudonorm. We present our numerical tests on the E-CAPRA cutting plane method applied to the following sparse optimization problems: the minimization of the l0 pseudonorm in a blunt closed cone, and the computation of the spark of a matrix (known to be an NP-hard problem).

11:20-12:50 Session 5C: Black box optimization
Location: A1.116
11:20
Per-Class Algorithm Selection for Black-Box Optimisation

ABSTRACT. Algorithm selection works on the premise that no single algorithm is the best on each individual instance of a problem. With this in mind, the aim is to choose the most suitable algorithm for a problem, out of a set of available algorithms. Most algorithm selection approaches use precomputed features to predict the most suitable algorithm. However, this both requires experts to design high-quality features, and incurs a cost to compute these features for each problem instance before being able to make a prediction.

Feature-free algorithm selection approaches use models like artificial neural networks to predict the most suitable algorithm, with the raw problem instance description as input. Although this avoids the need for expert designed features and their computation, a specific description for each problem instance is still needed.

In black-box optimisation, neither features nor a raw problem instance description may be available a priori. Information about the specific problem instance may only be obtained by evaluating possible solutions to it, which may be computationally expensive.

Recognising that we cannot make predictions without any cost on a per-instance basis, we take a step back and propose an algorithm selector that makes predictions on a per-class basis. Our proposed method predicts the most suitable algorithm for a class of continuous black-box optimisation problems, using only the basic properties of this class. Specifically, we take into account only the number of decision variables, and the available evaluation budget. This requires neither instance-specific information, nor any expensive computation when applying the selector.

11:50
Expected Runtime of the Compact Genetic Algorithm on the LeadingOnes Benchmark

ABSTRACT. Estimation-of-distribution algorithms (EDAs) are iterative methods that heuristically solve optimization problems by maintaining an explicit probabilistic model of the search space. More precisely, in each iteration an EDA randomly samples some potential solutions according to its probabilistic model, evaluates their objective-function value (referred to as \emph{fitness}), and updates the model so that it has higher probability of sampling a solution with better fitness.

Over the past years the performance of EDAs has been studied extensively via theoretical means. The results suggest that EDAs can have an advantage over other randomized search heuristics in particular when coping with local optima or with noise.

The compact genetic algorithm (cGA) is one of the simpler EDAs. In each iteration it samples only two solutions. Despite its simplicity, there is still no rigorous result on its performance on the classic LeadingOnes benchmark, one of the most-studied benchmarks in the theory of randomized search heuristics.

In this work, we study the performance of the cGA on LeadingOnes. The effectiveness of the cGA highly depends on its sole parameter $\mu$, the so-called hypothetical population size. We prove that when $\mu = \omega(n \log^2n)$, then with high probability the cGA samples an optimal solution in $\mathcal{O}(\mu n \log n)$ iterations.
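As an illustration only, the following minimal Python sketch is a textbook implementation of the cGA maximizing LeadingOnes, with hypothetical parameter choices; it is not the code underlying the theoretical analysis.

import numpy as np

def leading_ones(x):
    """Number of leading one-bits of the bit string x."""
    count = 0
    for bit in x:
        if bit == 1:
            count += 1
        else:
            break
    return count

def cga(n, mu, rng, max_iters=10**6):
    """Compact GA maximizing LeadingOnes on {0,1}^n.
    p[i] is the sampling frequency of a 1 at position i, kept in [1/n, 1 - 1/n]."""
    p = np.full(n, 0.5)
    for t in range(max_iters):
        x = (rng.random(n) < p).astype(int)
        y = (rng.random(n) < p).astype(int)
        if leading_ones(x) < leading_ones(y):
            x, y = y, x                                   # x is the better of the two samples
        p = np.clip(p + (x - y) / mu, 1.0 / n, 1.0 - 1.0 / n)
        if leading_ones(x) == n:
            return t + 1                                  # iterations until an optimum was sampled
    return max_iters

rng = np.random.default_rng(0)
n = 50
print(cga(n, mu=n * np.log(n) ** 2, rng=rng))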

This work is part of the PGMO-funded project \emph{Mathematical Analysis of Complex Randomized Search Heuristics} (PI: Benjamin Doerr).

12:20
Estimation-of-Distribution Algorithms for Multi-Valued Decision Variables
PRESENTER: Martin S. Krejca

ABSTRACT. With apparently all research on estimation-of-distribution algorithms (EDAs) concentrated on pseudo-Boolean optimization and permutation problems, we undertake the first steps towards using EDAs for problems in which the decision variables can take more than two values, but which are not permutation problems. To this aim, we propose a natural way to extend the known univariate EDAs to such variables. Different from a naive reduction to the binary case, it avoids additional constraints.

Since understanding genetic drift is crucial for an optimal parameter choice, we extend the known quantitative analysis of genetic drift to EDAs for multi-valued variables. Roughly speaking, when the variables take $r$ different values, the time for genetic drift to become critical is $r$ times shorter than in the binary case. Consequently, the update strength of the probabilistic model has to be chosen $r$ times lower now.

To investigate how desired model updates take place in this framework, we undertake a mathematical runtime analysis on the $r$-valued LeadingOnes problem. We prove that with the right parameters, the multi-valued UMDA solves this problem efficiently in $O(r\log(r)^2 n^2 \log(n))$ function evaluations.

Overall, our work shows that EDAs can be adjusted to multi-valued problems and gives advice on how to set their parameters.

11:20-12:50 Session 5D: "Fairness"
Location: A1.122
11:20
Proportional Fairness for Multi-Objective Optimization
PRESENTER: Minh Hieu Nguyen

ABSTRACT. This work deals with a particular case of Multi-Objective Optimization called \textit{Bi-Objective Discrete Optimization} (BODO), where the feasible set is discrete, and the two objectives take only positive values. Since the feasible set of a BODO problem is discrete and usually finite, it can theoretically be enumerated to identify the Pareto set, which consists of all Pareto-optimal solutions representing different trade-offs between the two objectives. However, in general, this enumeration is computationally intractable. From a practical point of view, the Central Decision Maker (CDM) may be interested in a reduced Pareto set reflecting the CDM's own preferences, which a computationally tractable algorithm can obtain.

In this work, we propose a novel selection criterion for BODO that can efficiently guide the Weighted Sum Method to find the preferred Pareto-optimal solutions achieving some fair competition between the two objectives based on proportional fairness. The latter aims to provide a compromise between the utilitarian rule, which emphasizes overall system efficiency, and the egalitarian rule, which emphasizes individual fairness. For our purpose, we consider a more general version of proportional fairness to take into account the relative importance of one objective to the other according to the point of view of the CDM. More precisely, we introduce the concept of the \textit{generalized Nash Fairness} solution, i.e., \textit{$\rho$-$NF$} solution, for BODO where $\rho > 0$ is a factor denoting the relative importance of the first objective compared to the second one. This allows the CDM to consider that a $\rho$ percent change of the first objective is equivalent to a one percent change of the second one. Hence, when switching from a $\rho$-$NF$ solution to any other feasible solution, the sum of the factor $\rho$ times the proportional change in the first objective and the proportional change in the second objective is not negative.
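In formulas, and following the wording above (with $f_1, f_2$ the two positive objectives, notation assumed here), a feasible $x^{*}$ is a $\rho$-$NF$ solution when, for every feasible $x$,
\[
\rho\,\frac{f_1(x) - f_1(x^{*})}{f_1(x^{*})} \;+\; \frac{f_2(x) - f_2(x^{*})}{f_2(x^{*})} \;\ge\; 0 .
\]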

We first introduce the concept and discuss the existence of $\rho$-$NF$ solution for BODO. We then show that the $\rho$-$NF$ solution set is a subset of the Pareto set. We also propose a recursive Newton-like algorithm for determining the $\rho$-$NF$ solution set. Finally, an illustrative example of BODO is given.

11:50
Accommodating fairness in a shared-energy allocation problem with uncertainties
PRESENTER: Zoé Fornier

ABSTRACT. With the recent emergence of prosumers (who are both energy producers and consumers), some companies propose to aggregate prosumers in the energy market. This is because prosumers are generally too small to directly enter the electricity market themselves. To implement such aggregators, there is a practical need to study how the prosumers composing the portfolio, with their different assets and production/consumption profiles, can be treated fairly. We introduce and discuss ways to accommodate fairness by design in the modeling of the problem. Following this paradigm, the allocation decided by the aggregator has to be fair, at the cost of some deterioration of the objective. This problem translates to a shared-resource fair allocation problem and can be modeled as a mixed-integer linear program in the deterministic case. First, we revisit conventional approaches to accommodate fairness, including, e.g., the minimax approach, which favors the least well-off agent, and the proportional approach, derived from Nash’s bargaining problem. We also examine measuring the proportional savings of agents instead of their costs. There, we maximize the smallest proportional savings across agents, or we add equal-proportional-savings constraints. Those different approaches are compared based on application case studies of limited size, yet respecting realistic features of assets to be aggregated in electricity markets. Importantly, we eventually take into consideration the uncertainties in the energy market: in prices or energy loads. Accommodating fairness in stochastic resource allocation problems has been less studied in the literature. As a basis, we extend the previous approaches and related investigations to a two-stage stochastic program, by rethinking the previous fairness constraints or objectives in the uncertain case. Using the same case studies, we finally discuss the specifics of the stochastic setup compared to the deterministic one.
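For reference, two of the conventional models revisited above can be written compactly (generic textbook forms, with $c_i$ the cost and $u_i$ a positive utility or saving of agent $i$, notation assumed here):
\[
\min_{x \in \mathcal{X}} \max_{i} c_i(x) \quad \text{(minimax, favoring the least well-off agent)}, \qquad
\max_{x \in \mathcal{X}} \sum_{i} \log u_i(x) \quad \text{(proportional approach, from Nash's bargaining problem)}.
\]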

12:20
Optimal Allocation for a Fair Distribution of Green Energy in Microgrids
PRESENTER: Natalia Jorquera

ABSTRACT. An energy collective is a community of energy system actors who decide to operate in a collaborative manner. In this way, they facilitate the production and distribution of renewable energy. Among the various types of small-scale energy collectives, microgrids have gained prominence. In these networks, multiple users share a distributed energy resource (DER) with the aim of establishing a semi-autonomous system that can operate both connected to the main power grid and in isolation. Since DERs offer more affordable energy but may not be able to fully meet the demand of all users, users compete with each other for these cost-effective DER energy resources, particularly during peak demand hours. Consequently, there is a need to consider fair allocation among smart homes that share the DER in the microgrid. In this work, we investigate a community consisting of different houses, a shared DER, and a common energy storage system. Each house has an energy demand to satisfy over a discrete planning horizon. The demand can be fulfilled either by using the DER, the battery, or by purchasing electricity from the main power grid. Excess energy can be stored in the battery or sold back to the main grid. The objective is to find a supply plan that provides a fair allocation of renewable energy while minimizing the total cost of the microgrid. We formulate the problem as a mixed-integer linear programming model, considering various fairness metrics such as the proportional allocation rule and the min-max fairness. We evaluate the obtained models using real instances with up to 7 houses and a one-day time horizon with 15-minute time intervals. The data used for these instances is sourced from E4C (Interdisciplinary center Energy4Climate of Institut Polytechnique de Paris and École des Ponts ParisTech) and pertains to a smart building located in France.

11:20-12:20 Session 5E: Efficient and Electric Mobility
Location: A1.128
11:20
Decentralized Smart Charging of Large-Scale EV Fleets using Adaptive Multi-Agent Multi-Armed Bandits
PRESENTER: Sharyal Zafar

ABSTRACT. As solar photovoltaics (PVs) and electric vehicles (EVs) integrate into electrical grids, challenges like network congestion and peak demand emerge. Traditional solutions require costly infrastructure investments. However, smart EV charging offers an elegant alternative. Existing solutions include centralized, hierarchical, and decentralized systems. However, scalability and real-time operation may pose challenges for centralized and hierarchical architectures. To address these issues comprehensively, we propose a fully decentralized smart charging system, ensuring scalability, real-time operation, and data privacy. In recent years, decentralized smart charging solutions utilizing standard reinforcement learning (RL) algorithms have gained prominence. However, the integration of multi-level constraints and objectives, encompassing both prosumer-specific local considerations as well as broader network-global considerations, remains a complex challenge. This complexity arises from the absence of predefined reward signals in the majority of smart-grid applications. Furthermore, standard RL algorithms employing deep learning-based approximations may exhibit slower convergence when compared to the multi-armed bandit class of RL algorithms, especially in scenarios where a perfect oracle is unavailable. Thus, we introduce a novel adaptive multi-agent system employing combinatorial 2-armed bandits with Thompson Sampling to minimize EV charging costs, considering variable electricity prices and PV energy production uncertainty, while ensuring fairness among prosumers. Validation includes a large-scale simulation with over 10,000 EVs, comparing our approach to "dumb" charging and centralized mixed-integer linear programming optimization (computed only at small scale). Evaluation criteria encompass cost optimization, constraint satisfaction, fairness, and solution speed. Our results demonstrate the effectiveness of our decentralized approach.
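The following minimal Python sketch illustrates only the underlying Thompson Sampling principle on a generic two-armed Bernoulli bandit; the agents' combinatorial bandits, charging constraints and fairness terms of the actual system are not reproduced, and all names and parameters are hypothetical.

import numpy as np

def thompson_two_armed(true_probs, n_rounds, rng):
    """Thompson Sampling on a 2-armed Bernoulli bandit with Beta(1,1) priors."""
    alpha = np.ones(2)   # Beta parameters: 1 + observed successes per arm
    beta = np.ones(2)    # Beta parameters: 1 + observed failures per arm
    total_reward = 0.0
    for _ in range(n_rounds):
        draws = rng.beta(alpha, beta)           # one posterior sample per arm
        arm = int(np.argmax(draws))             # play the arm with the largest draw
        reward = float(rng.random() < true_probs[arm])
        alpha[arm] += reward
        beta[arm] += 1.0 - reward
        total_reward += reward
    return total_reward

rng = np.random.default_rng(7)
print(thompson_two_armed(true_probs=[0.4, 0.6], n_rounds=1000, rng=rng))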

11:50
Electric Vehicle Coordinated Charging through Hierarchical Optimization models

ABSTRACT. The adoption of electric vehicles (EVs) has emerged as a good opportunity to reduce the transportation sector's contribution to global warming. This is because EVs do not produce greenhouse gases (GHGs) either when driving or when idle. As a consequence, it is estimated that roughly 100 million EVs will be in use worldwide by 2030. However, if this becomes a reality, energy generation must increase and hence imply heavy extra load and power variations on the grid. In order to tackle these negative effects, the so-called \textit{coordinated charging} has been coined and recently studied to allocate power and supply energy such that power availability and infrastructure constraints are not violated. Moreover, renewable energy sources (RESs) and energy storage systems (ESSs) appear as the grid's most important allies. Accordingly, several modeling techniques have been adopted regarding the EV profiles and grid characteristics. Regardless of the model, the aim is to control the energy supply and therefore restrict the maximum power, increasing the dwelling time.

In general, dwell-time flexibility is replaced by fixed but longer dwelling times, e.g., at homes (night) and at workplaces (working hours), representing an important opportunity for new approaches as well as for properly supplying power, since fully charged batteries are not required in a single session. As such, the so-called vehicle-to-grid (V2G, a.k.a. bi-directional power flow) has gained popularity, giving economic benefits to EV owners by representing a distributed energy resource (DER).

There exists a rich literature regarding coordinated charging and the aforementioned elements; however, stakeholders' inherent relationships have only recently been considered. This work precisely overviews studies addressing the coordinated charging problem through hierarchical optimization models, along with the adopted resolution methods, and proposes a novel methodology to consolidate the existing studies. To this end, problems found in the literature are first characterized by identifying the elements involved in the phenomenon, such that a novel taxonomy is proposed to deal with the huge number of possible combinations a study may address.

Thus, the goal of this work is threefold: i) to trace the evolution of models exploiting the hierarchical relationship between stakeholders in the problem under study, ii) to consolidate the existing literature, and iii) to establish an identification method in order to compare state-of-the-art studies.

11:20-12:50 Session 5F: Nonlinear programming : nonconvexity and nonsmoothness
Location: A1.133
11:20
Difference-of-Convex Approach to Chance-Constrained Optimal Power Flow modelling the DSO Power Modulation Lever for Distribution Networks
PRESENTER: Ksenia Syrtseva

ABSTRACT. The increasing expansion of renewable energy sources leads to the growth of uncertainty in the distribution network operation. Short-term operational planning performed by distribution system operators should evolve to address those new operating conditions, in particular to allow the efficient utilization of different flexibility levers. In this work, the use of a chance-constrained Alternating Current Optimal Power Flow (AC-OPF) is proposed to model the operational planning problem, considering the activation of several levers such as power modulation and power curtailment. The correlation between the renewable generation profiles and loads is considered via a joint probability constraint approximated with scenarios. The main novelty of the present manuscript is the adoption of a Difference-of-Convex approach that allows solving the obtained optimization problem without convexification or linearization of the core OPF equations. Furthermore, the approach yields a natural and embarrassingly parallelizable scenario decomposition. The method starts with a reformulation of the model as a Difference-of-Convex optimization problem, and then a proximal bundle method algorithm is applied to solve it. The proposed methodology is tested on a 33-bus distribution network with 11 different values for the safety level defining the probability constraint, ranging from 0.75 to 1.

11:50
An implementable proximal-type method for computing critical points to minimization problems with a nonsmooth and nonconvex constraint

ABSTRACT. Modern optimization problems from stochastic optimization, machine learning, uncertainty quantification, and others often involve functions that fail to be smooth and convex. Nevertheless, many such problems possess some structure that can and should be exploited in numerical optimization. This work investigates implementable necessary optimality conditions for a broad class of nonsmooth and nonconvex optimization problems involving objective and constraint functions that can be expressed as a finite sum of pointwise maxima of finitely many weakly-concave functions. Such a setting is general enough to accommodate nonconvex reliability-based optimization problems for which mathematically sound algorithms are scarce. Application areas include energy management, electric power systems, transportation, engineering design, and manufacturing. Our main contribution is a proximal-type algorithm with convergence guarantees to critical points. Our approach requires neither penalization nor a feasible initial point. Numerical experiments on some stochastic reliability-based optimization problems illustrate the practical performance of the method.

12:20
Accelerating nonlinear programming with GPUs

ABSTRACT. In this talk, we present a roadmap to implement an interior-point nonlinear solver on SIMD (Single Instruction, Multiple Data) architectures, such as GPUs. The interior-point method has become the workhorse method to solve large-scale optimization problems. By using a homotopy method, IPM obtains a smooth reformulation of the Karush-Kuhn-Tucker (KKT) conditions associated with the optimization problem. This allows us to compute the next descent direction directly by solving a KKT linear system, usually using a sparse indefinite solver in the nonconvex case. However, IPM's performance is limited by the indefinite sparse linear solver used under the hood, which has impaired its adoption on hardware accelerators such as GPUs. Luckily, we can rewrite the KKT system in a form more amenable to GPUs using different variants of the Schur complement method. Notably, we show that we can leverage the structure of the inequality and equality constraints using a condensed-then-reduced strategy to compress the sparse KKT system down to a dense matrix, easy to factorize. When used in conjunction with a parallel automatic differentiation backend, this strategy is effective in solving large-scale nonlinear problems on GPUs. We give extensive numerical results on large-scale optimal power flow (OPF) instances to illustrate the effectiveness of the method compared to the state-of-the-art.
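For context, and with notation assumed here, the smoothed KKT conditions that an IPM targets for a problem in the standard form $\min f(x)$ s.t. $c(x) = 0$, $x \ge 0$ are
\[
\nabla f(x) + J(x)^{\top}\lambda - z = 0, \qquad c(x) = 0, \qquad XZe = \mu e, \qquad x, z > 0,
\]
where $J$ is the Jacobian of $c$, $X = \operatorname{diag}(x)$, $Z = \operatorname{diag}(z)$ and $\mu > 0$ is the homotopy (barrier) parameter driven to zero; the condensed-then-reduced strategy operates on the Newton linear system derived from these equations.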

11:20-12:20 Session 5G: Shape optimization, PDE, and beyond
Location: A1.134
11:20
Physics-informed neural networks, PDE solving, hybrid modeling
PRESENTER: Nathan Doumèche

ABSTRACT. Physics-informed neural networks (PINNs) combine the expressiveness of neural networks with the interpretability of physical modeling. Their good practical performance has been demonstrated both in the context of solving partial differential equations and in the context of hybrid modeling, which consists of combining an imperfect physical model with noisy observations. However, most of their theoretical properties remain to be established. We offer some statistical guidelines into the proper use of PINNs.
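A typical PINN training objective, stated here as a generic assumed form rather than the exact setting of the talk, combines a data-fit term with a PDE-residual penalty for a network $u_\theta$:
\[
\mathcal{L}(\theta) = \frac{1}{N_d}\sum_{i=1}^{N_d}\big(u_\theta(x_i) - y_i\big)^2
+ \lambda\,\frac{1}{N_r}\sum_{j=1}^{N_r}\big(\mathcal{F}[u_\theta](z_j)\big)^2,
\]
where $\mathcal{F}$ is the (possibly imperfect) differential operator of the physical model and the $z_j$ are collocation points.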

11:50
Accessibility Constraints in Shape Optimization
PRESENTER: Beniamin Bogosel

ABSTRACT. In this work we are interested in defining, quantifying and optimizing the accessibility of a shape with respect to a given source region. This question is motivated by practical aspects related to support structures in additive manufacturing. Support structures are used to guarantee the correct fabrication of a given part and these structures are removed at the end of the manufacturing process. To remove the supports, the contact region between the part and supports needs to be accessed with particular tools.

Two notions of accessibility in a straight line from a given source region are proposed, which from a mathematical point of view are expressed using distance functions, solutions of eikonal equations. The advantage of these criteria used for measuring the accessibility is the possibility to compute their shape derivatives. This allows us to implement numerical algorithms which optimize the accessibility with respect to variable targets and obstacles. Notably, it is possible to optimize simultaneously the mechanical performances of a structure and the accessibility of its supports.
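For reference, the distance functions mentioned above solve eikonal equations of the generic form (notation assumed here)
\[
|\nabla d(x)| = 1 \ \text{ in the domain}, \qquad d = 0 \ \text{ on the source region},
\]
so that $d$ measures the distance to the source and enters the accessibility criteria and their shape derivatives.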

11:20-12:50 Session 5H: Optimization and statistics I
Location: A1.139
11:20
An Empirical Case of Gaussian Processes Learning in High Dimension: the Likelihood versus Leave-One-Out Rivalry
PRESENTER: David Gaudrie

ABSTRACT. Gaussian Processes (GPs) are semi-parametric models commonly employed in various real-world applications such as statistical modeling and sensitivity analysis. They play an instrumental role in Bayesian optimization since they determine the sequence of iterates where the objective function is evaluated. GPs are particularly useful in the context of small data when there are of the order of a hundred to a thousand points to learn from. However, GPs suffer particularly from the curse of dimensionality: at a fixed number of data points, their predictive capability may decrease dramatically in high dimension (d > 40). In this talk, we investigate such a phenomenon in detail. We illustrate this loss of performance with increasing dimension on simple functions and analyze its underlying symptoms, in particular a tendency to become constant away from the data points. We show that the fundamental problem is one of learning and not one of representation capacity: maximum likelihood, the dominant loss function for such models, can miss regions of optimality of the GP hyperparameters. Failure of maximum likelihood is related to statistical model inadequacy: a model with constant trend is sensitive to dimensionality when fitting quadratic functions, while it handles dimension growth much better for linear functions or Gaussian trajectories generated with the right covariance. Our experiments also show that the leave-one-out loss function is less prone to the curse of dimensionality even for inadequate statistical models.
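For reference, and for a zero-mean GP with covariance matrix $K_\theta$ on $n$ observations $y$, the two competing training losses take the standard forms (assumed here)
\[
\mathcal{L}_{\mathrm{ML}}(\theta) = \tfrac12\, y^{\top} K_\theta^{-1} y + \tfrac12 \log\det K_\theta + \tfrac{n}{2}\log 2\pi, \qquad
\mathcal{L}_{\mathrm{LOO}}(\theta) = \sum_{i=1}^{n}\big( y_i - \hat{y}_{-i}(\theta) \big)^2,
\]
where $\hat{y}_{-i}(\theta)$ is the GP prediction at $x_i$ computed without the $i$-th observation.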

11:50
Optimal Trees and Multivariate Functional Data

ABSTRACT. In this talk, we tailor optimal regression trees to deal with multivariate functional data. A compromise between prediction accuracy and sparsity, as a proxy for interpretability, is sought. In particular, whilst fitting the tree model, the detection of a reduced number of predictor variables and the proportion of the domain used by the model is performed. This is achieved through the inclusion of LASSO-type regularization terms. The resulting optimization problem can be formulated as a nonlinear continuous optimization model with linear constraints. We illustrate the performance of our approach on real-world and synthetic datasets.

12:20
Complexity analysis of regularization methods for implicitly constrained least squares
PRESENTER: Clément Royer

ABSTRACT. Optimization problems constrained by partial differential equations (PDEs) naturally arise in scientific computing, as those constraints often model physical systems or the simulation thereof. In an implicitly constrained approach, the constraints are incorporated into the objective through a reduced formulation. To this end, a numerical procedure is typically applied to solve the constraint system, and efficient numerical routines with quantifiable cost have long been developed. Meanwhile, the field of complexity in optimization, that estimates the cost of an optimization algorithm, has received significant attention in the literature, with most of the focus being on unconstrained or explicitly constrained problems. In this paper, we analyze an algorithmic framework based on quadratic regularization for implicitly constrained nonlinear least squares. By leveraging adjoint formulations, we can quantify the worst-case cost of our method to reach an approximate stationary point of the optimization problem. Our definition of such points exploits the least-squares structure of the objective, leading to an efficient implementation. Numerical experiments conducted on PDE-constrained optimization problems demonstrate the efficiency of the proposed framework.

11:20-12:50 Session 5I: Invited session: First Order Optimization: Acceleration and Convergence, organized by Émilie Chouzenoux and Antonin Chambolle
Location: Auditorium
11:20
Stochastic Primal Dual Hybrid Gradient Algorithm with Adaptive Step-Sizes

ABSTRACT. In this work we propose a new primal-dual algorithm with adaptive step-sizes. The stochastic primal-dual hybrid gradient (SPDHG) algorithm with constant step-sizes [1, 2] has become widely applied in large-scale convex optimization across many scientific fields due to its scalability. While the product of the primal and dual step-sizes is subject to an upper bound in order to ensure convergence, the selection of the ratio of the step-sizes is critical in applications. Up to now, there has been no systematic and successful way of selecting the primal and dual step-sizes for SPDHG. In this work, we propose a general class of adaptive SPDHG (A-SPDHG) algorithms, and prove their convergence under weak assumptions. We also propose concrete parameter-updating strategies which satisfy the assumptions of our theory and thereby lead to convergent algorithms. Numerical examples on computed tomography demonstrate the effectiveness of the proposed schemes.

11:50
A new version of the Kurdyka-Lojasiewicz property to better study the convergence of stochastic optimization algorithms in a non-convex framework

ABSTRACT. Typical well-built first-order stochastic schemes satisfy desirable properties, including conditional descent inequalities and almost sure (a.s.) convergence guarantees on the objective function F, as well as on the involved gradient. In the convex framework, the geometry of F often allows technical shortcuts, leading to the a.s. convergence of the algorithm iterates to a global minimiser. By contrast, for non-convex objective functions, such type of convergence cannot be easily guaranteed. We introduce a new avenue of work to bridge the gap between descent-type inequalities and a.s. convergence of the associated stochastic process despite the lack of convexity of F. To do so, we build upon the strategy which initially consists in using the Kurdyka-Lojasiewicz (KL) property to prove the finite length of a discrete deterministic process (without any convexity assumptions). However, the stochastic setting does not allow the use of the KL theory as such. As a response to this challenge, we propose an alternative version of the generic KL inequality, so as (i) to be consistent with the tools involved in usual proofs of the stochastic approximation theory and (ii) to promote the existence of a.s. convergence results for the iterates.
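For reference, the deterministic KL inequality around a critical point $\bar{x}$ asserts the existence of a concave desingularizing function $\varphi$ with $\varphi(0) = 0$ such that, for all $x$ near $\bar{x}$ with $F(\bar{x}) < F(x) < F(\bar{x}) + \eta$,
\[
\varphi'\big(F(x) - F(\bar{x})\big)\,\operatorname{dist}\big(0, \partial F(x)\big) \;\ge\; 1;
\]
the version proposed in the talk is an alternative to this inequality, compatible with conditional descent inequalities and stochastic approximation arguments.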

12:20
Provable non-accelerations of the heavy-ball method

ABSTRACT. In this work, we show that the heavy-ball (HB) method provably does not reach an accelerated convergence rate on smooth strongly convex problems. More specifically, we show that for any condition number and any choice of algorithmic parameters, either the worst-case convergence rate of HB on the class of L-smooth and μ-strongly convex quadratic functions is not accelerated (that is, slower than 1 − O(κ)), or there exists an L-smooth μ-strongly convex function and an initialization such that the method does not converge. To the best of our knowledge, this result closes a simple yet open question on one of the most used and iconic first-order optimization techniques. Our approach builds on finding functions for which HB fails to converge and instead cycles over finitely many iterates. We analytically describe all parametrizations of HB that exhibit this cycling behavior on a particular cycle shape, whose choice is supported by a systematic and constructive approach to the study of cycling behaviors of first-order methods. We show the robustness of our results to perturbations of the cycle, and extend them to classes of functions that also satisfy higher-order regularity conditions.
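For reference (notation assumed here), the heavy-ball method is the classical two-step iteration with step size $\alpha$ and momentum $\beta$:
\[
x_{k+1} = x_k - \alpha\,\nabla f(x_k) + \beta\,(x_k - x_{k-1}).
\]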

12:50-14:10 Lunch Break
14:10-15:00 Session 6: Plenary 2: Michel Goemans
Location: Auditorium
14:10
Submodular Functions on Modular Lattices

ABSTRACT. Submodular set functions are central to discrete optimization and are ubiquitous in many areas, including machine learning. To some extent, submodular functions can be viewed as the analog of convex functions in the continuous setting. In this talk, I will first introduce and present basic properties of submodular functions and algorithms for associated optimization problems. I will then discuss extensions to the much less studied setting of submodular functions on lattices, especially modular lattices. This perspective will allow us to derive a surprising (and easy) result, connecting two problems whose solutions are seemingly unrelated.
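For reference: a set function $f$ is submodular when $f(A \cup B) + f(A \cap B) \le f(A) + f(B)$ for all sets $A, B$, and, on a lattice with join $\vee$ and meet $\wedge$, a function $f$ is submodular when
\[
f(x \vee y) + f(x \wedge y) \;\le\; f(x) + f(y) \qquad \text{for all } x, y,
\]
the set-function case being recovered on the Boolean lattice of subsets.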

15:30-16:00 Coffee Break
16:00-17:30 Session 8A: Decomposition strategies for large scale problems
Location: A1.134
16:00
Computing Wasserstein Barycenter via operator splitting: the Method of Averaged Marginals
PRESENTER: Daniel Mimouni

ABSTRACT. The Wasserstein barycenter (WB) is an important tool for summarizing sets of probabilities. It finds applications in applied probability, clustering, image processing, etc. When the probability supports are finite and fixed, the problem of computing a WB is formulated as a linear optimization problem whose dimensions generally exceed standard solvers' capabilities. For this reason, the WB problem is often replaced with a simpler nonlinear optimization model constructed via an entropic regularization function so that specialized algorithms can be employed to compute an approximate WB efficiently. Contrary to such a widespread inexact scheme, we propose an exact approach based on the Douglas-Rachford splitting method applied directly to the WB linear optimization problem for applications requiring an accurate WB. Our algorithm, which has the interesting interpretation of being built upon averaging marginals, operates through a series of simple (and exact) projections that can be parallelized and even randomized, making it suitable for large-scale datasets. As a result, our method achieves good performance in terms of speed while still attaining accuracy. Furthermore, the same algorithm can be applied to compute generalized barycenters of sets of measures with different total masses by allowing for mass creation and destruction upon setting an additional parameter. Our contribution to the field lies in the development of an exact and efficient algorithm for computing barycenters, enabling its wider use in practical applications. The approach's mathematical properties are examined, and the method is benchmarked against state-of-the-art methods on several data sets from the literature.
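With fixed finite supports, and with notation assumed here (measures $\nu^{1},\dots,\nu^{m}$, weights $w_k \ge 0$, cost matrices $C^{k}$), the linear program attacked directly by the method is
\[
\min_{\mu,\ \pi^{1},\dots,\pi^{m} \ge 0}\ \sum_{k=1}^{m} w_k \langle C^{k}, \pi^{k}\rangle
\quad\text{s.t.}\quad \pi^{k}\mathbf{1} = \mu,\quad (\pi^{k})^{\top}\mathbf{1} = \nu^{k},\quad k = 1,\dots,m,
\]
whose coupled marginal constraints are the structure exploited by the projections of the splitting method.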

16:30
Parallel approaches to large-scale stochastic energy system design problems

ABSTRACT. We describe our latest computational experience in solving large-scale energy network design problems. This is based on a three-level decomposition approach, where a Benders-style upper level is used to manage the (continuous) design variables, the middle level is an SDDP approach to compute the long-term energy system cost (with given decisions) taking into account the yearly effect of energy storages, and the lower level is the Lagrangian Dual (whose automatic convexification properties provide the necessary dual information to SDDP) of the weekly Unit Commitment problem. Implementing such a complex approach for an extremely large-scale problem on significant parallel hardware (hundreds of cores, over a terabyte of RAM) is only made possible by the use of the advanced SMS++ modelling framework. We report on several crucial aspects of the implementation, such as how the available computing power is best subdivided between the different algorithmic levels (different scenarios in SDDP, different oracles in the Lagrangian Dual, ...), the impact of the different modelling choices in the Unit Commitment, the use of the Lagrangian Dual vs. a continuous relaxation, and others. Our results show that obtaining solutions of problems of hitherto unheard-of size and complexity is possible with the combined use of sophisticated solution approaches, powerful computing hardware and advanced software techniques.

17:00
Column Generation Approaches for the Satellites Constellation Design Problem
PRESENTER: Andrea Bellè

ABSTRACT. The satellites constellation design problem (SCDP) represents a fundamental aerospace application [1,2]: it consists in designing a constellation with the minimum possible number of satellites in order to guarantee periodic observability constraints within a fixed revisit time window and with respect to a given discrete set of target points on the Earth's surface. This problem can be formulated as a large-dimensional mixed-integer linear program (MILP) by suitably discretising the satellites' orbital parameters. Under the assumption that two satellites cannot share the same (Keplerian) orbit in the optimal constellation configuration, we can easily reformulate the SCDP by assuming the number of satellites to be equal to the number of orbits. In this way, the SCDP consists in identifying the minimum number of activated satellites. In such a framework, the number of variables, although it grows polynomially with respect to the input size, can become prohibitively large in realistic instances, making the problem computationally intractable.

In order to address this issue, we propose two column generation approaches, based on different characterisations of restricted master (RMP) and pricing (PP) problems. In the first one we relax the observability constraints linking different satellites in the RMP, and we exploit it to generate new orbits with negative associated reduced costs in the PP; in the second one, we apply a strategy based on an edge covering problem on an appropriately defined multi-graph. For both column generation approaches, the initialisation phase needed to define an initial feasible solution is implemented by locally solving a quite small number of non-convex nonlinear programs (NLPs) through a simple multi-start algorithm (in realistic instances, very few iterations are necessary to obtain convergence).

We present computational experiments on realistic instances, in order to demonstrate the efficacy and efficiency of our novel algorithmic framework. We conclude by briefly sketching two proof strategies for showing the NP-hardness of the SCDP.

[1] Mencarelli, L., Floquet, J., Georges, F., Grenier, D., Mixed integer (non)linear approaches for the satellite constellation design problem, Optimization and Engineering, pp. 1--22 (2022)

[2] Mencarelli, L., Floquet, J., Georges, F., Matheuristics approaches for the satellite constellation design problem, Optimization and Engineering, to appear.

16:00-17:30 Session 8B: Invited session: Algorithms in Generalized Convexity. Applications to Machine Learning and Sparse Optimization, part 2, organized by Michel De Lara
Location: Amphi II
16:00
Learning Energy Networks with Generalized Fenchel-Young Losses
PRESENTER: Mathieu Blondel

ABSTRACT. Energy-based models, a.k.a. energy networks, perform inference by optimizing an energy function, typically parametrized by a neural network. This allows one to capture potentially complex relationships between inputs and outputs. To learn the parameters of the energy function, the solution to that optimization problem is typically fed into a loss function. The key challenge for training energy networks lies in computing loss gradients, as this typically requires argmin/argmax differentiation. In this paper, building upon a generalized notion of conjugate function, which replaces the usual bilinear pairing with a general energy function, we propose generalized Fenchel-Young losses, a natural loss construction for learning energy networks. Our losses enjoy many desirable properties and their gradients can be computed efficiently without argmin/argmax differentiation. We also prove the calibration of their excess risk in the case of linear-concave energies. We demonstrate our losses on multilabel classification and imitation learning tasks.
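As a reference point (notation assumed here), the standard Fenchel-Young loss generated by a regularizer $\Omega$ is
\[
L_{\Omega}(\theta, y) = \Omega^{*}(\theta) + \Omega(y) - \langle \theta, y\rangle \;\ge\; 0,
\]
which vanishes exactly when $y$ is a regularized prediction in $\partial\Omega^{*}(\theta)$; the generalized losses of the talk replace the bilinear pairing $\langle \theta, y\rangle$ by an energy function and $\Omega^{*}$ by the corresponding generalized conjugate.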

16:30
Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective
PRESENTER: Michael Sander

ABSTRACT. The top-$k$ operator returns a sparse vector, where the non-zero values correspond to the $k$ largest values of the input. Unfortunately, because it is a discontinuous function, it is difficult to incorporate in neural networks trained end-to-end with backpropagation. Recent works have considered differentiable relaxations, based either on regularization or perturbation techniques. However, to date, no approach is fully differentiable \emph{and} sparse. In this talk, we propose new differentiable \emph{and} sparse top-$k$ operators. We view the top-$k$ operator as a linear program over the permutahedron, the convex hull of permutations. We then introduce a $p$-norm regularization term to smooth out the operator, and show that its computation can be reduced to isotonic optimization. Our framework is significantly more general than the existing one and allows for example to express top-$k$ operators that select values \emph{in magnitude}. On the algorithmic side, in addition to pool adjacent violator (PAV) algorithms, we propose a new GPU/TPU-friendly Dykstra algorithm to solve isotonic optimization problems. We successfully use our operators to prune weights in neural networks, to fine-tune vision transformers, and as a router in sparse mixture of experts.
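One way to state the starting point (a standard fact, with notation assumed here): the sum of the $k$ largest entries of $x$ is the support function of the convex hull of the permutations of the vector $(1,\dots,1,0,\dots,0)$ with $k$ ones,
\[
\sum_{i=1}^{k} x_{[i]} \;=\; \max_{w \in \mathcal{P}_k} \langle w, x\rangle, \qquad
\mathcal{P}_k = \operatorname{conv}\big\{ w \in \{0,1\}^n : \textstyle\sum_i w_i = k \big\},
\]
and a maximizing $w$ is the hard top-$k$ mask; the $p$-norm regularization smooths this linear program and reduces its computation to isotonic optimization.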

17:00
Gradient descent with a general cost
PRESENTER: Flavien Léger

ABSTRACT. We present a new class of gradient-type optimization methods that extends vanilla gradient descent, mirror descent, Riemannian gradient descent, and natural gradient descent. Our approach involves constructing a surrogate for the objective function in a systematic manner, based on a chosen cost function. This surrogate is then minimized using an alternating minimization scheme. Using optimal transport theory we establish convergence rates based on generalized notions of smoothness and convexity. We provide local versions of these two notions when the cost satisfies a condition known as nonnegative cross-curvature. In particular our framework provides the first global rates for natural gradient descent and the standard Newton’s method.

16:00-17:30 Session 8C: Invited session: Quantum computing and combinatorial optimisation, organized by Andrea Simonetto
Location: Auditorium
16:00
Optimizing Variational Circuits for Higher-Order Binary Optimization
PRESENTER: Andrea Simonetto

ABSTRACT. Variational quantum algorithms have been advocated as promising candidates to solve combinatorial optimization problems on near-term quantum computers. Their methodology involves transforming the optimization problem into a quadratic unconstrained binary optimization (QUBO) problem. While this transformation offers flexibility and a ready-to-implement circuit involving only two-qubit gates, it has been shown to be less than optimal in the number of employed qubits and circuit depth, especially for polynomial optimization. On the other hand, strategies based on higher-order binary optimization (HOBO) could save qubits, but they would introduce additional circuit layers, given the presence of higher-than-two-qubit gates. In this talk, we present HOBO problems and propose new approaches to encode their Hamiltonian into a ready-to-implement circuit involving only two-qubit gates. Our methodology relies on formulating the circuit design as a combinatorial optimization problem, in which we seek to minimize circuit depth. We also propose handy simplifications and heuristics that can solve the circuit design problem in polynomial time. We evaluate our approaches by comparing them with the state of the art, showcasing clear gains in terms of circuit depth.
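For reference, the two problem classes contrasted above are (notation assumed here)
\[
\text{QUBO:}\ \min_{x \in \{0,1\}^n} x^{\top} Q x, \qquad
\text{HOBO:}\ \min_{x \in \{0,1\}^n} \sum_{S \subseteq \{1,\dots,n\}} c_S \prod_{i \in S} x_i,
\]
where HOBO keeps monomials of degree larger than two instead of introducing auxiliary binary variables to quadratize them.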

16:30
Classic-inspired quantum generation scheme for the parallel sequencing problem
PRESENTER: Thibaud Ardant

ABSTRACT. The Resource-Constrained Project Scheduling Problem (RCPSP) is to find a minimum makespan schedule for the jobs of a project under precedence constraints with limited renewable resource availability [1]. The RCPSP is an NP-hard problem known to be computationally challenging to solve. In the present work, we explore quantum computing approaches for the parallel sequencing variant defined as a special case of the RCPSP, where all jobs have a unit processing time and a unique predecessor. This variant is a generalization of Hu’s polynomial case [2] to non-unit renewable resources. The key principle is to derive a feasible solution, namely a schedule, from a job permutation, i.e., an order of the jobs respecting the precedence constraints. Our approach relies on the quantum amplitude amplification algorithm [3], which requires marking elements via a function called an oracle. Taking advantage of quantum superposition, this algorithm achieves a quadratic speedup on a large family of search problems for which good classical heuristics exist. The oracle is inspired by a classical parallel Schedule Generation Scheme (SGS) [4]. Parallel SGS is a method iterating over time periods to build a schedule based on priority rules. The oracle simulates the generation scheme on a predefined time horizon T, subsequently marking the permutations for which all jobs can be completed within the given horizon. The classic-inspired quantum generation scheme is as follows. A preprocessing step computes an upper bound $\bar{T}$ on the optimal makespan using a classical heuristic. This upper bound is used as an input for the oracle which marks improving solutions. The quantum algorithm starts by generating a superposition of all states corresponding to valid permutations w.r.t. the precedence constraints. Then the amplitude amplification algorithm amplifies the density of marked solutions. Finally, a measurement retrieves an improving solution if one exists. The oracle and the generation of the superposition restricted to valid permutations have been implemented using the Qiskit library. Preliminary experiments performed on small instances exhibit the capability of the oracle to mark the improving valid solutions, thus tending to provide a proof of concept. A perspective would be to set the upper bound $\bar{T}$ iteratively to a value corresponding only to the few best solutions by performing a quantum counting [3], thus estimating the number of marked solutions. Such a scheme would solve the problem to optimality while taking advantage of the quadratic speed-up obtained from both amplitude amplification and estimation compared to their deterministic counterparts.

17:00
Quantum Contextual Optimal Transport
PRESENTER: Nicola Mariella

ABSTRACT. The theory of optimal transport (OT) is a mathematical framework that focuses on finding the optimal transport plan or map that minimizes the cost of moving resources from a source distribution to a target distribution. The cost is typically defined as a distance or a dissimilarity measure between the source and target points, and the optimal transport plan aims to minimize this cost while satisfying certain constraints. In biology, OT gained popularity in various single-cell data analysis tasks, ranging from reconstructing trajectories of cell evolution, to predicting cell responses to therapeutic interventions, to inferring spatial and signalling relationships between cells. In these applications, the source and target distributions correspond to measurements of different biomolecules (e.g., mRNA, proteins) at a single-cell level. In many applications, data measurements of source and target distributions are coupled to a context variable. In such cases, one might aspire to learn a global transportation map that is parameterised through the context and thus facilitates the prediction of target states (distributions) from source states, even for an unseen context. This extension of OT, contextual OT, consists of a model that predicts the transportation map conditioned on a context variable. We propose a quantum formulation that leverages the following concept. Given any unitary matrix (possibly depending on context and learning parameters), unitarity implies that the Hadamard product of the unitary with its complex conjugate is always a doubly stochastic matrix (DSM), which is a special case of a transportation map. Also, we prove that a generic transportation map can be embedded into a DSM. This concept shows that there is a natural link between transportation maps and unitary operators. Moreover, the latter could be a key component for an inductive bias that is well-suited for quantum computation.
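The elementary fact behind this construction can be stated directly (notation assumed here): if $U$ is unitary, the Hadamard product $U \odot \overline{U}$ has entries $|U_{ij}|^2$ and
\[
\sum_{i} |U_{ij}|^2 \;=\; \sum_{j} |U_{ij}|^2 \;=\; 1,
\]
so $U \odot \overline{U}$ is doubly stochastic; parametrizing $U$ by the context and the learning parameters therefore yields a transportation map by construction.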

16:00-17:30 Session 8D: Evolutionary algorithms
Location: A1.116
16:00
Evolutionary Algorithm Outperforms MILP Solver on the Double-Floor Single-Row Facility Layout Problem
PRESENTER: Junqi Liu

ABSTRACT. We consider facility planning and design in the single-row facility layout problem (SRFLP) [1] for double-floor spaces (DF-SRFLP), aiming to minimize land and material handling costs by efficiently arranging facilities along a given transport side of a double-floor space. Previously, this problem has been solved based on a mixed-integer linear programming formulation. Our fast (1+1) EA operates by initially constructing a solution that aligns with the characteristics of the SRFLP. It then determines mutations using a heavy-tailed distribution. To generate new solutions, we employ local insert and swap operators and incorporate a restart strategy. We demonstrate the practical value of the fast (1+1) EA for the DF-SRFLP on numerical experiments (datasets with 9 to 80 facilities) and real-world instances, including assembly workshops (size 14) and footwear manufacturing workshops (size 54). The results of the numerical experiments show that the solve time of the MILP solver (GUROBI 10.0.1) grows exponentially with the problem size, while that of the (1+1) EA remains relatively stable; on large-scale instances, the MILP solver is not able to return suitable values in reasonable time. In a practical application involving a 14-facility assembly line shop, the (1+1) EA achieves a layout that reduces material handling costs by 24% compared to the original layout. Furthermore, in the context of a 54-facility shoe factory, our proposed algorithm not only successfully addresses the DF-SRFLP but also offers a valuable reference for establishing an improved layout.
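As a generic illustration (not the authors' implementation), the following Python sketch shows a (1+1) EA over permutations with a heavy-tailed number of local insert/swap moves per mutation; the toy cost function merely stands in for the DF-SRFLP material handling cost.

# Generic sketch of a (1+1) EA over permutations with a heavy-tailed
# number of local moves per mutation; the toy objective is a placeholder.
import random

def toy_cost(perm, flow):
    # Placeholder cost: flow-weighted distance between facility positions.
    pos = {f: i for i, f in enumerate(perm)}
    return sum(w * abs(pos[i] - pos[j]) for (i, j), w in flow.items())

def heavy_tailed_k(n, beta=1.5):
    # Sample k in {1, ..., n} with probability proportional to k^(-beta).
    weights = [k ** (-beta) for k in range(1, n + 1)]
    return random.choices(range(1, n + 1), weights=weights)[0]

def mutate(perm):
    child = list(perm)
    for _ in range(heavy_tailed_k(len(perm))):
        i, j = random.sample(range(len(child)), 2)
        if random.random() < 0.5:            # swap two positions
            child[i], child[j] = child[j], child[i]
        else:                                 # remove and reinsert
            child.insert(j, child.pop(i))
    return child

def one_plus_one_ea(n, flow, budget=5000):
    best = list(range(n))
    random.shuffle(best)
    best_cost = toy_cost(best, flow)
    for _ in range(budget):
        child = mutate(best)
        c = toy_cost(child, flow)
        if c <= best_cost:                    # accept if not worse
            best, best_cost = child, c
    return best, best_cost

if __name__ == "__main__":
    random.seed(1)
    flow = {(0, 3): 5.0, (1, 2): 2.0, (2, 4): 1.0, (0, 1): 3.0}
    print(one_plus_one_ea(5, flow))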

16:30
Wasserstein Barycenter-based Evolutionary Algorithm for the optimization of sets of points
PRESENTER: Babacar Sow

ABSTRACT. We consider the problem of optimizing black-box functions having sets of points as inputs. Functions defined over sets are of great practical use. Examples of point sets arise when describing turbine positions in wind farms, designs of experiments, or designs of sensor or actuator networks. In this work, we propose an evolutionary algorithm for functions defined over clouds of points, where the size of the clouds may vary. The clouds of points are represented as discrete uniform measures, and we use the Wasserstein barycenter to define crossover and mutation operators.
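For two clouds of the same size, a Wasserstein-barycenter crossover reduces to an optimal assignment followed by averaging matched points; the sketch below illustrates this equal-size special case only (the talk's setting allows clouds of varying sizes).

# Sketch: a barycenter-based crossover for two clouds with the same number
# of points. Under this equal-size, uniform-weight assumption the W2
# barycenter reduces to an optimal assignment plus averaging of matched
# points; the general variable-size setting needs more machinery.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def barycenter_crossover(cloud_a, cloud_b, t=0.5):
    """Return the point cloud interpolating cloud_a (t=0) and cloud_b (t=1)."""
    cost = cdist(cloud_a, cloud_b, metric="sqeuclidean")
    rows, cols = linear_sum_assignment(cost)   # optimal matching
    return (1.0 - t) * cloud_a[rows] + t * cloud_b[cols]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    parent_a = rng.uniform(0, 1, size=(6, 2))   # e.g. 6 turbine positions
    parent_b = rng.uniform(0, 1, size=(6, 2))
    print(barycenter_crossover(parent_a, parent_b).round(3))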

17:00
Runtime Analyses of Multi-Objective Evolutionary Algorithms in the Presence of Noise
PRESENTER: Benjamin Doerr

ABSTRACT. In single-objective optimization, it is well known that evolutionary algorithms, even without further adjustments, can withstand a certain amount of noise in the evaluation of the objective function. In contrast, this question is not at all understood for multi-objective optimization.

In this work~\cite{DinotDHW23}, we conduct the first mathematical runtime analysis of a simple multi-objective evolutionary algorithm (MOEA) on a classic benchmark in the presence of noise in the objective function. We prove that when bit-wise prior noise with rate $p \le \alpha/n$, $\alpha$ a suitable constant, is present, the \emph{simple evolutionary multi-objective optimizer} (SEMO) without any adjustments to cope with noise finds the Pareto front of the OneMinMax benchmark in time $O(n^2\log n)$, just as in the case without noise. Our proofs suggest that the strong robustness of the MOEA stems from its implicit diversity mechanism designed to enable it to compute a population covering the whole Pareto front.

Interestingly, this result only holds when the objective value of a solution is determined only once and the algorithm from that point on works with this, possibly noisy, objective value. We prove that when all solutions are reevaluated in each iteration, then any noise rate $p = \omega(\log(n)/n^2)$ leads to a super-polynomial runtime. This is very different from single-objective optimization, where it is generally preferred to reevaluate solutions whenever their fitness is important and where examples are known such that not reevaluating solutions can lead to catastrophic performance losses.
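A hedged Python sketch of this setup (SEMO on OneMinMax with one-shot bit-wise prior noise; problem size, noise rate and budget are illustrative):

# Sketch of SEMO on OneMinMax with one-shot bit-wise prior noise: each
# solution is evaluated once on a noisy copy (every bit flipped
# independently with probability p) and that value is kept afterwards.
import random

def noisy_onemin_max(x, p):
    noisy = [b ^ (random.random() < p) for b in x]
    ones = sum(noisy)
    return (len(x) - ones, ones)   # maximize both objectives

def dominates(f, g):
    return all(a >= b for a, b in zip(f, g)) and f != g

def semo(n=20, p=0.02, budget=20000):
    x = [random.randint(0, 1) for _ in range(n)]
    pop = [(tuple(x), noisy_onemin_max(x, p))]   # (solution, stored value)
    for _ in range(budget):
        parent, _ = random.choice(pop)
        child = list(parent)
        child[random.randrange(n)] ^= 1          # one-bit mutation
        fc = noisy_onemin_max(child, p)          # evaluated only once
        if any(dominates(f, fc) for _, f in pop):
            continue
        pop = [(s, f) for s, f in pop if not dominates(fc, f) and f != fc]
        pop.append((tuple(child), fc))
    return pop

if __name__ == "__main__":
    random.seed(0)
    front = sorted(f for _, f in semo())
    print(front)   # ideally the stored values cover (0,n), ..., (n,0)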

This work is part of the PGMO-funded project \emph{Mathematical Analysis of Complex Randomized Search Heuristics} (PI: Benjamin Doerr).

16:00-17:30 Session 8E: Logistics
Location: A1.139
16:00
Renault's Forward Logistic Problem
PRESENTER: Mathis Brichet

ABSTRACT. The inbound supply chain of a carmaker is designed to transport tens of thousands of types of car parts from thousands of suppliers to dozens of plants. This logistics operation represents tens of millions of euros and tens of thousands of tons of CO2 emissions per year. The arrival of a new vehicle in the catalog of a carmaker requires a redesign of the network. The resulting problem is a rich version of a multicommodity network design problem. Its difficulty stems from the combination of a large network and a huge number of commodities to be routed, nearly a million. The considered problem is therefore orders of magnitude bigger than what usual solution methods from the literature can handle. We model this network design problem and propose a heuristic algorithm to solve it in reasonable time. We provide a data analysis of Renault's instance and a lower bound to assess algorithm performance. Numerical experiments show significant improvement over Renault's current solution but also point to a large margin of possible further improvement. We conclude with the future research perspectives arising from the insights gathered in this work.

16:30
A Branch-and-Cut algorithm for the Balanced Traveling Salesman Problem

ABSTRACT. The balanced traveling salesman problem (BTSP) is a variant of the traveling salesman problem in which one seeks a tour that minimizes the difference between the largest and smallest edge costs in the tour. The BTSP, which is obviously NP-hard, was first investigated by Larusic and Punnen in 2011 [1]. They proposed several heuristics based on the double-threshold framework, which converge to good-quality, though not always optimal, solutions. In this paper, we design a special-purpose branch-and-cut algorithm for solving the BTSP exactly. In contrast with the classical TSP, due to the BTSP's objective function, the efficiency of algorithms for solving the BTSP depends heavily on correctly determining the largest and smallest edge costs in the tour. In the proposed branch-and-cut algorithm, we develop several mechanisms based on local cutting planes, edge elimination, and variable fixing to locate those edge costs more precisely. Other critical ingredients of our method are algorithms for initializing lower and upper bounds on the optimal value of the BTSP, which serve as warm starts for the branch-and-cut algorithm. Experiments on the same testbed of TSPLIB instances show that our algorithm can solve 63 out of 65 instances to proven optimality.
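For concreteness, a brute-force reference for the BTSP objective on tiny instances; this only illustrates the max-minus-min objective and is not the branch-and-cut method of the talk.

# Reference sketch of the BTSP objective: for each tour, the cost is
# (largest edge cost) - (smallest edge cost); brute force over tours is
# only feasible for very small instances.
from itertools import permutations

def btsp_brute_force(cost):
    n = len(cost)
    best_tour, best_val = None, float("inf")
    for perm in permutations(range(1, n)):       # fix city 0 as the start
        tour = (0,) + perm
        edges = [cost[tour[i]][tour[(i + 1) % n]] for i in range(n)]
        val = max(edges) - min(edges)            # balance of the tour
        if val < best_val:
            best_tour, best_val = tour, val
    return best_tour, best_val

if __name__ == "__main__":
    cost = [[0, 3, 7, 4],
            [3, 0, 5, 6],
            [7, 5, 0, 3],
            [4, 6, 3, 0]]
    print(btsp_brute_force(cost))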

17:00
Pickup and Delivery Problem with Cooperative Robots
PRESENTER: Chi Nguyen

ABSTRACT. This paper explores the Pickup and Delivery Problem with Cooperative Robots (PDP-CR), a new problem that has emerged in warehouse settings with automated vehicles where multiple robots must cooperate to complete a task, i.e., when a task requires more than one robot. In PDP-CR, a fleet of identical robots has to handle a set of tasks. Each task consists of a pickup point, a destination, a processing time, and the number of robots required. The objective is to complete all tasks with the initial fleet of robots while minimizing the makespan, i.e., the time until the last robot returns to the depot. PDP-CR is an NP-hard problem. We propose two MILP formulations that are solved using CPLEX. The primary goal of this paper is to introduce PDP-CR and to develop, test, and compare the mathematical models proposed for it.

16:00-17:30 Session 8F: Large scale optimization
Location: Amphi I
16:00
Monitoring the Convergence Speed of PDHG to Find Better Primal and Dual Step Sizes

ABSTRACT. Primal-dual algorithms for the resolution of convex-concave saddle point problems usually come with one or several step size parameters. Within the range where convergence is guaranteed, choosing the step sizes well can make the difference between a slow and a fast algorithm. A usual way to set step sizes is to ensure a fair balance between the amounts of change of the primal and dual variables. This can be done adaptively and gives much better performance than fixed step sizes.

In this work, we show that on quadratic problems, it is possible to find even better step sizes. We base our method on a spectral radius estimation procedure and try to minimize this spectral radius, which is directly related to the rate of convergence. Building on power iterations, we produce spectral radius estimates that are always smaller than 1 and also work in the case of conjugate principal eigenvalues.

For strongly convex quadratics, we show that our step size rules yield an algorithm as fast as inertial gradient descent. Moreover, since our spectral radius estimates rely only on residual norms, our method can be readily adapted to more general convex-concave saddle point problems.
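As a rough illustration of the idea (not the authors' refined estimator), the following sketch runs PDHG on a small equality-constrained quadratic program and estimates the contraction factor of the linear iteration from successive residual norms; the problem data, step sizes and averaging window are made up.

# On an equality-constrained quadratic program the PDHG iteration is
# linear, so ratios of successive residual norms behave like a power
# iteration and give a rough estimate of the governing spectral radius.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.normal(size=(n, n))
Q = M @ M.T + np.eye(n)              # strongly convex quadratic term
c = rng.normal(size=n)
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

tau = sigma = 0.9 / np.linalg.norm(A, 2)   # ensures tau*sigma*||A||^2 < 1

x, y = np.zeros(n), np.zeros(m)
x_solve = np.linalg.inv(Q + np.eye(n) / tau)   # resolvent of the quadratic
residuals = []
for _ in range(300):
    x_new = x_solve @ (x / tau + c - A.T @ y)              # primal step
    y_new = y + sigma * (A @ (2 * x_new - x) - b)          # dual step
    residuals.append(np.linalg.norm(np.concatenate([x_new - x, y_new - y])))
    x, y = x_new, y_new

# Crude spectral-radius estimate: geometric mean of residual ratios over
# the last iterations (power-iteration-like behaviour).
r = np.array(residuals[-50:])
rho_hat = (r[-1] / r[0]) ** (1.0 / (len(r) - 1))
print("estimated contraction factor:", round(rho_hat, 4))
print("feasibility ||Ax-b||:", round(np.linalg.norm(A @ x - b), 6))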

16:30
A Speed Restart Scheme for a Dynamics with Hessian-Driven Damping
PRESENTER: Juan Peypouquet

ABSTRACT. In this talk, we analyze a speed restarting scheme for the inertial dynamics with Hessian-driven damping, introduced by Attouch et al. (J Differ Equ 261(10):5734–5783, 2016). We establish a linear convergence rate for the function values along the restarted trajectories. Numerical experiments suggest that the Hessian-driven damping and the restarting scheme together improve the performance of the dynamics and corresponding iterative algorithms in practice.

17:00
The Smoothed Duality Gap as a Stopping Criterion
PRESENTER: Iyad Walwil

ABSTRACT. We optimize the running time of primal-dual algorithms by optimizing their stopping criteria for solving convex optimization problems under affine equality constraints. We study the relations between several stopping criteria and show under which conditions they accurately detect optimal solutions, which allows terminating the algorithm earlier, with fewer iterations. We examine the accuracy of three stopping criteria: the theoretically sound but uncomputable in practice "optimality gap and feasibility error", the widely used "Karush-Kuhn-Tucker (KKT) error", and the newer, widely applicable but less well-studied "smoothed duality gap". Assuming metric sub-regularity and a quadratic error bound of the smoothed gap, we establish that both the KKT error and the smoothed duality gap provide practical upper bounds for the optimality gap and approximate it effectively. Furthermore, we establish comparability between the KKT error and the smoothed duality gap under certain conditions. Numerical experiments on quadratic programs, basis pursuit, and quadratic programs with non-negative weights corroborate these findings and show the superior stability of the smoothed duality gap over the KKT error.
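A minimal sketch of how the two computable criteria can be evaluated on an equality-constrained quadratic program; the smoothed duality gap variant used here places both smoothing centers at the current iterate, which may differ from the exact definition used in the talk.

# Hedged sketch: KKT error and one common variant of the smoothed duality
# gap for min 0.5 x'Qx - c'x subject to Ax = b.
import numpy as np

def kkt_error(Q, c, A, b, x, y):
    stationarity = np.linalg.norm(Q @ x - c + A.T @ y)
    feasibility = np.linalg.norm(A @ x - b)
    return stationarity + feasibility

def smoothed_gap(Q, c, A, b, x, y, beta=1.0):
    f = lambda z: 0.5 * z @ Q @ z - c @ z
    lagr = lambda z, w: f(z) + w @ (A @ z - b)
    # sup over y' of L(x, y') - (beta/2)||y' - y||^2 is attained at
    # y' = y + (Ax - b)/beta and adds ||Ax - b||^2 / (2 beta).
    upper = lagr(x, y) + np.linalg.norm(A @ x - b) ** 2 / (2 * beta)
    # inf over x' of L(x', y) + (beta/2)||x' - x||^2 is a linear solve here.
    x_star = np.linalg.solve(Q + beta * np.eye(len(x)), c - A.T @ y + beta * x)
    lower = lagr(x_star, y) + beta / 2 * np.linalg.norm(x_star - x) ** 2
    return upper - lower

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m = 6, 2
    M = rng.normal(size=(n, n))
    Q = M @ M.T + np.eye(n)
    c, A, b = rng.normal(size=n), rng.normal(size=(m, n)), rng.normal(size=m)
    x, y = rng.normal(size=n), rng.normal(size=m)
    print("KKT error    :", round(kkt_error(Q, c, A, b, x, y), 4))
    print("smoothed gap :", round(smoothed_gap(Q, c, A, b, x, y), 4))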

16:00-17:30 Session 8G: Pricing strategies
Location: A1.122
16:00
Optimal Carbon Pricing Based on Dynamical Climate Model

ABSTRACT. In this work, we adapt an optimal pricing framework to the case of carbon pricing by including a climate model. We consider a set of countries, each seeking to maximize its own utility function, which depends on its CO2 emissions and the price of carbon. To set this price, we introduce a regulatory authority seeking to maximize a global utility function that takes climate damages into account. These damages are a function of the temperature, which is coupled to the CO2 emissions through a climate model comprising a carbon cycle model and a temperature dynamics model. After deriving the optimal emission level for a given price, we analyze the performance of the sequence of updated prices produced by such a pricing method over a given time span, compared to a fixed price.

16:30
Optimal Price Menu Design of Electric Vehicle Charging Stations
PRESENTER: Alix Dupont

ABSTRACT. The growing adoption of Electric Vehicles (EVs) presents many challenges and opportunities for Charging Station Operators (CSOs), who are called to determine their pricing strategies taking the EV users' charging behavior into account. In our model, the CSO's "price menu" proposal consists of different prices for different levels of charging rates (unlike settings where prices are associated with different completion delays). We formulate the proposed price menu design problem as a bi-level optimization problem. At the lower level, the EV users choose one of the options in the menu or may choose not to charge. At the upper level, the CSO designs the prices per charging rate. We provide analytical and numerical results on the optimal price menu design, considering at the upper level (i) the maximization of the CSO's profit and (ii) the maximization of social welfare. We investigate the properties of the optimal price menu under both complete information on the EV users' utilities and incomplete information, where the CSO only knows the distribution of the utility across the EV user population.

17:00
Pricing strategy for dual-channel retailing using a stochastic attraction-demand model
PRESENTER: Minh Tam Tran

ABSTRACT. Nowadays, many retailers sell their goods to customers both online and through brick-and-mortar stores, creating dual-channel supply chains. Determining the optimal selling prices under customer behavior is a crucial issue in dual-channel supply chains. In this study, we propose and investigate a dual-channel pricing model that accounts for the customers' channel-choice behavior. Motivated by market-share models, we assume a demand model involving attraction between the two channels: online and physical store. We consider a stochastic assumption for the demand of the potential market and a price-based interaction between the demands of the two channels. In contrast to previous works, within the attraction model, the channels' stochastic demands involve a nonlinear function of prices. We allow for different customer reactions when the physical store is out of stock. The contribution of the paper is twofold. First, the results highlight that the joint concavity of the retailer's expected profit function with respect to the selling prices is analytically complex to verify; we introduce a novel approach to show the existence of globally optimal prices with nonlinear demand and a nonlinear objective function. The second contribution is related to the practical applicability of the model: we run the model with various operational settings and provide managerial insights on the best pricing strategy to apply when the physical store capacity is limited.
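To fix ideas, here is a deterministic toy version of an attraction-type demand split between the two channels; the attraction function, parameter values and grid search are illustrative placeholders, not the stochastic model of the talk.

# Illustrative attraction (market-share) demand split between the online
# and physical-store channels, with a deterministic market size.
import math

def attraction(price, a, beta):
    return math.exp(a - beta * price)

def channel_demands(p_online, p_store, market_size,
                    a_online=2.0, a_store=2.2, beta=0.8, a_nobuy=1.0):
    att_o = attraction(p_online, a_online, beta)
    att_s = attraction(p_store, a_store, beta)
    total = att_o + att_s + a_nobuy          # a_nobuy = no-purchase option
    return market_size * att_o / total, market_size * att_s / total

def profit(p_online, p_store, market_size=1000, unit_cost=1.0):
    d_o, d_s = channel_demands(p_online, p_store, market_size)
    return (p_online - unit_cost) * d_o + (p_store - unit_cost) * d_s

if __name__ == "__main__":
    # Crude grid search in place of the paper's analysis of optimal prices.
    grid = [1.0 + 0.1 * k for k in range(60)]
    best = max((profit(po, ps), po, ps) for po in grid for ps in grid)
    print("profit %.1f at online price %.1f, store price %.1f" % best)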

16:00-17:30 Session 8H: Stackelberg games
Location: A1.133
16:00
Optimal location of charging station: a Single-Leader-Multi-Follower approach with cardinality constraints
PRESENTER: Aussel Didier

ABSTRACT. When deciding on the location of charging stations for electric vehicles, different conflicting aims come into play: minimal cost for investors, minimal inconvenience for EV owners, a limited number of stations, etc. In order to take all these features into account in an optimal way, we analyze different Single-Leader-Multi-Follower models in which cardinality constraints are intrinsically integrated.

16:30
Learning the Follower's Objective Function in Sequential Bilevel Games
PRESENTER: Martin Schmidt

ABSTRACT. We consider bilevel optimization problems in which the leader has no or only partial knowledge about the objective function of the follower. The studied setting is a sequential one in which the bilevel game is played repeatedly. This allows the leader to learn the objective function of the follower over time. We focus on two methods: a multiplicative weight update (MWU) method and one based on the lower level's KKT conditions, used in the fashion of inverse optimization. The MWU method requires fewer assumptions, but the convergence guarantee only concerns the objective function values, whereas the inverse KKT method requires stronger assumptions but actually allows learning the objective function itself. The applicability of the proposed methods is shown in two case studies. First, we study a repeatedly played continuous knapsack interdiction problem and, second, a sequential bilevel pricing game in which the leader needs to learn the utility function of the follower.
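A hedged sketch of the multiplicative-weights idea in this learning context: the leader maintains weights over a finite set of candidate follower objectives and downweights candidates that poorly predict the observed follower response. The candidate set, loss and learning rate below are illustrative and are not the scheme of the talk.

# Toy multiplicative weight update for identifying a follower's price
# sensitivity from repeated observations of its best response.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_rounds = 5, 200
true_idx = 2                                   # unknown to the leader

def follower_response(leader_price, sensitivity):
    # Toy follower best response: buy less when price x sensitivity grows.
    return max(0.0, 1.0 - sensitivity * leader_price)

candidates = np.linspace(0.2, 1.0, n_candidates)   # candidate sensitivities
weights = np.ones(n_candidates)
eta = 0.5

for _ in range(n_rounds):
    price = rng.uniform(0.0, 2.0)              # leader's decision this round
    observed = follower_response(price, candidates[true_idx])
    predicted = np.array([follower_response(price, c) for c in candidates])
    losses = (predicted - observed) ** 2       # squared prediction error
    weights *= np.exp(-eta * losses)           # multiplicative weights update
    weights /= weights.sum()

print("learned weights:", weights.round(3))
print("most likely candidate:", int(weights.argmax()), "(true:", true_idx, ")")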

17:00
Characterization of the Solutions of Single-Leader-Multi-Follower Games with Common Constraints

ABSTRACT. Nowadays, non-cooperative games have many applications. A particular case is the following electricity contract problem. An aggregator offers certain packages to his/her clients; each package has a limited amount of MW, and its price is decided by the aggregator. The clients choose how much to buy from each package in order to minimize their total expenses. The solutions are therefore Generalized Nash equilibrium points that depend on the package prices decided by the aggregator, who wants to maximize his/her profit. This is an example of a single-leader-multi-follower game with common constraints. In this paper, we discuss possible formulations of this kind of model. We obtain the properties satisfied by the solutions for a large class of problems, i.e., those that hold for almost all perturbations of the objective functions and the constraints. Possible numerical solution approaches are considered.

16:00-17:30 Session 8I: Invited session: Optimization in car-sharing systems: Probabilistic Models and Analysis, organized by Hanene Mohamed
Location: A1.128
16:00
Mean Field Analysis of an Incentive Policy on the Regulation of Free-Floating Car-Sharing Systems

ABSTRACT. In this talk we present a load-balancing algorithm for a closed stochastic network with two zones with different demands. The algorithm is motivated by an incentive algorithm for the redistribution of cars in a large-scale car-sharing system. The service area is divided into two zones. When cars stay too long in the low-demand zone, users are encouraged to pick them up and return them in the high-demand zone. The zones are divided into cells called stations, and the cars are the network customers. We show that, in a discrete Markovian framework, the mean-field limit, the solution of an ODE, gives the large-scale distribution of the station states in both zones under this incentive policy. An equilibrium point of this ODE is characterized via the invariant measure of a random walk in the quarter-plane. The proportion of empty and saturated stations measures how well the system is balanced. Finally, we present numerical experiments illustrating the impact of the incentive policy. Our study shows that the incentive policy is useful when the high-demand zone is experiencing a shortage of cars, but that saturation must be avoided, especially when the high-demand zone is small.

16:30
A Model for Dimensioning Resources in a Car-Sharing System with Booking in Advance and Cancellation

ABSTRACT. In station-based car-sharing systems where users pick up a car and have to return it to the same station, the parking place is reserved. As for the car, the user can reserve it in advance for a time slot, with the option of cancelling. The issue is to evaluate the number of cars needed to meet the demand in the presence of cancellations. As a first approximation, when demand is high, we assume that a user will pick up a car wherever one is available for the time slot. We therefore consider a stochastic model in which C cars (servers) are available to users (customers) arriving at rate λ. When a user arrives, if a car is available for the entire requested time slot in the future, a reservation is made for that slot; if not, the request is rejected. Before picking up the car, the reservation can be cancelled, releasing the time slot. The user can also return the car before the end of the reservation period, which is equivalent to abandoning the service. It is reasonable to assume that pre-booking times, booking slots, reneging times and abandonment times are i.i.d. with general distributions, and that inter-arrival times are i.i.d. with an exponential distribution. The number of reservations is difficult to study when C is finite. If C is infinite, this process, which generalizes the number of customers in an M/G/∞ queue, can be expressed as the number of points of a Poisson point process in some set of dimension 5. We obtain its fluid limit and fluctuations when the arrival rate becomes large. The fluid limit is strictly increasing and converges to an explicit value Cmax for large t, giving the number of cars needed. This result generalizes a result for the M/G/∞ queue by Fricker and Jaïbi [1]. The proof is based on the theory of Poisson point processes, in particular martingales associated to them, and convergence theorems. If C is finite, simulations show that there is a phase transition between two regimes: a saturated one if Cmax = C and a non-saturated one otherwise. The long-time value of the fluid limit is Cmax in the non-saturated regime, while in the saturated regime the fluid limit does not reach C; this long-time value is not known and is the object of future work. This work was carried out during Florian Verdier's internship at INRIA and Polytechnique Montreal. Paper in preparation. References: [1] C. Fricker and M. R. Jaïbi, On the fluid limit of the M/G/∞ queue, Queueing Systems, 56, 255–265, 2007.

17:00
Large Scale Analysis of a Stochastic Model for Free-Floating Car-Sharing
PRESENTER: Alessia Rigonat

ABSTRACT. Bike-sharing and car-sharing systems are an ecological alternative for urban transportation. In car-sharing systems in particular, free-floating is becoming increasingly popular, but the imbalance problem shows the need for stochastic modelling and analysis. We propose a new stochastic model suited for free-floating. Indeed, it is commonly admitted that a free-floating system can be modelled as a station-based system by dividing the service area into small geographical zones. However, in a station-based system, the stations have a fixed capacity. In contrast, the parking spaces available in free-floating car-sharing systems are those left by other users of the public space. The dynamics of car-sharing systems interact with those of private cars, revealing two different time scales. Assuming that private cars are more numerous, we have a fast process for the number of private cars in each zone and a slow process for the number of shared cars. A stochastic averaging principle governs the behavior of the free-floating cars and must be handled in the mean-field analysis; a coupling argument under an additional technical assumption is used. A phase transition with an explicit threshold is exhibited between a non-saturated regime, where free-floating cars can always be parked, and a saturated one, where free-floating cars do not find an available parking space with positive probability. This probability is completely determined by the more numerous private cars, which implies that the operator cannot act on the proportion of saturated areas, those with no parking spaces. This allows solving the dimensioning problem.

16:00-17:30 Session 8J: Graphs and Scheduling
Location: A1.140
16:00
Lot Sizing, Job Scheduling and Energy Management

ABSTRACT. We propose a generic framework for the joint handling of a lot sizing process and a job scheduling process, with application to the management of the interaction between a renewable energy platform and a set of customers.

16:30
Minimizing total completion time and makespan for a multi-scenario bi-criteria parallel machine scheduling problem
PRESENTER: Xiechen Zhang

ABSTRACT. Multi-criteria scheduling problems face enormous challenges under uncertainty in both academic and industrial settings. Despite significant practical interest, this area is still under-investigated theoretically. In this study, we investigate two new parallel machine scheduling problems, minimizing makespan and total completion time respectively, in which the number of scenarios can be infinite. The complexity of each problem is analyzed. An exact algorithm and a $(1+\epsilon)$-approximation algorithm are developed for total completion time and makespan minimization, respectively.

Then, we address a novel bi-objective parallel machine scheduling problem with infinite scenarios. Based on the results obtained for the above single-objective problems, the complexity of the problem is discussed, and two approximation algorithms are designed to find approximate Pareto sets with distinct performance guarantees (with respect to a notion of Pareto-optimal set for multi-objective optimization problems under uncertainty defined in the literature~\cite{engau2020}). We also consider some special cases and show that, for those cases, the problem can be transformed into a bi-objective scheduling problem without scenarios.

17:00
The No-Meet Matroid
PRESENTER: Walid Ben-Ameur

ABSTRACT. Consider a subset of vertices $S$ of a directed network and assume $|S|$ vehicles move on the network: they start from $S$ (one vehicle at each vertex of $S$) and at every step they move simultaneously to a neighbour vertex: we study if they can do this infinitely many times without ever meeting at any vertex (no-meet condition).\\ Meeting problems on graphs have gained their interest previously. Various results have been obtained in the area of merging or meeting of two or more tokens (or vehicles / robots in the literature). In particular, one can refer to a (deterministic) rendezvous problem where two given robots should meet at some point, given a set of instructions (see for example, ~\cite{DessmarkFKP06}). The meeting time for random walks and an arbitrary number of tokens (for connected undirected graphs) has been also evaluated ~\cite{BshoutyHW99}. We study the contrary problem, \emph{no-meet}, which turns out to be connected to matroid theory ~\cite{oxley}:\\ We show that the subsets that meet the no-meet condition constitute the independent sets of a matroid. We prove that the independence oracle is polynomial-time and provide a more efficient algorithm to compute the size of a basis, namely polynomial algorithms to decide whether a subset has the no-meet condition and to calculate a largest subset that has the no-meet condition. We also report several connections with other matroid classes. For example, we show that the class of no-meet matroids strictly contains transversal matroids while it is strictly contained inside gammoids. \\ Then we focus on the orientation problem of the undirected edges of a mixed graph to either maximize or minimize the rank of a subset. Some NP-hardness results and inapproximability results are proved in the general case. Polynomial-time algorithms are described for subsets of size $1$.