
09:00-10:30 Session 10A: Mean field games and applications, I (invited session organized by Guilherme Mazanti and Laurent Pfeiffer)
Location: Amphi I
Fine Properties of the Value Function in Mean-Field Optimal Control

ABSTRACT. In this talk, I will present some of the results of our recent paper \cite{BF2021} with H. Frankowska. Therein, following the seminal paper \cite{CF1991}, we investigate some of the fine regularity and structure properties of the value function associated to a mean-field optimal control problem of Mayer type, formulated in the Wasserstein space of probability measures.

After motivating the study of this class of problems and briefly positioning the underlying models with respect to the literature, I will shortly review some necessary tools of optimal transport theory and mean-field control. Then, I shall present two families of results concerning the regularity properties of the value function. First, I will show that when the data of the optimal control problem is sufficiently regular, the value function is \textit{semiconcave} with respect to both variables. In this case, the semiconcavity with respect to the measure variable is understood in the sense of the intrinsic geometry of Wasserstein spaces, following the monograph \cite{AGS2008}. Then, I will establish the so-called \textit{sensitivity relations} which link the costates of the Pontryagin maximum principle to adequate superdifferentials of the value function. If time permits, I will end the talk by discussing relevant applications of these results in control theory.

A Mean field game model for the evolution of cities

ABSTRACT. We propose a (toy) MFG model for the evolution of resident and firm densities, coupled both by labour market equilibrium conditions and by competition for land use (congestion). This results in a system of two Hamilton-Jacobi-Bellman and two Fokker-Planck equations with a new form of coupling related to optimal transport. This MFG has a convex potential which enables us to find weak solutions by a variational approach. In the case of quadratic Hamiltonians, the problem can be reformulated in Lagrangian terms and solved numerically by an IPFP/Sinkhorn-like scheme.
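The IPFP/Sinkhorn-like scheme mentioned above can be illustrated on a toy entropic optimal transport problem. The sketch below is our own minimal illustration (function names, iteration counts and the 3-point example are ours, not the authors'): it alternately rescales the rows and columns of a Gibbs kernel until the coupling matches both marginals.

```python
import math

def sinkhorn(mu, nu, cost, eps=1.0, iters=1000):
    """Entropic optimal transport via Sinkhorn iterations.

    Gibbs kernel K[i][j] = exp(-cost[i][j]/eps); alternately rescale rows
    and columns so the coupling P = diag(u) K diag(v) matches mu and nu.
    """
    n, m = len(mu), len(nu)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# toy example: two 3-point densities on a line, quadratic cost
mu = [0.5, 0.3, 0.2]
nu = [0.2, 0.3, 0.5]
cost = [[(i - j) ** 2 for j in range(3)] for i in range(3)]
P = sinkhorn(mu, nu, cost)
row_sums = [sum(row) for row in P]  # converges to mu
```

Larger problems and smaller regularization require stabilized (log-domain) iterations; this plain version is only meant to show the structure of the scheme.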

Reinforcement learning method for Mean Field Games

ABSTRACT. We will present recent advances on learning methods for computing solutions of mean field games. Theoretical and empirical results concerning algorithms such as Fictitious Play and Online Mirror Descent will be discussed. Numerical experiments in different fields, such as flocking, crowd modeling, predator-prey dynamics and vehicle routing, will be presented.
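Fictitious play, one of the algorithms discussed here, can be shown on a two-route congestion toy model (entirely our own illustration, unrelated to the authors' experiments): each round, the population best-responds to the historical average load, and the running average converges to the equilibrium split.

```python
def route_costs(x):
    """Congestion costs when a fraction x of the population uses route 1."""
    return x, 0.5 + 0.5 * (1.0 - x)  # cost of route 1, cost of route 2

def fictitious_play(rounds=20000):
    """Fictitious play: best-respond to the time-averaged play, then average."""
    x_bar = 0.0  # running average of best responses (fraction on route 1)
    for k in range(1, rounds + 1):
        c1, c2 = route_costs(x_bar)
        br = 1.0 if c1 < c2 else 0.0  # best response to the averaged play
        x_bar += (br - x_bar) / k     # incremental averaging
    return x_bar

# equilibrium solves x = 0.5 + 0.5*(1 - x), i.e. x* = 2/3
x_eq = fictitious_play()
```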

09:00-10:30 Session 10B: Flexibility management in power systems (invited session organized by Nadia Oudjane and Cheng Wan)
Location: Amphi II
Optimal Distributed Energy Resource Coordination: A Hierarchical Decomposition Method Based on Distribution Locational Marginal Costs

ABSTRACT. In this work, we consider the day-ahead operational planning problem of a radial distribution network hosting Distributed Energy Resources (DERs) including rooftop solar and storage-like loads, such as electric vehicles. We present a novel hierarchical decomposition method that is based on a centralized AC Optimal Power Flow (AC OPF) problem interacting iteratively with self-dispatching DER problems adapting to real and reactive power Distribution Locational Marginal Costs. We illustrate the applicability and tractability of the proposed method on an actual distribution feeder, while modeling the full complexity of spatiotemporal DER capabilities and preferences, and accounting for instances of non-exact AC OPF convex relaxations. We show that the proposed method achieves optimal Grid-DER coordination, by successively improving feasible AC OPF solutions, and discovers spatiotemporally varying marginal costs in distribution networks that are key to optimal DER scheduling by modeling losses, ampacity and voltage congestion, and, most importantly, dynamic asset degradation.

Model Predictive Control Framework for Congestion Management with Large Batteries in Sub-Transmission Grid

ABSTRACT. RTE is building three large battery storage systems (10MW/20MWh) that will be put into operation in 2021/2022. These batteries, together with intermittent renewable generation curtailment and line switching, will be used to manage congestion in three small subtransmission zones (63kV or 90kV). A local controller sends orders to the battery, to power plants and to switches every 15 seconds, using all the flexibility offered by permanent and emergency ratings, including Dynamic Line Rating when available. The controller's decision algorithm is Model Predictive Control: every 15 seconds, a DC approximation model of the grid is computed based on the latest real-time measurements; then a Mixed Integer Programming model is built, taking into account the delays of actions. This local controller does not have any forecast and is not able to manage preventive actions, so a higher-level scheduler will be in charge of security analysis (N-1 analysis), battery preventive actions, and pre-discharging the battery ahead of forthcoming grid congestion.

Approximate Nash equilibria in large nonconvex aggregative games

ABSTRACT. This paper shows the existence of $\mathcal{O}(\frac{1}{n^\gamma})$-Nash equilibria in $n$-player noncooperative sum-aggregative games in which each player's cost function depends only on her own action and the average of all players' actions, is lower semicontinuous in the former, and is $\gamma$-H\"{o}lder continuous in the latter. Neither the action sets nor the cost functions need to be convex. For an important class of sum-aggregative games, which includes congestion games with $\gamma$ equal to 1, a gradient-proximal algorithm is used to construct an $\mathcal{O}(\frac{1}{n})$-Nash equilibrium with at most $\mathcal{O}(n^3)$ iterations. These results are applied to a numerical example of demand-side management of the electricity system. The asymptotic performance of the algorithm is illustrated as $n$ tends to infinity.

09:00-10:30 Session 10C: Flexibility and pricing in electricity system models (invited session organized by Olivier Beaude)
Location: A1.134
Long-term planning in the electricity system: introduction of a few key modeling aspects

ABSTRACT. Long-term planning of the electricity system is a vast research field, including among others the class of Generation Expansion Planning (GEP) models. In a rapidly changing context, with greater variability and uncertainty due to the significant increase of renewable energy capacities and the ambition of making part of the consumption "flexible" (i.e., directly controllable by a system operator or incentivized based on the system's needs), multiple key modelling questions arise. An introduction to a few of them, related to current research efforts at EDF R&D, will be given: the use of clustering methods to decrease the size of the considered GEP optimization problems; the "proper" representation of an aggregate of flexible individual consumers; and the integration of local, mostly network-based, constraints.

Decision-making Oriented Clustering: Application to Pricing and Power Consumption Scheduling

ABSTRACT. Data clustering is an instrumental tool in the area of energy resource management. One problem with conventional clustering is that it does not take the final use of the clustered data into account, which may lead to a very suboptimal use of energy or computational resources. When clustered data are used by a decision-making entity, it turns out that significant gains can be obtained by tailoring the clustering scheme to the final task performed by the decision-making entity. The key to good final performance is to automatically extract the attributes of the data space that are inherently relevant to the subsequent decision-making entity, and to partition the data space based on these attributes instead of on predefined conventional metrics. For this purpose, we formulate the framework of decision-making oriented clustering and propose an algorithm providing a decision-based partition of the data space and good representative decisions. By applying this novel framework and algorithm to a typical problem of real-time pricing and to that of power consumption scheduling, we obtain several insightful analytical results, such as the expression of the best representative price profiles for real-time pricing, and a very significant reduction in the number of clusters required to perform power consumption scheduling, as shown by our simulations.

Network Games Equilibrium Computation: Duality Extension and Coordination

ABSTRACT. We formulate a generic network game as a generalized Nash equilibrium problem. Relying on the normalized Nash equilibrium as solution concept, we provide a parametrized proximal algorithm to span many equilibrium points [1]. Complexifying the setting, we consider an information structure in which the agents in the network can withhold some local information involving sensitive data, resulting in private coupling constraints. The convergence of the algorithm and the deviations in the players' strategies at equilibrium are formally analyzed. In addition, an extension of duality theory enables us to use the algorithm to coordinate the agents, through a fully distributed pricing mechanism, on one specific equilibrium with desirable properties at the system level (economic efficiency, fairness, etc.). To that purpose, the game is recast as a hierarchical decomposition problem in the same spirit as [3], and a procedure is detailed to compute the equilibrium that minimizes a secondary cost function capturing system-level properties. Finally, applications are presented to a) peer-to-peer energy trading [2], and b) Transmission-Distribution System Operator markets for flexibility procurement [4].


[1] H. Le Cadre, Y. Mou, and H. Höschle, Parameterized Inexact-ADMM based Coordination Games: A Normalized Nash Equilibrium Approach, European Journal of Operational Research, 296(2):696–716, 2022.

[2] H. Le Cadre, P. Jacquot, C. Wan, and C. Alasseur, Peer-to-Peer Electricity Market Analysis: From Variational to Generalized Nash Equilibrium, European Journal of Operational Research, 282(2):753–771, 2020.

[3] L. Pavel, An Extension of Duality to a Game-Theoretic Framework, Automatica, 43(2):226–237, 2007.

[4]  A. Sanjab, H. Le Cadre, and Y. Mou, TSO-DSOs Stable Cost Allocation for the Joint Procurement of Flexibility: A Cooperative Game Approach, arXiv Preprint, 2021.


09:00-10:30 Session 10D: Optimization, learning and data sciences
Location: A1.116
Linearly-constrained Linear Quadratic Regulator from the viewpoint of kernel methods

ABSTRACT. Where does machine learning meet optimal control? We show in this talk that linearly controlled trajectory spaces are vector-valued reproducing kernel Hilbert spaces when equipped with the scalar product corresponding to the quadratic cost. The associated LQ kernel is related to the inverse of the Riccati matrix \cite{aubin2020Riccati} and to the controllability Gramian \cite{aubin2020hard_control}. Through a simple \emph{representation theorem}, this kernel allows us to deal with the difficult case of affine state constraints in linear-quadratic (LQ) optimal control problems with varying time \cite{aubin2020hard_control}. Numerically, this allows an exact continuous-time solution of path-planning problems. We will thereby present a new link between kernel methods and optimal control.
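To make the LQ setting concrete, here is a minimal scalar discrete-time Riccati recursion (our own sketch; the talk's kernel formulation and the cited papers are not reproduced here). The value-function coefficients P_t play the role of the Riccati matrix whose inverse the LQ kernel is related to.

```python
def lqr_riccati(a, b, q, r, horizon):
    """Backward Riccati recursion for the scalar discrete-time LQ problem
    min sum_t q*x_t^2 + r*u_t^2, subject to x_{t+1} = a*x_t + b*u_t.
    Returns the value-function coefficients P_t (V_t(x) = P_t*x^2) and the
    feedback gains K_t (optimal control u_t = -K_t*x_t)."""
    P = [0.0] * (horizon + 1)  # zero terminal cost
    K = [0.0] * horizon
    for t in reversed(range(horizon)):
        K[t] = (b * P[t + 1] * a) / (r + b * b * P[t + 1])
        P[t] = q + a * a * P[t + 1] - a * b * P[t + 1] * K[t]
    return P, K

P, K = lqr_riccati(a=1.0, b=1.0, q=1.0, r=1.0, horizon=50)
# for long horizons, P[0] approaches the stationary solution of
# p = q + a^2*p - (a*b*p)^2 / (r + b^2*p); here p* = (1 + sqrt(5))/2
```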

Factored Model-based RL for Tandem Queues Resource Allocation

ABSTRACT. Managing the energy consumption and Quality of Service (QoS) in data centers is a crucial task nowadays. Indeed, these centers are usually oversized and cause over-consumption representing a significant part of global electricity consumption. Turning virtual machines on and off is a usual way to reduce energy consumption while guaranteeing a given QoS. We consider a three-tier network architecture modeled with two physical nodes in tandem, where an autonomous agent controls the number of active resources on both nodes. We analyse the learning of auto-scaling strategies in order to optimise both the performance and the energy consumption of the whole system. We consider reinforcement learning (RL) [1] techniques to address the learning of optimal policies. A listing of RL methods for auto-scaling policy search in cloud scenarios can be found in [2]; we observe that almost all applied methods are model-free. Our intention is to consider and compare structured model-based RL methods, to see whether they can indeed accelerate convergence, but also to tackle the curse of dimensionality arising in large-scale cloud scenarios. For that purpose, we want to take advantage of the environment structure in our model and methodology to accelerate policy learning. We consider the factored MDP framework [3] to capture the local relational structure between environment variables. After modeling the tandem-queue system in a factored manner, we consider two paradigms: a first approach where the controller knows the statistics of the system and can apply MDP algorithms (Value Iteration), and a second approach where the controller does not know the dynamics of the environment and has to rely on reinforcement learning methods. In the first approach, we compare classical MDP solutions such as Value Iteration with a factored version called Factored Value Iteration [4].
For the reinforcement learning approach, we develop a factored MDP online method based on the Factored Rmax algorithm and compare it to online MDP methods and model-free Q-Learning. We demonstrate the gains of such a factored approach (convergence, memory footprint) in numerical experiments with arbitrary cloud scenarios.
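As background for the first (model-known) paradigm, a minimal value iteration on a toy two-state MDP is sketched below; the transition and reward numbers are invented for illustration and do not come from the paper's tandem-queue model.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-10):
    """Value iteration on a finite MDP.
    P[s][a][s2] = transition probability, R[s][a] = expected reward."""
    n_s, n_a = len(P), len(P[0])
    V = [0.0] * n_s
    while True:
        V_new = [max(R[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(n_s))
                     for a in range(n_a)) for s in range(n_s)]
        if max(abs(V_new[s] - V[s]) for s in range(n_s)) < tol:
            return V_new
        V = V_new

# invented "server on/off" toy: state 0 = idle, state 1 = busy;
# action 0 = keep off (cheap, may miss jobs), action 1 = keep on (costly)
P = [[[0.9, 0.1], [0.7, 0.3]],
     [[0.4, 0.6], [0.8, 0.2]]]
R = [[0.0, -1.0],
     [-5.0, -2.0]]
V = value_iteration(P, R)
```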

Tropical linear regression and low-rank approximation --- a first step in tropical data analysis

ABSTRACT. Tropical data arise naturally in many areas, such as control theory, phylogenetic analysis, machine learning and economics. However, many fundamental problems still deserve further investigation, and more powerful tools need to be developed. In this talk, as a first step in tropical data analysis, we introduce two useful models, namely tropical linear regression and tropical low-rank approximation.

More precisely, for a collection V of finitely many points, the tropical linear regression problem is to find a best tropical hyperplane approximation of V. We will establish a strong duality theorem, showing that the best approximation error coincides with the maximal radius of a Hilbert ball contained in the tropical polyhedron spanned by V. Algorithmically speaking, this regression problem is polynomial-time equivalent to mean payoff games. As an application, we illustrate our results by solving an inverse problem from auction theory.

Another important tool we will study is tropical low-rank approximation. We will systematically discuss the relations among different notions of rank in the tropical setting. In particular, we will reveal a close relation between tropical linear regression and best rank-2 approximation, which provides us with an efficient algorithm for finding a best rank-2 approximation of a given matrix.
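The max-plus ("tropical") operations underlying these models are easy to state in code. The sketch below (our illustration, not the authors' algorithms) implements the tropical matrix product and builds a tropically rank-1 matrix as an outer "sum", the building block of low-rank approximation in this setting.

```python
NEG_INF = float("-inf")

def maxplus_matmul(A, B):
    """Tropical (max-plus) matrix product: (A @ B)[i][j] = max_k A[i][k] + B[k][j]."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank_one(u, v):
    """Tropical rank-1 matrix: the outer 'sum' M[i][j] = u[i] + v[j]."""
    return [[ui + vj for vj in v] for ui in u]

M = rank_one([0.0, 1.0, 3.0], [2.0, 0.0])  # tropically rank-1 by construction
# tropical identity matrix: 0 on the diagonal, -inf elsewhere
I = [[0.0, NEG_INF], [NEG_INF, 0.0]]
```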

09:00-10:30 Session 10E: Filtering, surveillance and simulation
Location: A1.122
A low-cost second-order multi-object tracking filter for WAMI surveillance

ABSTRACT. This paper describes a second-order multi-object tracking filter for wide-area motion imagery (WAMI) applications, built on top of a recently proposed, computationally efficient second-order multi-object filter that provides first- and second-order information on the number of targets. The filters described in this document are multi-target tracking algorithms that accurately estimate the number of objects along with its dispersion (second-order factorial cumulant), in addition to the object states, in tracking scenarios affected by false alarms, misdetections and noise. We incorporate the following extensions to the cumulant-based filter for WAMI applications: 1) different strategies to deal with environments where objects are poorly distinguishable, 2) a labeling process for objects from stochastic populations via the theory of marked point processes, 3) a principled track manager to work around the track-persistence limitations of PHD filters in complex scenarios. The resulting filter is well suited to tracking solutions based on a fully probabilistic Bayesian approach to visual tracking. This report describes the extensions made to the filter and demonstrates results that are competitive with state-of-the-art methods.

Cross-entropic optimization of landmarks selection for efficient navigation in passive estimation context

ABSTRACT. Passive target estimation is a widely investigated problem of practical interest, for which particle filters represent a popular class of methods. An adaptation of the Laplace Particle Filter is proposed for angle-only navigation by means of landmarks. In this specific context, a high number of aiding landmarks or features can be hard to handle in terms of computational cost. This contribution addresses the key issue of optimizing the choice of landmarks along the navigation steps in order to maximize navigation efficiency. In its general form, this is a partially observable decision process optimization, which is known to be a difficult problem, but approaches based on the Posterior Cram\`er Rao Bound (PCRB) offer a good compromise in terms of efficiency for problems of moderate difficulty. However, PCRB-based criteria are generally non-convex and not well suited to generic mathematical approaches. We propose here an optimisation approach based on the cross-entropy (CE) algorithm. We have already applied CE, in combination with Dynamic Bayesian Networks as the sampling family, to partially observable decision processes. In this contribution, more efficiency is achieved by means of the PCRB, and we also introduce some refinements in the CE update stage to improve convergence.
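The cross-entropy algorithm can be sketched in its basic form on a 1-D toy objective (our illustration; the actual contribution combines CE with PCRB-based criteria and is not reproduced here): sample candidates from a Gaussian, keep the elite, refit the Gaussian, repeat.

```python
import random
import statistics

def cross_entropy_min(f, mu=0.0, sigma=5.0, n_samples=100, n_elite=10,
                      iters=40, seed=0):
    """Cross-entropy method for minimization: sample from N(mu, sigma),
    refit the Gaussian on the elite (lowest-cost) samples, repeat."""
    rng = random.Random(seed)
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        elite = sorted(xs, key=f)[:n_elite]
        mu = sum(elite) / len(elite)
        sigma = statistics.pstdev(elite) + 1e-12  # avoid degenerate collapse
    return mu

# toy objective with minimizer x* = 2
x_star = cross_entropy_min(lambda x: (x - 2.0) ** 2)
```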

Stochastic simulation for discrete event systems based on maintenance optimization

ABSTRACT. Industrial systems are subject to faults and failures of their components, which can lead to system unavailability and increased intervention costs. A suitable maintenance strategy is therefore a good way to reduce such costs and to increase the availability of the system. Combining different kinds of maintenance policies on the components of a system can be a good solution; nevertheless, it has to be finely analyzed in order to find the optimal maintenance strategies for the system according to specified criteria (e.g. availability, cost). In this publication, we illustrate how the combination of a simulation tool based on stochastic discrete event systems and an optimization algorithm can be used to find (one of) the best maintenance strategies. We propose a solution that optimizes system availability using an evolutionary algorithm: a stochastic simulator performs calculations according to parameters provided by the optimization algorithm, which plans preventive maintenance schedules. The optimization algorithm provides the optimal maintenance scenario, defined by the kind of maintenance to apply and the suitable schedules. The experiments show that the simulation-based optimization algorithm gives more flexibility to the decision maker.
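A minimal stochastic simulator of the kind combined with the optimizer can be sketched for a single repairable component (our toy, not the paper's tool): exponential failures and repairs, with the empirical availability converging to the analytic steady-state value mu / (lambda + mu).

```python
import random

def simulate_availability(failure_rate, repair_rate, horizon, seed=0):
    """Monte Carlo simulation of a single repairable component with
    exponential time-to-failure and time-to-repair; returns the fraction
    of the horizon during which the component is up."""
    rng = random.Random(seed)
    t, up_time, up = 0.0, 0.0, True
    while t < horizon:
        rate = failure_rate if up else repair_rate
        dt = min(rng.expovariate(rate), horizon - t)  # next failure/repair
        if up:
            up_time += dt
        t += dt
        up = not up
    return up_time / horizon

a_hat = simulate_availability(failure_rate=0.1, repair_rate=1.0,
                              horizon=200000.0)
# steady-state availability: mu / (lambda + mu) = 1.0 / 1.1
```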

09:00-10:30 Session 10F: Electric and Efficient Mobility
Location: A1.128
Management of an electric vehicle fleet by a decomposition method approach

ABSTRACT. The objective of this work is to build a model to jointly optimize the size of a fleet of electric vehicles and the sizing of the dedicated electric chargers. The fleet must satisfy a demand for passenger transport. Trips satisfying the demand are modeled, and relocation trips are also added. The objective function includes the charging cost, vehicle maintenance and investment costs, trip durations for passengers, and the investment costs of the chargers. Because of the size of the problem, a decomposition approach is used.

Shared autonomous electric vehicles transport scheduling with charging considerations: optimization-based planning approaches

ABSTRACT. Car-sharing services are becoming increasingly popular as a result of the ubiquitous development of the communication technologies that allow the easy exchange of information required for these systems to work. The adoption of autonomous driving technology for providing car-sharing services is expected to further accelerate the widespread adoption of this transport model. Autonomous driving means that vehicles can move to pick up customers autonomously, without the need to rest for long periods and without the need for parking slots. Studies have shown that car-sharing services using autonomous driving can significantly improve fuel efficiency and decrease the need for parking spaces in cities. Moreover, as autonomous vehicles are expected to operate with electric power, they have the potential to significantly decrease greenhouse-gas emissions in the transport sector. The increased electrification of the transport system due to the adoption of shared autonomous transportation has significant implications for the electric power systems. In addition to efficient passenger transport, shared autonomous electric vehicles (SAEVs) provide a significant opportunity to act as moving batteries and to add electricity storage to the grid that would counterbalance the intermittence of renewable energy sources, such as wind and solar. If correctly managed, SAEVs can contribute to power grid flexibility, peak load reduction, and the provision of ancillary services such as frequency regulation. This would enable major cost savings for both the transportation and energy systems, enable higher renewable energy integration and production, and offer overall more resilient systems capable of withstanding and recovering from disruptions. However, a poor coordination between the two systems, in particular with regards to the electricity charging schedule of the SAEVs, may compromise both the grid stability and passenger travel times.
In this research, we investigate the potential interaction between SAEVs and the power grid in terms of charge scheduling and the ability to supply operating reserve. To achieve this, we propose an integrated modeling and optimization framework to co-optimize the charging schedule, taking into account the SAEV transport demand and charging requirements. We show how a variety of modeling approaches can be used to address this problem, each with its own advantages and limitations. Our results show the importance of the coordination between the electric power system and the transportation system in terms of reducing costs, satisfying travel demand and ensuring the robust operation of both systems. This is particularly true for systems that are projected to depend heavily on electrified mobility.

Synchronizing Energy Production and Vehicle Routing

ABSTRACT. We deal here with the synchronization of solar hydrogen production by a micro-plant and the consumption of this production by a fleet of autonomous vehicles in charge of internal logistic tasks. More precisely, we consider one vehicle, with limited tank capacity, which must visit stations within some time horizon. Travelling times and the related energy amounts are known, so the vehicle manager must route the vehicle and schedule refueling transactions, which induce a detour via the micro-plant. This micro-plant produces H2 in situ according to time-dependent costs and production ratios, and stores it in a tank with limited capacity. The resulting Synchronized Energy Production and Consumption Problem consists in computing the route G of the vehicle and scheduling both the refueling transactions and the H2 production, while meeting production/consumption constraints and minimizing some global cost.

We first handle this problem while supposing G fixed and the production ratio deterministic. We propose for this a centralized synchronized dynamic programming scheme, pseudo-polynomial in time in the sense of polynomial approximation (PTAS). Next, we adopt a collaborative point of view and decompose the problem according to different possible collaborative schemes. We then deal with the uncertainty issue through a notion of climatic scenario, whose purpose is to capture correlations. Finally, we consider G as part of the problem and design a bi-level resolution scheme which makes the vehicle endorse the leading role.

09:00-10:30 Session 10G: Mixed Integer Programming for Applications
Location: A1.133
Mixed Integer Nonlinear Approaches for the Satellite Constellation Design Problem

ABSTRACT. In this paper, we address the satellite constellation design problem, i.e., we aim to compute the minimal number of satellites (and, incidentally, their 3D placements) needed to observe a given region on the Earth. The hard constraint of the problem is related to the so-called revisiting time: the Earth region must be observed at least once in every time interval of given width. The observed Earth region is discretized as a set of pixels, i.e., points on the Earth's surface represented by 3D coordinates. [1] already proposed an approach based on a Binary Linear Programming (BILP) formulation optimizing only one variable; in our case, we want to optimize simultaneously the 4 variables corresponding to the Keplerian orbital parameters. Other approaches proposed in the literature mainly concern genetic algorithm methodology; see for instance [2, 3], which optimize 4 variables. Moreover, to the best of our knowledge, this paper represents the first attempt to address the satellite constellation design problem through Mixed Integer Nonlinear Programming (MINLP). We define two MINLP formulations: the first one is based on a direct mathematical definition of pixel observability within a given revisiting time; in the second formulation, we introduce a set of indicator variables which indicate whether a satellite observes a pixel at a given time stamp. In this latter formulation we aim to minimize the number of active indicator variables, enforcing observability of all the pixels within the revisiting time. We furthermore discretize the possible positions of the satellites, so that we obtain a large-scale Mixed-Integer Linear Programming (MILP) problem. Finally, we conclude by suggesting several insights toward a (mat)heuristic leveraging the second formulation.

References

[1] H. W. Lee, S. Shimizu, S. Yoshikawa, and K. Ho, Satellite Constellation Pattern Optimization for Complex Regional Coverage, Journal of Spacecraft and Rockets, 57(6):1309–1327, 2020.

[2] N. Hitomi and D. Selva, Constellation optimization using an evolutionary algorithm with a variable-length chromosome, IEEE Aerospace Conference, 1–12, 2018.

[3] T. Savitri, Y. Kim, S. Jo, and H. Bang, Satellite constellation orbit design optimization with combined genetic algorithm and semianalytical approach, International Journal of Aerospace Engineering, 2017:1235692, 2017.

Efficient Design of a Military Multi-technology Network through MILP

ABSTRACT. In a military theater of operations, a set of mobile units with different communication technologies (5G, UHF, VHF, etc.) are deployed and have to be able to communicate with each other and with a military base. The network of units and the base is connected through a set of communication relays of multiple technologies that have to cover the base and the maximum number of units. Communication relays can be embarked on HAPS, which are innovative systems (e.g., drones, balloons) deployed at a fixed, high altitude.

To maximize unit coverage over time, we introduce and study the Network Planning Problem (NPP). This problem consists in deciding the deployment of the HAPS over a set of positions and the placement of one or more relays in each deployed HAPS, within the limits of its capacities. Through their relays, the HAPS must form a connected network with the base.

We propose a MILP formulation for the NPP with multi-commodity flow variables. In order to strengthen the model, we introduce valid and symmetry-breaking inequalities. We also perform an experimental study and develop a visualization tool. Preliminary tests show that the difficulty of solving the problem is related to the number of positions and the range of communication devices.

MIP formulations for Energy Optimization in Multi-Hop Wireless Networks

ABSTRACT. Network coding improves the quality of service of wireless networks. In this work, we consider energy-efficient routing with network coding. In the routing process, network coding is employed to reduce the number of data transmissions, yielding lower energy consumption. This study leads to a new combinatorial optimization problem, namely the wireless unsplittable multi-commodity flow with network coding (wUMCFC) problem.

We solve the wUMCFC problem by an integer programming approach. We propose several compact mixed-integer formulations: one mixed-Boolean quadratic programming (MBQP) formulation and two mixed-integer linear programming formulations, the edge linearization formulation and the edge balance formulation. The edge linearization formulation is a direct linearization of the MBQP formulation via McCormick envelopes of the bilinear terms, and we prove that the edge balance formulation is stronger than the edge linearization formulation. To scale up to large networks, we derive a path-based reformulation of the edge balance formulation and propose an exact approach based on the branch-and-price algorithm.

The branch-and-price based wUMCFC solver is open source. Our computational tests show the speedup of wUMCFC over CPLEX on large networks. On the simulated instances, the use of network coding reduces the routing energy cost by 18.3% on average.
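The linearization step via McCormick envelopes can be made concrete as follows (a generic textbook construction, sketched by us, not the paper's code): for the bilinear term w = x*y on a box, four inequalities give valid lower and upper bounds on w that are exact at the corners of the box.

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """McCormick envelope of the bilinear term w = x*y on the box
    [xl, xu] x [yl, yu]: returns the implied (lower, upper) bounds on w."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

# the envelope contains the true product in the interior of the box ...
lo, up = mccormick_bounds(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)
# ... and is exact at the corners of the box
lo_c, up_c = mccormick_bounds(1.0, 1.0, 0.0, 1.0, 0.0, 1.0)
```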

09:30-10:30 Session 11: Evolutionary and black-box algorithms, I
Location: A1.139
A First Theoretical Analysis of the Non-Dominated Sorting Genetic Algorithm II (NSGA-II)

ABSTRACT. The non-dominated sorting genetic algorithm II (NSGA-II)~\cite{DebPAM02} is the most intensively used multi-objective evolutionary algorithm (MOEA) in real-world applications~\cite{ZhouQLZSZ11}. However, in contrast to several simple MOEAs that have been analyzed via mathematical means~\cite{Brockhoff11bookchapter}, no such study exists for the NSGA-II so far. In this work, we show that mathematical runtime analyses are feasible also for the NSGA-II. As particular results, we prove that with a population size larger than the Pareto front size by a constant factor, the NSGA-II with two classic mutation operators and three different ways to select the parents satisfies the same asymptotic runtime guarantees as the SEMO and GSEMO algorithms on the basic OneMinMax and LeadingOnesTrailingZeroes benchmark functions. However, if the population size is only equal to the size of the Pareto front, then the NSGA-II cannot efficiently compute the full Pareto front (for an exponential number of iterations, the population will always miss a constant fraction of the Pareto front). Our experiments confirm these findings.

This work is part of the PGMO-funded project \emph{Understanding and Developing Evolutionary Algorithms via Mathematical Runtime Analyses} (PI: Benjamin Doerr). The work of the second author was done during her internship at the LIX (funded by the PGMO project).
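For readers unfamiliar with the baselines, a minimal GSEMO on the OneMinMax benchmark can be sketched as below (our illustration; NSGA-II itself, with non-dominated sorting and crowding distance, is more involved and not reproduced here). On OneMinMax every search point is Pareto-optimal, so the archive simply has to collect all n+1 objective values.

```python
import random

def one_min_max(x):
    """Bi-objective OneMinMax: maximize both the number of zeros and of ones."""
    ones = sum(x)
    return (len(x) - ones, ones)

def gsemo(n=10, max_iters=200000, seed=1):
    """Global SEMO: keep an archive of one point per non-dominated objective
    value; mutate a uniformly random archive member by standard bit mutation
    with rate 1/n."""
    rng = random.Random(seed)
    x0 = [rng.randint(0, 1) for _ in range(n)]
    pop = {one_min_max(x0): x0}
    for _ in range(max_iters):
        parent = pop[rng.choice(list(pop))]
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        # on OneMinMax no archive point is ever dominated,
        # so the archive just fills up the Pareto front
        pop.setdefault(one_min_max(child), child)
        if len(pop) == n + 1:  # all n+1 Pareto-optimal values found
            break
    return sorted(pop)

front = gsemo()
```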

Simulated Annealing: a Review and a New Scheme

ABSTRACT. Finding the global minimum of a non-convex optimization problem is a notoriously hard task appearing in numerous applications, from signal processing to machine learning. Simulated annealing (SA) is a family of stochastic optimization methods where an artificial temperature controls the exploration of the search space while preserving convergence to the global minima. SA is efficient, easy to implement, and theoretically sound, but suffers from a slow convergence rate. The purpose of this work is two-fold. First, we provide a comprehensive overview on SA and its accelerated variants. Second, we propose a novel SA scheme called curious simulated annealing, combining the assets of two recent acceleration strategies. Theoretical guarantees of this algorithm are provided. Its performance with respect to existing methods is illustrated on practical examples.
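The basic SA loop that the review starts from can be sketched as follows (our generic illustration with an invented 1-D multimodal objective; the proposed "curious simulated annealing" scheme is not reproduced): random proposals, Metropolis acceptance, geometric cooling.

```python
import math
import random

def simulated_annealing(f, x0, step=1.0, t0=1.0, cooling=0.999,
                        iters=20000, seed=0):
    """Basic SA: uniform random perturbation, Metropolis acceptance rule,
    geometric cooling schedule; tracks the best point seen."""
    rng = random.Random(seed)
    x, fx, temp = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        fy = f(y)
        # always accept improvements; accept worsenings with prob e^(-df/T)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling
    return best_x, best_f

# invented 1-D multimodal test function with global minimum at x = 0
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
x_best, f_best = simulated_annealing(f, x0=3.0)
```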

10:30-11:00 Coffee Break
11:00-12:30 Session 12A: Mean field games and applications, II (invited session organized by Guilherme Mazanti and Laurent Pfeiffer)
Location: Amphi I
MFG Systems with Interactions through the Law of Controls: Existence, Uniqueness and Numerical Simulations

ABSTRACT. This talk is dedicated to a class of games in which agents may interact through the law of their states and controls; we use the terminology mean field games of controls (MFGC for short) to refer to this class of games. We first give existence and uniqueness results under various sets of assumptions. We introduce a new structural condition, namely that the optimal dynamics depend on the law of controls in a Lipschitz way, with a Lipschitz constant smaller than one. In this case, we give several existence results for the solutions of the MFGC system, and one uniqueness result under a short-time-horizon assumption. Then, under a monotonicity assumption on the interactions through the law of controls (which can be interpreted as the adaptation of the Lasry-Lions monotonicity condition to the MFGC system), we prove existence and uniqueness of the solution of the MFGC system. Finally, numerical simulations of a model of crowd motion are presented, in which an agent is more likely to go in the mainstream direction (which contradicts the above-mentioned monotonicity condition). Original behaviors of the agents are observed, leading to non-uniqueness or queue formation.

Generalized conditional gradient and learning in potential mean field games

ABSTRACT. We apply the generalized conditional gradient algorithm to potential mean field games and we show its well-posedness. It turns out that this method can be interpreted as a learning method called fictitious play. More precisely, each step of the generalized conditional gradient method amounts to computing the best response of the representative agent, for a predicted value of the coupling terms of the game. We show that for the learning sequence $\delta_k = 2/(k+2)$, the potential cost converges in $O(1/k)$, and the exploitability and the variables of the problem (distribution, congestion, price, value function and control terms) converge in $O(1/\sqrt{k})$, for specific norms.
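The learning sequence $\delta_k = 2/(k+2)$ can be illustrated on a toy problem. The sketch below runs the plain (not generalized) conditional gradient method on the probability simplex: the linear minimization step plays the role of the best response, which is then averaged into the current iterate with weight $\delta_k$, exactly the fictitious-play interpretation. The quadratic objective is an illustrative stand-in for the MFG potential:

```python
# Conditional gradient (Frank-Wolfe) on the probability simplex with the
# open-loop step size delta_k = 2/(k+2) from the abstract.
def frank_wolfe(grad, n, iters=500):
    x = [1.0 / n] * n                          # start at the barycenter
    for k in range(iters):
        g = grad(x)
        i = min(range(n), key=lambda j: g[j])  # best response: vertex e_i
        delta = 2.0 / (k + 2)                  # learning sequence
        x = [(1 - delta) * xj for xj in x]     # average best response in
        x[i] += delta
    return x

# Toy potential f(x) = 0.5 * sum_j (x_j - c_j)^2 with c in the simplex,
# so the minimizer over the simplex is x = c.
c = [0.5, 0.3, 0.2]
grad = lambda x: [x[j] - c[j] for j in range(3)]
x = frank_wolfe(grad, 3)
```

The classical guarantee $f(x_k) - f^* \le 2LD^2/(k+2)$ gives the $O(1/k)$ decay of the objective value claimed in the abstract (here $L = 1$ and $D^2 = 2$ on the simplex).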

Non-smooth Minimal-Time Mean Field Game with State Constraints

ABSTRACT. Introduced in 2006, mean field games (MFGs) are models for games with a continuum of interacting agents, in which players are indistinguishable and individually negligible. In this talk, we consider a \emph{non-smooth minimal-time mean field game with state constraints}~\cite{Saeed-Guilherme-state constraint2021}, inspired by crowd motion in which the agents evolve in a certain domain. Due to the state constraints, which can be seen as hedges, fences, walls, etc., the characterization of optimal controls is a main issue and is not easy to establish. The difficulty is that, in this situation, the value function is not necessarily semiconcave, in contrast to state-unconstrained models, where semiconcavity does hold under suitable smoothness assumptions. Even though there are techniques in the literature dealing with state constraints, they do not seem to apply to the minimal-time MFG. Our main contribution is therefore a new approach which does not rely on semiconcavity. For this purpose, we use a penalization technique in order to apply the Pontryagin maximum principle and obtain its consequences for our optimal control problem. Another novelty of this approach is its compatibility with weaker regularity assumptions on the dynamics of the agents.

This is joint work with Guilherme Mazanti (DISCO Team, Inria Saclay--\^Ile-de-France, L2S--CentraleSup\'elec).

11:00-12:30 Session 12B: Recent advances in Optimization of Energy Markets (invited session organized by Mathias Staudigl)
Location: Amphi II
A distributed algorithm for economic dispatch of prosumer networks with grid constraints

ABSTRACT. Future electrical distribution grids are envisioned as complex large-scale networks of prosumers (energy consumers with storage and/or production capabilities). In the context of economic dispatch, these prosumers optimize their operating points based on self-interested objectives. Nevertheless, these decisions must be accepted by a network operator, whose responsibility is to ensure safe operation of the grid.

In this work, we formulate the interaction among prosumers and the network operator in an economic dispatch problem as a generalized aggregative game, i.e., a set of interdependent optimization problems, each of which has a cost function and constraints that depend on the aggregated decisions of the prosumers. We consider that each prosumer may have dispatchable generation and storage units as well as peer-to-peer and grid trading capabilities. In addition, we impose a linear approximation of the power-flow equations and operational limits as grid constraints handled by the network operator.

Furthermore, we design a distributed mechanism, based on the preconditioned proximal point method, with a guarantee of convergence to an economically efficient and operationally safe configuration (i.e., a variational GNE). Our algorithm requires each prosumer to communicate only with its trading partners and with the network operator, which handles grid constraints and aggregative variables. We perform numerical experiments on the IEEE 37-bus system to show the importance of grid constraints and of the active participation of prosumers, as well as the scalability of our algorithm.

On Convex Lower-Level Black-Box Constraints in Bilevel Optimization with an Application to Gas Market Models with Chance Constraints
PRESENTER: Martin Schmidt

ABSTRACT. Bilevel optimization is an increasingly important tool to model hierarchical decision making. However, the ability to model such settings makes bilevel problems hard to solve in theory and practice. In this paper, we add to the general difficulty of this class of problems by further incorporating convex black-box constraints in the lower level. For this setup, we develop a cutting-plane algorithm that computes approximate bilevel-feasible points. We apply this method to a bilevel model of the European gas market in which we use a joint chance constraint to model uncertain loads. Since the chance constraint is not available in closed form, this fits into the black-box setting studied before. For the applied model, we use further problem-specific insights to derive bounds on the objective value of the bilevel problem. By doing so, we are able to show that we solve the application problem to approximate global optimality. In our numerical case study, we are thus able to evaluate the sensitivity of the welfare with respect to the achieved safety level of uncertain load coverage.

Distribution locational marginal prices via random block-coordinate descent methods

ABSTRACT. The modern distribution network is undergoing an unprecedented reformation, thanks to the increased deployment of distributed energy resources (DERs) in the form of distributed generators, distributed storage, aggregators managing fleets of electric vehicles or groups of prosumers (e.g. owners of photo-voltaic panels).

While the potential benefits of DERs are widely accepted, ill-managed operation could defeat the desired outcome: wrong control strategies could lead to drastic voltage fluctuations and supply-demand imbalances.

With this in mind, a replication of the transmission-level locational marginal prices is much desired. The locational marginal price is the marginal cost of supplying an additional unit of demand at a bus. The price signals differ spatially and temporally, and are used to incentivize DER-generators and consumers to balance supply and demand, support voltage, and minimize system losses. The necessary extension to the distribution network, in the form of distribution locational marginal prices (DLMPs), has received significant attention in the current debate on how to integrate distributed resources into the entire energy system.

Distributed multi-agent optimization techniques are a powerful approach to enable this envisioned transition towards a fully integrated energy system. A key question in this approach is the effective computation of DLMPs in a fully distributed way with minimal communication and computational complexity. In this paper we present a computational framework, derived from a new randomized block-coordinate descent algorithm for linearly constrained optimization problems, generalizing the state of the art. Preliminary numerical studies on a 15-bus test network show promising performance of our randomized block-coordinate descent strategy.
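As a rough illustration of the randomized block-coordinate idea, here is a minimal sketch on an unconstrained convex quadratic (a stand-in for, not the linearly constrained primal-dual variant of, the algorithm in the paper); each iteration samples one coordinate "block" uniformly at random and minimizes exactly over it:

```python
import random

def rbcd(A, b, iters=2000, seed=0):
    """Randomized block-coordinate descent on f(x) = 0.5 x^T A x - b^T x,
    with single-coordinate blocks and an exact per-block minimization."""
    rng = random.Random(seed)
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(n)                        # sample a block uniformly
        g = sum(A[i][j] * x[j] for j in range(n)) - b[i]
        x[i] -= g / A[i][i]                         # exact minimization in block i
    return x

# Symmetric positive definite, diagonally dominant system: minimizer solves A x = b
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = rbcd(A, b)
```

Each update touches only one row of the problem data, which is what makes block-coordinate schemes attractive for distributed settings where each bus or agent holds its own block.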

11:00-12:30 Session 12C: Mathematical programming for the nuclear outage planning problem, current challenges (invited session organized by Rodolphe Griset)
Location: A1.116
Nuclear outage planning problem: medium term formulation and constraint programming

ABSTRACT. French electricity producer EDF owns and operates 56 nuclear reactors. The company must manage and optimize the timing of the outages dedicated to maintenance and refueling of the units. The outage schedules of the various units are linked to each other by local and coupling constraints and must be compatible with the supply/demand equilibrium constraints. To cope with this complexity, the team in charge of the operational process at EDF uses a decision-support tool developed and maintained by EDF R&D.

The goal of this tool is to propose outage movements that yield a viable schedule. The proposed movements must be as few and as short as possible; every movement must be justified. The tool is based on a local heuristic search algorithm: violated constraints are dealt with one at a time and outage movements are proposed to resolve them. This method makes it easy to trace each movement back to the violation that led to it. However, this sequential resolution can result in a sub-optimal solution, or even no solution at all. To avoid such an outcome, EDF R&D is working on an alternative method using Constraint Programming (CP), which has long been a promising approach for this problem.

By treating the scheduling problem globally, the CP method is less likely to return a sub-optimal solution. In addition, as the search is exhaustive, should no solution exist, the method can prove it with certainty. Over the last couple of years, EDF R&D has developed a tool that translates the operational nuclear outage planning problem into a Constraint Satisfaction Problem and solves it with the open-source constraint toolkit Gecode. Now that the method has demonstrated its ability to deal with the problem, the next step is to bring explainability to the results (tracing movements back to violations, explaining why the problem has no solution, etc.).

Nuclear outage planning problem: New challenges on long term horizons

ABSTRACT. The nuclear outage planning problem is of significant importance to EDF, as the nuclear production capacity influences the energy management chain at different time horizons. Indeed, the optimisation of the nuclear capacity, performed every month on a medium-term horizon of 3 to 5 years, is an essential input for the optimisation of other sources performed at shorter horizons. The problem is quite challenging given the specific operating constraints of nuclear units and the stochasticity of both demand and the availability of non-nuclear units. It was submitted to the research community through the EURO/ROADEF Challenge 2010. Over the last 15 years, EDF has explored several optimization techniques to solve it. The best results have been obtained by heuristics, currently used in the operational process, but also by constraint programming to recover from infeasibilities and by mixed-integer programming to take stochasticity into account. In addition, EDF has to solve this problem on a long-term horizon of 20 to 30 years to take structural decisions. Given the size of the problem, solving it on such horizons requires simplifying the set of constraints taken into account.

In this talk, we will link the medium- and long-term nuclear outage scheduling problems. Emphasis will be placed on the current challenges of the long-term problem. An adaptation of a previously introduced model to solve this kind of problem through column generation will be presented.

Planning Robustness, Research Topics on Optimization under Uncertainty

ABSTRACT. We present the lines of research that will be investigated around the stability of the schedules of power plants. Improving the robustness and stability of EDF's nuclear schedules is of growing importance given the uncertainty associated with production fleet availability (nuclear outage durations, nuclear availability, and the growth of renewable sources). Finding schedules that satisfy all technical and safety constraints, are economically sound, and are robust to random events combines theoretical difficulties (multi-stage combinatorial optimization under uncertainty) and practical issues (lack of precise yet concise formulations and of researcher-friendly data sets). This work is motivated by our past experience on the topic. To facilitate the development of the advanced methodologies required to solve such problems, we aim at (i) designing stable scheduling models that encompass the outage planning problem, (ii) designing and developing a protocol for evaluating the resulting optimization algorithms in the multi-stage setting, (iii) creating a researcher-friendly yet realistic data set, and (iv) providing a numerical benchmark of algorithms suitable for approaching these problems. These algorithms will be inspired by pragmatic approaches and recent developments on these subjects.

11:00-12:30 Session 12D: Optimal control and differential inclusions
Location: A1.122
Viability and Invariance in a Wasserstein Space

ABSTRACT. Optimal control problems stated in the space of Borel probability measures involve controlled non-local continuity equations whose solutions are curves of probability measures. Such equations appear in probabilistic models describing multi-agent systems. I will provide necessary and sufficient conditions for viability and invariance of proper subsets of the Wasserstein space P2 under controlled continuity equations. Viability means that for every initial condition in the set of constraints having a compact support, there exists at least one trajectory of the control system starting at it and respecting the state constraints forever. Invariance means that every trajectory of the control system starting in the set of constraints and having a compact support at the initial time never violates them. To describe these conditions, I will introduce notions of contingent cone, proximal normal, directional derivative and Hadamard super/subdifferential on the (merely metric) space P2, in which the linear structure is absent. The results are applied to get existence and uniqueness of continuous solutions to a Hamilton-Jacobi-Bellman equation on the space of probability measures with compact support. These solutions are understood in a viscosity-like sense.

Optimal Shape of Stellarators for Magnetic Confinement Fusion

ABSTRACT. Nuclear fusion is a nuclear reaction involving light nuclei. In order to use nuclear fusion to produce energy, high-temperature plasmas must be produced and confined. This is the objective of devices called tokamaks, steel magnetic confinement chambers having the shape of a toroidal solenoid. The magnetic field lines in tokamaks form helicoidal loops, whose twisting allows the particles in the plasma to be confined. Indeed, the twisting counters the vertical drift that would otherwise affect the particles, in opposite directions for ions and for electrons.

Unfortunately, twisting the magnetic field requires a current to be induced within the plasma, causing intermittency, instabilities, and other technological challenges. A possible alternative for correcting the drift of magnetically confined plasma particles in a torus is to modify the toroidal shape of the device, yielding the concept of the stellarator. This system has the advantage of not requiring a plasma current, and therefore of being able to operate continuously, but it comes at the cost of more delicate (non-planar) coils and of increased transport.

Despite the promise of very stable steady-state fusion plasmas, stellarator technology also presents significant challenges related to the complex arrangement of magnetic field coils. These magnetic field coils are particularly expensive and especially difficult to design and fabricate due to the complexity of their spatial arrangement.

In this talk, we are interested in the search for the optimal shape of stellarators, i.e., the best coil arrangement (provided that it exists) to confine the plasma. Let us assume that a target magnetic field B_T is known. It is then convenient to define a coil winding surface (CWS) on which the coils will be located. The optimal arrangement of stellarator coils then corresponds to the determination of a surface (the CWS) chosen to guarantee that the magnetic field created by the coils is as close as possible to the target magnetic field B_T. Of course, it is necessary to consider feasibility and manufacturability constraints. We will propose and study several relevant choices of such constraints.

In this talk we will analyze the resulting problem as follows: first we will establish the existence of an optimal shape, using the so-called reach condition; then we will show the shape differentiability of the criterion and provide the expression of the differential in a workable form; finally, we will propose a numerical method and perform simulations of optimal stellarator shapes. We will also discuss the efficiency of our approach with respect to the literature in this area.

Inducing strong convergence of trajectories in dynamical systems associated to monotone inclusions with composite structure

ABSTRACT. Zeros of the sum of a maximally monotone operator and a single-valued monotone one can be obtained as weak limits of trajectories of dynamical systems; strong convergence requires more demanding hypotheses. We extend an approach due to Attouch, Cominetti, and coauthors, in which zeros of maximally monotone operators are obtained as strong limits of trajectories of Tikhonov-regularized dynamical systems, to forward-backward and forward-backward-forward dynamical systems whose trajectories converge strongly towards zeros of such sums of monotone operators under reasonable assumptions. This talk is based on joint work with Radu Ioan Bo\c{t}, Dennis Meier and Mathias Staudigl.

11:00-12:30 Session 12E: Stochastic Optimization I
Location: A1.128
Stochastic Dynamic Programming for Maritime Operations Planning

ABSTRACT. We consider an industrial planning problem under uncertainty arising in maritime operations. We propose a stochastic multi-stage optimization model and a dynamic programming solving scheme. Results are presented through a visualization tool.

Stochastic Dual Dynamic Programming and Lagrangian decomposition for seasonal storage valuation

ABSTRACT. The increase in the share of renewable energy sources in the power system, especially intermittent ones like solar and wind, brings several challenges. As both energy production and demand vary throughout the year (in winter, for example, there is a reduction in the supply of solar energy while the demand for energy for heating increases), energy storage becomes a relevant factor to take advantage of excess energy produced in certain seasons of the year and respond to increased energy demand in others. An important system for seasonal storage is that formed by cascading hydroelectric reservoirs. The management of such systems is a challenging task which hinges on a good assessment of the future value of the energy (water) stored in these systems. In order to assess the value of water, and thus be able to properly manage these reservoir systems, a large-scale multi-stage stochastic problem spanning a year must be solved, where each stage is a weekly unit-commitment problem. Since the unit-commitment problems are non-convex in nature, they make the seasonal storage valuation problem unsuitable for what would otherwise be the most natural approach to solve it, i.e., Stochastic Dual Dynamic Programming. In this work we exploit the natural "convexification" capabilities of Lagrangian relaxation to devise a Stochastic Dual Dynamic Programming approach to the seasonal storage valuation problem in which the Lagrangian dual of the single-stage subproblems is solved (using a bundle-type approach), which corresponds to solving the "convexified relaxation" of the original problem. This is known to be at least as tight as, and typically strictly tighter than, the standard way to convexify a stochastic MINLP amenable to the Stochastic Dual Dynamic Programming approach, i.e., the continuous relaxation.
We perform preliminary experiments, made possible by the use of the SMS++ software framework (for large-scale problems with multiple nested forms of structure), which show that the tighter "convexified" relaxation corresponding to the use of the Lagrangian relaxation does indeed provide solutions that deliver a more adequate water value.

Adaptive grid refinement for optimization problems with probabilistic/robust (probust) constraints

ABSTRACT. \noindent Probust constraints are joint probabilistic constraints over a continuously indexed random inequality system: \[ \mathbb{P}(g(x,\xi,u)\geq 0\quad\forall u\in U)\geq p, \] where $x$ is a decision vector, $\xi$ is a random vector defined on some probability space $(\Omega,\mathcal{A},\mathbb{P})$, $u$ is a (nonstochastic) uncertain parameter living in an uncertainty set $U$, $g$ is a constraint mapping and $p\in (0,1]$ is some probability level. This class of constraints was introduced in \cite{graheihen} and further studied in \cite{adel,farhenhoem,ackhenper}. It finds potential applications in optimization problems where uncertain parameters of stochastic and nonstochastic nature are present simultaneously, e.g., random loads and friction coefficients in gas transport problems \cite{adel,graheihen}. Another major problem class is PDE-constrained optimization with probabilistic state constraints holding uniformly over a space or time domain \cite{farhenhoem}. The robust or semi-infinite nature of the random inequality system inside the probability poses challenges both on the theoretical side (nonsmoothness of the probability as a function of the decisions \cite{ackhenper}) and on the algorithmic side. The talk presents an adaptive two-stage grid refinement algorithm for dealing numerically with such constraints, along with a convergence result \cite{berheihenschwien}. The algorithm is applied to a simple problem of water reservoir management.

\begin{footnotesize}
\begin{thebibliography}{00}
\bibitem{adel} D. Adelh{\"u}tte, D. A{\ss}mann, T. Gonz{\'a}lez Grand{\'o}n, M. Gugat, H. Heitsch, R. Henrion, F. Liers, S. Nitsche, R. Schultz, M. Stingl, and D. Wintergerst, Joint model of probabilistic-robust (probust) constraints with application to gas network optimization, to appear in: Vietnam Journal of Mathematics, appeared online.
\bibitem{berheihenschwien} H. Berthold, H. Heitsch, R. Henrion and J. Schwientek, On the algorithmic solution of optimization problems subject to probabilistic/robust (probust) constraints, WIAS-Preprint 2835, Weierstrass Institute Berlin, 2021.
\bibitem{farhenhoem} M.H. Farshbaf-Shaker, R. Henrion and D. H{\"o}mberg, Properties of chance constraints in infinite dimensions with an application to PDE constrained optimization, Set-Valued and Variational Analysis, 26 (2018) 821-841.
\bibitem{graheihen} T. Gonz{\'a}lez Grand{\'o}n, H. Heitsch and R. Henrion, A joint model of probabilistic/robust constraints for gas transport management in stationary networks, Computational Management Science, 14 (2017) 443-460.
\bibitem{ackhenper} W. Van Ackooij, R. Henrion and P. P{\'e}rez-Aros, Generalized gradients for probabilistic/robust (probust) constraints, Optimization, 69 (2020) 1451-1479.
\end{thebibliography}
\end{footnotesize}
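A minimal numerical illustration of a probust constraint, assuming a toy constraint mapping and a fixed (non-adaptive) grid of the uncertainty set U; the adaptive two-stage refinement of the talk is not reproduced here:

```python
import random

# Monte-Carlo estimate of  P( g(x, xi, u) >= 0  for all u in U )  with the
# inner "for all u" approximated on a finite grid of U.  Toy data (all
# illustrative): g(x, xi, u) = x - xi * u,  U = [0, 1],  xi ~ Exp(1).
def probust_prob(x, grid_size, samples=20000, seed=0):
    rng = random.Random(seed)
    grid = [k / (grid_size - 1) for k in range(grid_size)]
    hits = 0
    for _ in range(samples):
        xi = rng.expovariate(1.0)
        if all(x - xi * u >= 0 for u in grid):   # grid surrogate of "forall u"
            hits += 1
    return hits / samples

# For this g the binding parameter is u = 1, so the exact probability is
# P(xi <= x) = 1 - exp(-x); for x = 1 this is about 0.632.
p_coarse = probust_prob(1.0, grid_size=2)    # grid {0, 1}
p_fine = probust_prob(1.0, grid_size=33)     # refined grid
```

Refining the grid adds inner constraints and can only make the estimated probability smaller, which is why a refinement strategy (adaptive in the talk) is needed to certify feasibility at the prescribed level p.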

11:00-12:30 Session 12F: Network Design
Location: A1.133
Approximability of Robust Network Design in Undirected Graphs

ABSTRACT. In this talk we will present results published in [1] on the approximability of the robust network design problem in undirected graphs. Given the dynamic nature of traffic in telecommunication networks, we investigate the variant of the network design problem where we have to determine the capacity to reserve on each link so that each demand vector belonging to a polyhedral set can be routed. The objective is either to minimize congestion or a linear cost. Routing is assumed to be fractional and dynamic (i.e., dependent on the current traffic vector). We first prove that the robust network design problem with minimum congestion cannot be approximated within any constant factor. Then, using the ETH conjecture, we get a Ω(log n / log log n) lower bound for the approximability of this problem. This implies that the O(log n) approximation ratio of the oblivious routing established by Räcke in 2008 [4] is tight. Using Lagrange relaxation, we obtain a new proof of the O(log n) approximation. An important consequence of the Lagrange-based reduction and our inapproximability results is that the robust network design problem with linear reservation cost cannot be approximated within any constant ratio. This answers a long-standing open question of Chekuri (2007) [2]. We also give another proof of the result of Goyal et al. (2009) [3] stating that the optimal linear cost under static routing can be Ω(log n) more expensive than the cost obtained under dynamic routing.

Optimizing over Nonlinear Networks with Duality Cuts

ABSTRACT. The optimal design or operation of distribution networks of commodities like water, gas, or electricity, involves the discrete choice of a static or dynamic network topology while ensuring the existence of nonconvex potential-flow relations. From strong duality conditions observed on these relations, we show how to derive cuts for the MINLP formulation of such problems.

On the Equivalence between the Blocker and the Interdiction Problem Applied to the Maximum Flow

ABSTRACT. The Maximum Flow Blocker Problem (MFBP) is a bi-level optimization problem where the leader is a blocker problem and the follower is a Maximum Flow problem. The MFBP consists in determining a minimum-weight subset of arcs, called interdicted arcs, to be removed from the graph such that the remaining maximum flow is no larger than a given threshold. The non-interdicted graph refers to the one induced by the set of arcs which are not interdicted. In telecommunication networks, a major issue consists in detecting and facing anomalies. In this context, the MFBP has a key role in resilience analysis, since it gives the maximum number of anomalies that can occur such that the network continues to be operational. To the best of our knowledge, the MFBP has not been addressed in the literature, but a closely related problem, called the Maximum Flow Interdiction Problem (MFIP), has been largely studied. This problem consists in finding a set of interdicted arcs that respects a given budget and such that the maximum flow remaining in the non-interdicted graph is minimized. We refer the interested reader to [1] and [2] for a complete overview of the MFIP. In [1], the author proposes a compact integer formulation for solving the problem. In [2], the authors show that any solution of the MFIP is contained in a cut of the graph. We prove that this result also holds for the MFBP. More generally, we show that the two problems, MFBP and MFIP, are equivalent. In other words, one can transform an instance of the MFBP into one of the MFIP and vice versa. Using this relation, we develop a compact integer formulation for the MFBP and obtain some complexity results.
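To make the MFBP definition concrete, here is a brute-force sketch on a tiny instance (exhaustive enumeration over arc subsets with an Edmonds-Karp max-flow subroutine); all names are illustrative and this is not the compact formulation discussed in the talk:

```python
from collections import deque
from itertools import combinations

def max_flow(n, arcs, s, t):
    """Edmonds-Karp max flow; arcs = {(u, v): capacity}."""
    cap, adj = {}, [[] for _ in range(n)]
    for (u, v), c in arcs.items():
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)
        adj[u].append(v); adj[v].append(u)
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:            # BFS for a shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                          # recover the path, push bottleneck
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        push = min(cap[e] for e in path)
        for (u, v) in path:
            cap[(u, v)] -= push; cap[(v, u)] += push
        flow += push

def blocker(n, arcs, weights, s, t, threshold):
    """Min-weight arc set whose removal leaves max flow <= threshold."""
    best, best_w = None, float("inf")
    arc_list = list(arcs)
    for r in range(len(arc_list) + 1):
        for subset in combinations(arc_list, r):
            w = sum(weights[a] for a in subset)
            if w >= best_w:
                continue
            rest = {a: c for a, c in arcs.items() if a not in subset}
            if max_flow(n, rest, s, t) <= threshold:
                best, best_w = set(subset), w
    return best, best_w

# Tiny instance: 0 -> {1, 2} -> 3, unit interdiction weights, max flow 4
arcs = {(0, 1): 3, (0, 2): 2, (1, 3): 2, (2, 3): 3}
weights = {a: 1 for a in arcs}
blocked, cost = blocker(4, arcs, weights, 0, 3, threshold=2)
```

On this instance removing the single arc (0, 1) already brings the max flow down to the threshold, and the interdicted arc indeed lies inside a cut, consistent with the structural result quoted in the abstract.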

11:00-12:30 Session 12G: Bilevel Programming – Applications to Energy Management I (invited session organized by Miguel Anjos and Luce Brotcorne)
Location: A1.134
Market Integration of Behind-the-Meter Residential Energy Storage

ABSTRACT. A new business opportunity beckons with the emergence of prosumers. We propose an innovative business model to harness the potential of aggregating behind-the-meter residential storage, in which the aggregator compensates participants for using their storage system on an on-demand basis. A bilevel optimization model is developed to evaluate the potential of this proposed business model and determine the optimal compensation scheme for the participants. A Texas case study using real-world data confirms the year-round profitability of the model, showing that participants could earn on average nearly $1500 per year, and the aggregator could make an average profit of nearly $2000 per participant annually. The case study thus confirms that the proposed business model has potential, and that the main driver for a successful implementation is a suitable setting of the compensation paid to participants for using their energy storage system.

Intracity Placement of Charging Stations to Maximise Electric Vehicle Adoption

ABSTRACT. We present a new model for finding the placement of electric vehicle (EV) charging stations across a multi-year period which maximises EV adoption. This work extends that of Anjos et al. and allows for a granular modelling of user characteristics and the value users attribute to the characteristics of the charging stations. This is achieved via the use of discrete choice models, with the users choosing a primary recharging method among the public charging stations or home charging. In case none of the above options is sufficiently attractive, a user can select the opt-out option, which indicates that they do not purchase an EV. Instead of embedding an analytical probability model in the formulation, we adopt the approach proposed by Paneque et al., which uses simulation and pre-computes error terms for each option available to users for a given number of scenarios. This results in a bilevel model, with the upper level placing charging stations and the users in the lower level selecting the option which maximises their utility. Alternatively, under the assumption that the charging stations are uncapacitated (in terms of the number of users who can select each option of the discrete choice model), we can use the pre-computed error terms to calculate the users covered by each charging station. This allows for an efficient maximum covering model, which can find optimal or near-optimal solutions in an intracity context much more effectively than the bilevel formulation.

Hierarchical Coupled Driving-and-Charging Model of Electric Vehicles, Stations and Grid Operators

ABSTRACT. The decisions of operators from both the transportation and the electrical systems are coupled due to Electric Vehicle (EV) users' actions. Thus, decision-making requires a model of several interdependent operators and of EV users' both driving and charging behaviors. Such a model is suggested for the electrical system in the context of commuting, which has a typical trilevel structure. At the lower level of the model, a congestion game between different types of vehicles gives which driving paths and charging stations (or hubs) commuters choose, depending on travel duration and energy consumption costs. At the middle level, a Charging Service Operator sets the charging prices at the hubs to maximize the difference between EV charging revenues and electricity supplying costs. These costs directly depend on the supplying contract chosen by the Electrical Network Operator at the upper level of the model, whose goal is to reduce grid costs. This trilevel optimization problem is solved using an optimistic iterative algorithm and simulated annealing. The sensitivity of this trilevel model to exogenous parameters such as the EV penetration and an incentive from a transportation operator is illustrated on realistic urban networks. This model is compared to a standard bilevel model in the literature (only one operator).

11:00-12:30 Session 12H: Semidefinite programming and optimal power flow
Location: A1.139
ACOPF: Nonsmooth optimization to improve the computation of SDP bounds

ABSTRACT. The Alternating-Current Optimal Power Flow (ACOPF) problem models the optimization of power dispatch in an AC electrical network. Obtaining global optimality certificates for large-scale ACOPF instances remains a challenge. In the quest for global optimality certificates, the semidefinite programming (SDP) relaxation of the ACOPF problem is a major asset since it is known to produce tight lower bounds. To improve the scalability of the SDP relaxation, state-of-the-art approaches exploit the sparse structure of the power grid by using a clique decomposition technique. Despite this advance, the clique-based SDP relaxation remains difficult to solve for large-scale instances: numerical instabilities may arise when solving this convex optimization problem. These difficulties cause two issues: (i) they may reduce the accuracy of the computed SDP bound; (ii) a solution with nonzero dual feasibility errors means the calculated relaxation value is not a certified lower bound.

This work tackles both issues with an original approach. We reformulate the Lagrangian dual of the ACOPF, whose value equals the value of the SDP relaxation, as a concave maximization problem with the following properties: (i) it is unconstrained; (ii) the objective function is partially separable. Based on this new formulation, we show how to obtain a certified lower bound from any dual vector, whether feasible or not in the classical dual SDP problem. Our new formulation is solved with a tailored polyhedral bundle method that exploits the sparse structure of the problem. We use this algorithm as a post-processing step, after solving the SDP relaxation with the state-of-the-art commercial interior-point solver MOSEK. For many instances from the PGLib-OPF library, this post-processing significantly reduces the computed duality gap.

A Hierarchical Control Approach for Power Loss Minimization and Optimal Power Flow within a Meshed DC Microgrid

ABSTRACT. This work expands a hierarchical control scheme to account for cost minimization and battery scheduling within a meshed DC microgrid together with the power loss minimization of the central power distribution network. The control strategy is divided into three layers: i) the high level solves a continuous-time optimization problem which minimizes the DC-bus power loss and the electricity cost of external grid power purchase, through the use of differential flatness with B-splines parametrization; ii) the middle level employs a tracking Model Predictive Control (MPC) method which mitigates the discrepancies between the optimal and the actual profiles; iii) the low level controller handles the switching operations within the converters. The proposed approach is validated in simulation for a meshed DC microgrid system under realistic scenarios.

Clique Merging Algorithms to Solve Semidefinite Relaxations of Optimal Power Flow Problems

ABSTRACT. Semidefinite Programming (SDP) is a powerful technique to compute tight lower bounds for Optimal Power Flow (OPF) problems. Even using clique decomposition techniques, semidefinite relaxations are still computationally demanding. However, there are many different clique decompositions for the same SDP problem and they are not equivalent in terms of computation time. In this paper, we propose a new strategy to compute efficient clique decompositions with a clique merging heuristic. This heuristic is based on two different estimates of the computational burden of an SDP problem: the size of the problem and an estimation of a per-iteration cost for a state-of-the-art interior-point algorithm. We compare our strategy with other algorithms on MATPOWER instances and we show a significant decrease in solver time. In the last part of the talk we present recent developments on how to incorporate machine learning techniques to automatically identify effective clique decompositions.

12:30-14:15 Lunch Break
14:15-15:45 Session 13A: Games, equilibria and applications
Location: A1.139
Privacy Impact on Generalized Nash Equilibrium in Peer-to-Peer Electricity Market

ABSTRACT. We consider a peer-to-peer electricity market, where agents hold private information that they might not want to share. The problem is modeled as a noncooperative communication game, which takes the form of a Generalized Nash Equilibrium Problem, where the agents determine their randomized reports to share with the other market players, while anticipating the form of the peer-to-peer market equilibrium. In the noncooperative game, each agent decides on the deterministic and random parts of the report, such that (a) the distance between the deterministic part of the report and the truthful private information is bounded and (b) the expectation of the privacy loss random variable is bounded. This allows each agent to change her privacy level. We characterize the equilibrium of the game, prove the uniqueness of the Variational Equilibrium and provide a closed form expression of the privacy price. We illustrate the results numerically on the 14-bus IEEE network.

A Bi-level Optimization Problem to Estimate Enzyme Catalytic Rates from a Proteomic-constrained Metabolic Model

ABSTRACT. Despite their crucial role in understanding cellular metabolism, most enzyme catalytic rates are still unknown or suffer large uncertainties. Here, we propose a method to estimate catalytic rates from a metabolic network and proteomic data. We search for catalytic rate values which maximize enzyme usages, given that metabolic fluxes should maximize the growth rate under the enzyme allocation constraints. This leads to a bi-level optimization problem with the following structure: the constraints of the upper problem involve the value function of the lower problem, which is linear. To solve it, we use the Karush-Kuhn-Tucker (KKT) conditions of the lower problem, which yields a Mathematical Program with Complementarity Constraints (MPCC) for the bi-level problem. The optimality conditions then involve complementarity constraints, arising from complementary slackness in the KKT conditions, which are non-convex and generically fail to satisfy the usual assumptions (such as constraint qualification). The mathematical program is thus solved using a relaxation method. The validity of the proposed method is demonstrated with simulated data from a coarse-grained model.
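In generic form (this is a schematic template, not the specific metabolic model of the talk), the KKT reformulation of a bilevel problem with a linear lower level can be sketched as:

```latex
\begin{align*}
&\min_{k,\,v}\ F(k, v)
\quad\text{s.t.}\quad v \in \arg\min_{u}\,\{\, c^\top u : A(k)\,u \le b \,\}
\intertext{becomes, after replacing the lower level by its KKT conditions,}
&\min_{k,\,v,\,\lambda}\ F(k, v)
\quad\text{s.t.}\quad A(k)\,v \le b,\quad \lambda \ge 0,\\
&\qquad c + A(k)^\top \lambda = 0,\quad \lambda^\top\big(b - A(k)\,v\big) = 0.
\end{align*}
```

The last (complementarity) constraint is the non-convex one that the relaxation method then handles.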

Coupled queueing and charging game model with power capacity optimization

ABSTRACT. With the rise of Electric Vehicles (EVs), the demand for parking spots equipped with plugging devices in the charging stations (CSs) is tremendously increasing. This motivates us to study how to improve the Quality of Charging Service (QoX) at these stations. CSs are characterized by their number of parking spots, the maximum power that can be delivered by the station to an EV at one charging point, and the power scheduling between the charging points. The latter is based on a modified Processor Sharing (PS) rule, i.e., EVs benefit from the maximum power when the number of charging EVs (charging points in use) is sufficiently low; otherwise the total power is uniformly shared. In our model, EVs arrive at the CS according to a Poisson process with a random quantity of energy needed to fully charge their battery. An EV can occupy a charging point without consuming power: each EV has a random parking time and leaves the parking spot only when its parking time expires. We model the arrival of EVs into two CSs. Each new arriving EV determines strategically which CS to join based on a QoX criterion, namely the expected probability of leaving the chosen station with a full battery. The number of charging EVs at each instant follows a Markov process, and this expected probability can be explicitly determined from the parameters of the system [1]. The strategic decision problem is studied using a queueing game framework [2]; properties of the equilibrium of the game are obtained, as well as bounds on the Price of Anarchy (PoA) [3]. Finally, our results are illustrated on a particular use-case in which a Charging Station Operator (CSO) manages both CSs. With a limited total quantity of power for the two CSs, the CSO decides how to share it between the two CSs in order to maximize the QoX at the equilibrium of the game between EVs. An analytical solution of the optimization problem is given.
Numerical illustrations corresponding to a realistic case are provided in order to deepen the insights into the proposed stochastic model, such as the PoA. The results of this work have been published in [4].

References: [1] A. Aveklouris, M. Vlasiou, and B. Zwart, Bounds and limit theorems for a layered queueing model in electric vehicle charging, Queueing Systems, 93:83–137, 2019. [2] R. Hassin and M. Haviv, To queue or not to queue: Equilibrium behavior in queueing systems, Kluwer, 2003. [3] T. Roughgarden, Selfish Routing and the Price of Anarchy, MIT Press, 2006. [4] A. Dupont, Y. Hayel, T. Jiménez, O. Beaude, C. Wan, Coupled queueing and charging game model with energy capacity optimization, in proc. of ASMTA, 2021.

14:15-15:45 Session 13B: Optimal control and applications, I (invited session organized by Jean-Baptiste Caillau and Yacine Chitour)
Location: Amphi I
On the asymptotic behaviour of the value function in optimal control problems

ABSTRACT. The turnpike phenomenon stipulates that the solution of an optimal control problem in large time remains essentially close to a steady-state of the dynamics, which is itself the optimal solution of an associated static optimal control problem (see [2]). We use this property to propose an asymptotic expansion of the value function as the horizon T tends to infinity. Firstly, we prove the result in the linear-quadratic case, using the Hamiltonian structure of the extremal equations of the Pontryagin Maximum Principle and some basic results of LQ theory. Then, based on results obtained in [3], the result is generalized to a large class of nonlinear systems, provided they satisfy the (strict) dissipativity property.

On the Convergence of Sequential Convex Programming for Non-Linear Optimal Control

ABSTRACT. Sequential Convex Programming (SCP) has recently gained significant popularity as an effective method for solving complex non-convex optimal control problems in aerospace and robotics. However, the theoretical analysis of SCP has received limited attention, and it is often restricted to either deterministic or finite-dimensional formulations. In this talk, we investigate conditions for the convergence of SCP when applied to solve a wide range of non-deterministic optimal control problems. As a future research direction, we plan to extend this analysis to optimal control problems whose dynamics are infinite-dimensional.

Extremal determinants of Sturm-Liouville operators on the circle

ABSTRACT. In a recent work [1], the functional determinant of Sturm-Liouville operators on an interval was studied. We continue this study for such operators on the circle [2, 3]. In the simplest case, where the potential is essentially bounded, some of the structure of the extremal potentials persists despite the change of topology.

This project is supported by the FMJH Program PGMO and EDF-Thales-Orange (extdet PGMO grant).


[1] Aldana, C.; Caillau, J.-B.; Freitas, P. Maximal determinants of Schrödinger operators. J. Ec. polytech. Math. 7 (2020), 803–829.

[2] Burghelea, D.; Friedlander, L.; Kappeler, T. On the Determinant of Elliptic Differential and Finite Difference Operators in Vector Bundles over S1. Commun. Math. Phys. 138, 1–18 (1991).

[3] Forman, R. Determinants, Finite-Difference Operators and Boundary Value Problems. Commun. Math. Phys. 147, 485–526 (1992).

14:15-15:45 Session 13C: Game theory and beyond
Location: A1.116
EPTAS for Stable Allocations in Matching Games

ABSTRACT. Gale and Shapley introduced a matching problem between two sets of agents where each agent on one side has a preference over the agents of the other side, and proved, algorithmically, the existence of a pairwise stable matching (i.e., no uncoupled pair can be better off by matching). Shapley-Shubik, Demange-Gale, and many others extended the model by allowing monetary transfers. In this paper, we study an extension [1] where matched couples obtain their payoffs as the outcome of a strategic game, and more particularly a solution concept that combines Gale-Shapley pairwise stability with a constrained Nash equilibrium notion (no player can increase their payoff by playing a different strategy without violating the participation constraint of the partner). Whenever all couples play zero-sum matrix games, strictly competitive bi-matrix games, or infinitely repeated bi-matrix games, we prove that a modification of some algorithms in [1] converges to an $\varepsilon$-stable allocation in at most $O(\frac{1}{\varepsilon})$ steps, where each step is polynomial (linear in the number of players and polynomial of degree at most 5 in the number of pure actions per player).

Refined approachability algorithms and application to regret minimization with global costs

ABSTRACT. Blackwell's approachability is a framework where two players, the Decision Maker and the Environment, play a repeated game with vector-valued payoffs. The goal of the Decision Maker is to make the average payoff converge to a given set called the target. When this is indeed possible, simple algorithms which guarantee the convergence are known. This abstract tool was successfully used for the construction of optimal strategies in various repeated games, but also found several applications in online learning. By extending an approach proposed by (Abernethy et al., 2011), we construct and analyze a class of Follow the Regularized Leader (FTRL) algorithms for Blackwell's approachability which are able to minimize not only the Euclidean distance to the target set (as is often the case in the context of Blackwell's approachability) but a wide range of distance-like quantities. This flexibility enables us to apply these algorithms to closely minimize the quantity of interest in various online learning problems. In particular, for regret minimization with ℓp global costs, we obtain the first bounds with explicit dependence on p and the dimension d.
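As a minimal illustration of the FTRL template (with a plain Euclidean regularizer over a box decision set — a textbook sketch, not the refined distance-generating functions of the talk), one can write:

```python
import numpy as np

def ftrl_box(grads, eta=0.1):
    """Follow the Regularized Leader over the box [0,1]^d with an L2 regularizer.

    At each step, x_{t} = argmin_x <G_{t-1}, x> + ||x||^2 / (2*eta),
    where G_{t-1} is the sum of the gradients observed so far.
    Over a box, this argmin has the closed form clip(-eta * G, 0, 1).
    """
    G = np.zeros_like(grads[0], dtype=float)
    iterates = []
    for g in grads:
        x = np.clip(-eta * G, 0.0, 1.0)  # closed-form FTRL step
        iterates.append(x)
        G += g                            # accumulate observed gradients
    return iterates

# Example: with a constant loss gradient (1, -1), FTRL drives the iterate
# toward the corner (0, 1) of the box, the minimizer of the linear loss.
its = ftrl_box([np.array([1.0, -1.0])] * 100)
```

The talk's refinement replaces the Euclidean regularizer by other regularizers so that distance-like quantities other than the Euclidean one are controlled.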

Algorithmic aspects of core nonemptiness and core stability

ABSTRACT. In 1953, von Neumann and Morgenstern developed the concept of stable sets as a solution for cooperative games. Fifteen years later, Gillies popularized the concept of the core, which is a convex polytope when nonempty. In the following decade, Bondareva and Shapley independently formulated a theorem describing a necessary and sufficient condition for the nonemptiness of the core, using the mathematical objects known as minimal balanced collections.

We start our investigations of the core by implementing Peleg's (1965) inductive method for generating the minimal balanced collections as a computer program, and then an algorithm that checks whether a game admits a nonempty core.

In 2021, Grabisch and Sudhölter formulated a theorem describing a necessary and sufficient condition for a game to admit a stable core, using several mathematical objects and concepts such as nested balancedness, balanced subsets (which generalize balanced collections), exact and vital coalitions, etc.

In order to reformulate the aforementioned theorem as an algorithm, a set of coalitions has to be found that, among other conditions, determines the core of the game. We study core stability, geometric properties of the core and, in particular, such core-determining sets of coalitions. Furthermore, we describe a procedure for checking whether a subset of a set of nonnegative real numbers is balanced. Finally, we implement the algorithm as a computer program that checks whether an arbitrary game admits a stable core.
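The nonemptiness check mentioned above can be illustrated via the standard LP dual to the Bondareva–Shapley condition (a sketch on a toy 3-player game, not the minimal-balanced-collections implementation of the talk):

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def core_is_nonempty(n, v):
    """Check core nonemptiness of a TU-game (N, v).

    The core is nonempty iff the LP
        min  sum_i x_i   s.t.  sum_{i in S} x_i >= v(S)  for all proper S
    has optimal value <= v(N) (the LP dual of the Bondareva-Shapley
    balancedness condition).  v maps frozenset coalitions to worths.
    """
    players = range(n)
    A_ub, b_ub = [], []
    for k in range(1, n):                 # all proper nonempty coalitions
        for S in combinations(players, k):
            row = np.zeros(n)
            row[list(S)] = -1.0           # encode -x(S) <= -v(S)
            A_ub.append(row)
            b_ub.append(-v[frozenset(S)])
    res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n, method="highs")
    return res.fun <= v[frozenset(players)] + 1e-9
```

For a symmetric 3-player game with v = 0 on singletons and v(N) = 1, the core is nonempty when each pair's worth is 0.6 (the balanced collection of pairs gives 3 × 0.6/2 = 0.9 ≤ 1) and empty when it is 0.9 (3 × 0.9/2 = 1.35 > 1).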

14:15-15:45 Session 13D: Variational analysis
Location: A1.122
Characterizing the error bound properties of functions in metrizable topological vector spaces

ABSTRACT. The notion of error bound is a widely used concept in applied mathematics and has thereby received a lot of attention in recent years and decades. Indeed, it plays a key role in areas including variational analysis, mathematical programming, convergence properties of algorithms, sensitivity analysis, designing solution methods for non-convex quadratic problems, penalty functions, optimality conditions, weak sharp minima, stability and well-posedness of solutions, (sub)regularity and calmness of set-valued mappings, and subdifferential calculus. In this regard, Hoffman's estimation, as the starting point of the theory of error bounds, is very important and plays a considerable role in optimization. In this presentation, we will provide some sufficient criteria under which the function f, acting either between metrizable topological vector spaces or between metrizable subsets of some topological vector spaces, satisfies the error bound property at a point $\bar{x}$. Then, we will discuss the Hoffman estimation and obtain some results on the estimate of the distance to the solution set of a system of linear equalities. Applications are illustrated with examples. The talk is based on an extension of [1]. References: [1] M. Abassi, M. Théra, Strongly regular points of mappings, Fixed Point Theory Algorithms Sci. Eng., Paper No. 14 (2021), URL s13663-021-00699-z

Equilibrium Theory in Riesz Spaces under Imperfect Competition

ABSTRACT. In this paper, we consider an economy with infinitely many commodities and market failures such as increasing returns to scale and external effects or other-regarding preferences. The commodity space is a Riesz space with possibly no quasi-interior points in the positive cone, in order to include most of the relevant commodity spaces in economics. We extend previous definitions of the marginal pricing rule as the polar of a new tangent cone to the production set at a point of its boundary. In order to prove the existence of a marginal pricing equilibrium, we investigate the necessary extension of the so-called properness condition on non-convex technologies to deal with the emptiness of the quasi-interior of the positive cone. With this contribution, we obtain the same level of generality as the contributions on competitive equilibrium with convex production sets, as in \cite{florenzano_production_2001,Podczeck1996}.


\bibitem{florenzano_production_2001}Florenzano, M., Marakulin, M.: Production equilibria in vector lattices, Economic Theory, Springer, 17, 577-598 (2001).

\bibitem{Podczeck1996}Podczeck, K.: Equilibria in vector lattices without ordered preferences or uniform properness, Journal of Mathematical Economics, Elsevier, 25, 465-485 (1996).


Constant Along Primal Rays Conjugacies

ABSTRACT. The so-called l0 pseudonorm counts the number of nonzero components of a vector. It is standard in sparse optimization problems. However, as it is a discontinuous and nonconvex function, the l0 pseudonorm cannot be satisfactorily handled with the Fenchel conjugacy.

In this talk, we present the Euclidean Capra-conjugacy, which is suitable for the l0 pseudonorm, as the latter is "convex" in the sense of generalized convexity (equal to its biconjugate). We immediately derive a convex factorization property (the l0 pseudonorm coincides, on the unit sphere, with a convex lsc function) and variational formulations for the l0 pseudonorm.

In a second part, we provide different extensions: the above properties hold true for a class of conjugacies depending on strictly-orthant monotonic norms (including the Euclidean norm); they hold true for nondecreasing functions of the support (including the l0 pseudonorm); more generally, we will show how Capra-conjugacies are suitable to provide convex lower bounds for zero-homogeneous functions; we will also point out how to tackle the matrix rank function.

Finally, we present mathematical expressions of the Capra-subdifferential of the l0 pseudonorm, and graphical representations. This opens the way for possible suitable algorithms that we discuss.

14:15-15:45 Session 13E: Stochastic optimization II
Location: A1.128
Risk-Averse Stochastic Programming and Distributionally Robust Optimization Via Operator Splitting

ABSTRACT. This work deals with a broad class of convex optimization problems under uncertainty. The approach is to pose the original problem as one of finding a zero of the sum of two appropriate monotone operators, which is solved by the celebrated Douglas-Rachford splitting method. The resulting algorithm, suitable for risk-averse stochastic programs and distributionally robust optimization with fixed support, separates the random cost mapping from the risk function composing the problem's objective. Such a separation is exploited to compute iterates by alternating projections onto different convex sets. Scenario subproblems, free from the risk function and thus parallelizable, are projections onto the cost mappings' epigraphs. The risk function is handled in an independent and dedicated step consisting of evaluating its proximal mapping that, in many important cases, amounts to projecting onto a certain ambiguity set. Variables get updated by straightforward projections on subspaces through independent computations for the various scenarios. The investigated approach enjoys significant flexibility and opens the way to handle, in a single algorithm, several classes of risk measures and ambiguity sets.
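As a toy instance of the splitting scheme (a plain two-set feasibility problem where both proximal steps reduce to projections — not the risk-averse setting of the talk, whose sets and operators are different):

```python
import numpy as np

def proj_line(x):
    """Projection onto the set A = {x in R^2 : x1 = x2}."""
    m = 0.5 * (x[0] + x[1])
    return np.array([m, m])

def proj_ball(x, c=np.array([1.0, 0.0]), r=1.0):
    """Projection onto the set B = Euclidean ball of center c, radius r."""
    d = x - c
    n = np.linalg.norm(d)
    return x.copy() if n <= r else c + r * d / n

def douglas_rachford(z, n_iter=2000):
    """Douglas-Rachford splitting for the feasibility problem A ∩ B.

    The prox of an indicator function is the projection onto the set, so
    each iteration alternates the two projections with a reflection step.
    The shadow sequence P_A(z_k) converges to a point of A ∩ B.
    """
    for _ in range(n_iter):
        x = proj_line(z)
        z = z + proj_ball(2 * x - z) - x
    return proj_line(z)
```

In the talk's setting, one projection handles the scenario subproblems in parallel (epigraph projections) and the other evaluates the proximal mapping of the risk function.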

Parametric stochastic optimization for day-ahead and intraday co-management of a power unit

ABSTRACT. We consider renewable power units equipped with a battery and engaged in day-ahead load scheduling. In this context, the unit manager must submit a day-ahead power production profile prior to every operating day, and is engaged to deliver power accordingly. During the operating day, the unit manager is charged penalties if the delivered power differs from the submitted profile. First, we model the problem of computing the optimal production profile as a parametric multistage stochastic optimization problem. The production profile is modeled as a parameter which affects the value of the intra-day management of the power unit, where the photovoltaic production induces stochasticity. Second, we introduce parametric value functions for solving the problem. Under convexity and differentiability assumptions, we are able to compute the gradients of these value functions with respect to the parameter. When the differentiability assumption breaks, we propose two approximation methods. One is based on a smooth approximation with the Moreau envelope, the other one is based on a polyhedral approximation with the SDDP algorithm. We showcase applications in the context of the French non-interconnected power grid and benchmark our method against a Model Predictive Control approach.
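The Moreau-envelope smoothing mentioned above can be illustrated in one dimension on the absolute value, whose envelope has a closed form (the Huber function); this is only a sketch of the smoothing device, unrelated to the actual value functions of the talk:

```python
import numpy as np

def moreau_env_abs(x, mu):
    """Moreau envelope of f(u) = |u|:  min_u |u| + (x - u)^2 / (2*mu).

    The closed form is the Huber function: quadratic near 0 (|x| <= mu),
    affine further away -- a smooth (C^1) under-approximation of |x|.
    """
    return np.where(np.abs(x) <= mu, x ** 2 / (2 * mu), np.abs(x) - mu / 2)

def moreau_env_numeric(x, mu, grid=np.linspace(-10, 10, 200001)):
    """Brute-force evaluation of the envelope by minimizing over a fine grid."""
    return np.min(np.abs(grid) + (x - grid) ** 2 / (2 * mu))
```

As mu decreases, the envelope converges to |x| while keeping a Lipschitz gradient of constant 1/mu, which is what makes gradients of the smoothed value functions available.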

Multistage Stochastic Linear Programming and Polyhedral Geometry

ABSTRACT. We show that the multistage linear problem (MSLP) with an arbitrary cost distribution is equivalent to an MSLP on a finite scenario tree. Indeed, by studying the polyhedral structure of MSLP, we show that the cost distribution can be replaced by a discrete cost distribution, taking the expected value of the cost on each cone of a normal fan. In particular, we show that the expected cost-to-go functions are polyhedral and affine on the cells of a chamber complex, which is independent of the cost distribution. This allows us to write a simplex method on the chamber complex to solve 2-stage linear problems. We also give new complexity results.

14:15-15:45 Session 13F: Large scale optimization
Location: A1.133
Rates of Convergence of First-Order Algorithms for Assignment Problems

ABSTRACT. We compare several accelerated and non-accelerated first-order algorithms for computational optimal transportation or optimal assignment problems. Up to one exception, based on the notion of area convexity, which produces a solution optimal up to an error ε in O(log n ||C|| / ε) iterations, we find that all up-to-date methods share the same O((√n log n ||C|| / ε)⋅n²) complexity (n² being the cost per iteration), where n is the size of the marginals and ||C|| the largest element of the n×n cost matrix. We discuss the merits of smoothing, accelerated methods, and line-search approaches. We also introduce Bregman kernel/prox functions, which make the algorithms stable and improve the quality of the solutions at very low cost.
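As a reference point for the smoothing methods discussed, a minimal Sinkhorn iteration for entropy-regularized transport (the textbook scheme, not the Bregman kernels introduced in the talk):

```python
import numpy as np

def sinkhorn(C, a, b, reg=0.05, n_iter=2000):
    """Entropy-regularized optimal transport between marginals a and b.

    Alternately rescales the rows and columns of the Gibbs kernel
    K = exp(-C / reg) so that the plan P = diag(u) K diag(v) matches
    the prescribed marginals (Sinkhorn-Knopp iteration).
    """
    K = np.exp(-C / reg)
    u = np.ones_like(a, dtype=float)
    for _ in range(n_iter):
        v = b / (K.T @ u)   # match column marginals
        u = a / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]   # transport plan
```

For a small cost matrix with cheap diagonal, the computed plan concentrates its mass on the diagonal, approaching the unregularized assignment as reg decreases.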

Majorization-Minimization algorithms: New convergence results in a stochastic setting

ABSTRACT. Many problems in image processing or in machine learning can be formulated as the minimization of a loss function F whose nature is not totally deterministic. In the differentiable context, such a situation arises when the gradient ∇F can only be evaluated up to a stochastic error. In this talk, we will focus on the impact of noisy gradient perturbations on the convergence of majorization-minimization (MM) approaches. The latter are efficient and effective optimization algorithms that benefit from solid theoretical foundations and strong practical performance [2, 4]. Our talk presents the convergence analysis of a versatile MM scheme called SABRINA (StochAstic suBspace majoRIzation-miNimization Algorithm) [1], generalizing our previous work [3]. Almost-sure convergence results and convergence rate properties are obtained. The practical performance of SABRINA is illustrated by means of several numerical experiments.

FISTA restart using an automatic estimation of the growth parameter

ABSTRACT. In this paper, we propose a novel restart scheme for FISTA (Fast Iterative Shrinkage-Thresholding Algorithm). This method, which is a generalization of Nesterov's accelerated gradient algorithm, is widely used in the field of large convex optimization problems, and it provides fast convergence results under a strong convexity assumption. These convergence rates can be extended to weaker hypotheses, such as the Łojasiewicz property, but this requires prior knowledge about the function of interest. In particular, most of the schemes providing fast convergence for non-strongly convex functions satisfying a quadratic growth condition involve the growth parameter, which is generally unknown. Recent works show that restarting FISTA can ensure fast convergence for this class of functions without requiring any geometry parameter. We improve these restart schemes by providing a better asymptotic convergence rate and by requiring a lower computational cost. We present numerical results emphasizing that our method is efficient, especially in terms of computation time.
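For context, the classical gradient-based adaptive restart of FISTA (the baseline that such schemes improve on — a sketch for the smooth unconstrained case, where the proximal step is the identity) looks like:

```python
import numpy as np

def fista_restart(grad, x0, L, n_iter=500):
    """FISTA with gradient-based adaptive restart (smooth unconstrained case).

    Standard FISTA momentum, reset (t = 1, y = x) whenever the gradient at y
    makes an obtuse angle with the last step, i.e. momentum is no longer
    a descent direction.  grad: callable, L: Lipschitz constant of grad.
    """
    x = np.asarray(x0, dtype=float).copy()
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        g = grad(y)
        x_new = y - g / L                        # gradient step at extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        if g @ (x_new - x) > 0.0:                # restart test
            t_new = 1.0
            y = x_new
        else:                                    # usual FISTA extrapolation
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

On an ill-conditioned quadratic, the restart suppresses the oscillations of plain FISTA and recovers linear convergence without knowing the growth parameter.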

14:15-15:15 Session 13G: Bilevel Programming – Applications to Energy Management II (invited session organized by Miguel Anjos and Luce Brotcorne)
Location: Amphi II
A Quadratic Regularization for the Multi-Attribute Unit-Demand Envy-Free Pricing Problem

ABSTRACT. We consider a profit-maximizing model for pricing contracts as an extension of the unit-demand envy-free pricing problem: customers aim to choose a contract maximizing their utility based on a reservation price and multiple price coefficients (attributes). Classical approaches suppose that the customers have deterministic utilities; then, the response of each customer is highly sensitive to price since it concentrates on the best offer. To circumvent the intrinsic instability of deterministic models, we introduce a quadratically regularized model of customer's response, which leads to a quadratic program under complementarity constraints (QPCC). This provides an alternative to the classical logit approach, still allowing to robustify the model, while keeping a strong geometrical structure. In particular, we show that the customer's response is governed by a polyhedral complex, in which every polyhedral cell determines a set of contracts which is effectively chosen. Moreover, the deterministic model is recovered as a limit case of the regularized one. We exploit these geometrical properties to develop a pivoting heuristic, which we compare with implicit or non-linear methods from bilevel programming, showing the effectiveness of the approach. Throughout the paper, the electricity provider problem is our guideline, and we present a numerical study on this application case.

Optimal pricing for electric vehicle charging with customer waiting time

ABSTRACT. We propose a bilevel optimization model to determine the optimal pricing for electric vehicle charging within a public charging station system, taking into account the waiting time of customers. We assume that the locations of the charging stations are fixed, and model the waiting time without using classical queuing theory. In the upper level of the bilevel model, the decision maker sets the price of electricity and the amount of energy available at each station. The latter quantity depends on the amount of renewable energy available at each time period. In the lower level, electric vehicle users select a charging station and a time for recharging the vehicle, depending on individual preferences. We present two linear models for this problem and explore how to solve them using mixed-integer bilevel optimization methods.

A Multi-Leader-Follower Game for Energy Demand-Side Management
PRESENTER: Didier Aussel

ABSTRACT. A Multi-Leader-Follower Game (MLFG) corresponds to a bilevel problem in which the upper level and the lower level are defined by noncooperative Nash competition among the players acting at the upper level (the leaders) and, at the same time, among the ones acting at the lower level (the followers). MLFGs are known to be complex problems, but they also provide perfect models to describe hierarchical interactions among various actors of real-life problems. In this work, we focus on a class of MLFGs modelling the implementation of demand-side management in an electricity market through price incentives, leading to the so-called Bilevel Demand-Side Management problem (BDSM). Our aim is to propose some innovative reformulations/numerical approaches to efficiently tackle this difficult problem. Our methodology is based on the selection of specific Nash equilibria of the lower level through a precise analysis of the intrinsic characteristics of (BDSM).

15:45-16:15 Coffee Break
16:15-17:45 Session 14A: Applications in Energy
Location: A1.133
Optimizing the production schedule and energy procurement of an industrial site

ABSTRACT. We propose a mixed-integer stochastic model to simultaneously optimize the production schedule of an industrial site and its energy procurement. The difficulty of this problem is twofold: on the one hand, the binary variables needed to model the production constraints; on the other hand, the stochasticity of the renewable energy supply.

We begin with a study of the deterministic version of the problem, focusing on its complexity, its convexity, and its parametrization.

We first consider two classical methods to solve this problem: dynamic programming and model predictive control. These methods require significant computation time. We therefore explore several heuristics that exploit the fast resolution of the continuous relaxation of the problem by the stochastic dual dynamic programming (SDDP) algorithm: a first repair heuristic, which consists in rounding the relaxed continuous solution in a clever way; and a second heuristic, which uses an under-approximation of the cost-to-go functions by the cuts found by SDDP. Finally, whereas dynamic programming decomposes a problem with T time steps into T one-step problems, we consider, in order to better account for the integrality constraints, a dynamic programming decomposition into problems over two time steps. This idea can also be applied to the heuristic with cuts.

The proposed approaches are numerically tested on a real-world problem.
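The cut-based idea can be sketched in a few lines: the future cost function is under-approximated by affine cuts, as produced by SDDP, and a one-step decision is taken against this approximation. All numbers, the candidate decision set, and the cuts below are illustrative, not data from the talk.

```python
# Sketch: one-step lookahead against a polyhedral under-approximation of
# the future cost function by affine cuts (as produced by SDDP).
# Illustrative data only.

def future_cost(stock, cuts):
    """Outer approximation: max over affine cuts slope*stock + intercept."""
    return max(slope * stock + intercept for slope, intercept in cuts)

def best_decision(stock, decisions, cuts, price=1.0):
    """Pick the purchase u minimizing immediate cost plus the approximate
    cost-to-go evaluated at the resulting stock level."""
    return min(decisions,
               key=lambda u: price * u + future_cost(stock + u, cuts))

cuts = [(-2.0, 10.0), (-0.5, 4.0)]   # (slope, intercept) pairs
u = best_decision(stock=2.0, decisions=[0.0, 1.0, 2.0, 3.0], cuts=cuts)
```

With these numbers, buying two units is optimal: up to that stock level, the steep cut makes additional stock worth more than its immediate price.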

Stochastic Two-stage Optimisation for the Operation of a Multi-services Virtual Power Plant

ABSTRACT. We present a method to operate virtual power plants (VPP) under uncertain energy production. This work is funded by the EU-SysFlex H2020 project, which aims at demonstrating the feasibility of operating a multi-service VPP. The demonstration focuses on the control of a 12-MW wind farm, a 2.3-MW/1h battery, and photovoltaic (PV) panels to provide frequency support and energy arbitrage. In our case study, the uncertainty comes from the wind farm and PV production. The VPP needs to take production decisions on energy markets: bids are made on the SPOT and FCR (Frequency Containment Reserve) markets the day before at 12:00. If the offer is not met, penalties must be paid, or the intraday market (private sale) can be used. The chosen approach is two-stage stochastic programming [1], since an MILP formulation is possible and the problem has two levels of decisions. We use MILP solvers because they are efficient and provide proofs of optimality. The uncertainty is represented by equiprobable scenarios of wind and PV generation. These were created from real-time probabilistic forecasts (quantiles) and copulas [2], generated from historical data to capture the time dependency. In our model, the first-stage variables, which take the same value in every scenario, are the bids on the markets. The second-stage variables are the recourse variables, i.e. decisions taken once the realized scenario is known, such as the decisions on the intraday market. We compare the stochastic schedule with a deterministic schedule and a perfect schedule, the latter being computed with advance knowledge of what will occur. The comparison is made on financial criteria such as gains and penalties on the markets. We use a simulation platform [3] to simulate what would actually occur and obtain more precise financial results.
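The two-stage structure can be illustrated on a toy version of the bidding problem, where the first-stage variable is the day-ahead bid and the recourse is the shortfall penalty. Prices, penalties, and scenarios are invented for the example and are not the EU-SysFlex data.

```python
# Toy two-stage stochastic bidding problem: choose a day-ahead bid q
# (first stage), pay a penalty on the shortfall in each production
# scenario (second stage). Illustrative numbers only.

def expected_profit(q, scenarios, price=50.0, penalty=80.0):
    """Average profit of bidding q when production p is scenario-dependent."""
    profits = []
    for p in scenarios:
        delivered = min(q, p)              # energy actually delivered
        shortfall = max(q - p, 0.0)        # part of the bid not covered
        profits.append(price * delivered - penalty * shortfall)
    return sum(profits) / len(scenarios)

scenarios = [8.0, 10.0, 12.0]              # equiprobable production (MWh)
bids = [q / 2 for q in range(0, 31)]       # candidate bids 0, 0.5, ..., 15
best_bid = max(bids, key=lambda q: expected_profit(q, scenarios))
```

With these numbers, the optimal bid hedges between scenarios: bidding the median production (10 MWh) beats both the cautious bid (8) and the optimistic one (12).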

Hydropower Datasets For SMS++

ABSTRACT. Having realistic data is crucial to adequately test optimisation software, but unfortunately such data is usually not publicly available and/or cannot be published. In this project, cascading hydro system datasets based on hydro systems in Norway were created using only data that is publicly available (online). The datasets are provided in NetCDF4 format so that they can easily be used with SMS++, and model spreadsheets together with Python code make it possible to generate the datasets from Excel. Although the datasets created are based on the structure of the "HydroUnitBlock" blocks in SMS++, the data could be useful for research and applications not related to SMS++.

16:15-17:45 Session 14B: Methodological aspects in OR and ML
Location: Amphi II
Learning Structured Approximations of Operations Research Problems

ABSTRACT. The design of algorithms that leverage machine learning alongside combinatorial optimization techniques is a young but thriving area of operations research. Although trends are emerging, the literature has not yet converged on the proper way of combining these two techniques or on the predictor architectures that should be used. We focus on operations research problems for which no efficient algorithms are known, but that are variants of classic problems for which efficient algorithms exist. Elaborating on recent contributions that suggest using a machine learning predictor to approximate the variant by the classic problem, we introduce the notion of structured approximation of an operations research problem by another. We provide a generic learning algorithm to fit these approximations. This algorithm requires only instances of the variant in the training set, unlike previous learning algorithms that also require the solutions of these instances. Using tools from statistical learning theory, we prove a result showing the convergence speed of the estimator, and deduce an approximation ratio guarantee on the performance of the algorithm obtained for the variant. Numerical experiments on a single-machine scheduling problem and a stochastic vehicle scheduling problem from the literature show that our learning algorithm is competitive with algorithms that have access to optimal solutions, leading to state-of-the-art algorithms for the variants considered.

Linear Lexicographic Optimization and Preferential Bidding System

ABSTRACT. We propose a method based on linear lexicographic optimization for solving the problem of assigning pairings to pilots in an airline using a preferential bidding system. Contrary to what is usually done for this problem, this method does not follow a sequential procedure. It relies instead on an approach that is closer to the standards of mathematical programming: first solve the continuous relaxation; second find a good integer solution.

Solution of Fractional Quadratic Programs on the Simplex and Application to the Eigenvalue Complementarity Problem

ABSTRACT. Fractional programming is a classical topic in optimization with a large number of application areas, such as economics, finance, communication, and engineering, and has been extensively studied for several decades [2]. Although much effort has been made to widen the range of applications by introducing generalized models such as the sum-of-ratios problem and its variants, see e.g. [2], Fractional Quadratic Programs (FQP) and Fractional Linear Quadratic Programs (FLQP) are still considered the most fundamental and important classes of fractional programs. A globally optimal solution of the FLQP can be computed by the classical Dinkelbach's method [1]. We introduce an implementation of this algorithm for computing a global maximum of an FLQP on the simplex that employs an efficient Block Principal Pivoting algorithm in each iteration. A new sequential FLQP algorithm is introduced for computing a stationary point of an FQP on the simplex, and global convergence for this algorithm is established. This sequential algorithm is recommended for the solution of the symmetric Eigenvalue Complementarity Problem (EiCP), as this problem is equivalent to the computation of a stationary point of an FQP on the simplex.

Our computational experience indicates that the implementation of Dinkelbach's method for the FLQP and the sequential FLQP algorithm are quite efficient in practice. We also extend the sequential FLQP algorithm to solve the nonsymmetric EiCP. Since this method solves a special Variational Inequality (VI) problem in each iteration, it can be considered a sequential VI algorithm. Although the convergence of this algorithm has yet to be established, preliminary computational experience indicates that the sequential VI algorithm is quite a promising technique for the solution of the nonsymmetric EiCP.
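As an illustration of the classical scheme, here is Dinkelbach's method on a one-dimensional fractional program with a grid inner solver. The objective is invented and much simpler than an FLQP on the simplex, and the block principal pivoting inner solver of the talk is replaced by brute force.

```python
# Dinkelbach's method for max f(x)/g(x) with g > 0: repeatedly maximize
# the parametric objective f(x) - lam*g(x) and update lam = f(x)/g(x).
# Toy 1D problem, solved by grid search at each iteration.

def dinkelbach(f, g, candidates, tol=1e-12, max_iter=100):
    lam, x = 0.0, candidates[0]
    for _ in range(max_iter):
        x = max(candidates, key=lambda t: f(t) - lam * g(t))
        new_lam = f(x) / g(x)
        done = abs(new_lam - lam) < tol   # parametric value has stabilized
        lam = new_lam
        if done:
            break
    return x, lam

f = lambda x: 1.0 - (x - 0.5) ** 2        # concave numerator
g = lambda x: 1.0 + x                     # positive affine denominator
grid = [i / 1000 for i in range(1001)]
x_star, ratio = dinkelbach(f, g, grid)
```

On this instance the iterates converge in a handful of steps to the maximizer of the ratio near x = 0.118, with optimal ratio about 0.764.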

References
[1] W. Dinkelbach, On nonlinear fractional programming, Management Science 13 (1967), pp. 492--498.
[2] S.C. Fang, D.Y. Gao, R.L. Sheu, W. Xing, Global optimization for a class of fractional programming problems, Journal of Global Optimization 45 (2009), pp. 337--353.

16:15-17:45 Session 14C: Optimal control and applications, II (invited session organized by Jean-Baptiste Caillau and Yacine Chitour)
Location: Amphi I
Worst exponential decay rate for degenerate gradient flows subject to persistent excitation

ABSTRACT. In this paper we estimate the worst rate of exponential decay of degenerate gradient flows $\dot x = -S x$, issued from adaptive control theory \cite{anderson1986}. Under \emph{persistent excitation} assumptions on the positive semi-definite matrix $S$, we provide upper bounds for this rate of decay consistent with previously known lower bounds, and analogous stability results for more general classes of persistently excited signals. The strategy of proof consists in relating the worst decay rate to optimal control questions and studying their solutions in detail.

As a byproduct of our analysis, we also obtain estimates for the worst $L_2$-gain of the time-varying linear control systems $\dot x=-cc^\top x+u$, where the signal $c$ is \emph{persistently excited}, thus solving an open problem posed by A. Rantzer in 1999, cf.~\cite[Problem~36]{Rantzer1999}.

Optimal control problem on Riemannian manifolds under probability knowledge of the initial condition

ABSTRACT. We consider an optimal control problem on a compact Riemannian manifold M with imperfect information on the initial state of the system. The lack of information is modeled by a Borel probability measure along which the initial state is distributed. This problem is of fundamental importance both for real-world applications and for mathematical theory.

Indeed, this setting appears in the modelling of many optimal control problems related to mechanics, robotics, or quantum systems. The initial condition is not perfectly known, whether due to the lack of measurements, measurement errors, or the very nature of the system itself, meaning that the uncertainties are inevitable. As for the theoretical interest, since the optimal control problem with partial information is defined on the 2-Wasserstein space over the Riemannian manifold, P2(M), we need to define proper tools in this space to describe the problem.

Similarly to optimal control problems with perfect information [2], we introduce the value function of this problem and an associated Hamilton-Jacobi-Bellman (HJB) equation defined on P2(M). We propose to extend the techniques commonly used in viscosity theory [1] to the space P2(M) in order to prove that the value function is the unique viscosity solution to the HJB equation. In particular, we want to define a notion of “smooth” solutions to the HJB equation, define the set of test functions for viscosity super- and subsolutions, prove a local comparison principle that guarantees uniqueness of the viscosity solution, and finally prove that the value function satisfies a dynamic programming principle that guarantees existence of the solution.

The main result is that the value function of the problem is the unique viscosity solution to an HJB equation. The notion of viscosity is defined by exploiting the Riemannian-like structure on P2(M).

References
[1] Guy Barles, Solutions de viscosité des équations de Hamilton-Jacobi, Collection SMAI, 1994.
[2] Martino Bardi and Italo Capuzzo-Dolcetta, Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations, Springer Science & Business Media, 2008.

Fuel Optimal Planetary Landing with Thrust Pointing and Glide-slope Constraints

ABSTRACT. The problem of landing a reusable launch vehicle is usually treated as an optimal control problem, with the objective of minimizing fuel consumption. It has been shown, thanks to the Pontryagin Maximum Principle, that the optimal bounded control for the unconstrained 3D problem is of Bang-Off-Bang form, i.e. a period of full thrust, followed by a period of no thrust, and then full thrust until touchdown. More representative problems with state and control constraints are often treated only numerically, as in [2] for example. Theoretical results on the structure of the control are rarely provided and problems remain open. Yet, leveraging such results, indirect shooting methods can be used to obtain more precise numerical results than direct methods. We considered the landing problem with a thrust pointing constraint and a glide-slope constraint on the position of the vehicle [2], which may be imposed for practical and safety reasons, as it prevents the vehicle from getting too close to the ground before reaching the target point. The difficulty when applying the Pontryagin Maximum Principle with state or control constraints is that the adjoint vectors may be discontinuous, may have complex expressions, and the control may not be defined everywhere on the optimal trajectory. By applying the version of the maximum principle given in [1, Th. 9.5.1], we have studied the variations of the switching function, after clarifying its derivative and showing that it is nondecreasing. We have thus deduced that the optimal control is still of Bang-Off-Bang form, possibly with singular arcs between the bang arcs. Moreover, if the glide-slope constraint is replaced by a positive altitude constraint, we can show that for generic initial conditions there is no singular arc. Finally, for a simplified 2D problem, we have shown that, apart from the final point, only one contact or one boundary arc with the state constraint can occur.

16:15-17:45 Session 14D: New numerical methods: from PDE to trajectory planning
Location: A1.116
Control of state-constrained McKean-Vlasov equations: application to portfolio selection

ABSTRACT. We consider the control of McKean-Vlasov dynamics (or mean-field control) with probabilistic state constraints. We rely on a level-set approach which provides a representation of the value function of the constrained problem through an unconstrained one with exact penalization and a running maximum cost. Our approach is then adapted to the common noise setting. Our work extends (Bokanowski, Picarelli, and Zidani, SIAM J Control Optim 54.5 (2016), pp. 2568–2593) and (Bokanowski, Picarelli, and Zidani, Appl Math Optim 71 (2015), pp. 125–163) to a mean-field setting. In contrast with these works, which study almost-sure constraints, in the case of probabilistic constraints without common noise we do not require the existence of optimal controls for the auxiliary problem. The reformulation as an unconstrained problem is particularly suitable for the numerical resolution of the problem, achieved thanks to an extension of a machine learning algorithm from (Carmona, Laurière, arXiv:1908.01613, 2019). An application focuses on a mean-variance portfolio selection problem with a conditional value-at-risk constraint on the wealth. We are also able to directly solve numerically the primal Markowitz problem in continuous time without relying on duality.

A multilevel fast-marching method
PRESENTER: Shanqing Liu

ABSTRACT. We introduce a numerical method for approximating the solutions of a class of static Hamilton-Jacobi-Bellman equations arising from minimum-time state-constrained optimal control problems. We are interested in computing the value function and an optimal trajectory between two given points only. We show that this is equivalent to solving the state-constrained equation on the subdomain which contains the true optimal trajectories between the two points.

Our algorithm takes advantage of this property. Instead of finding the optimal trajectories directly by solving a discretized system on a fine grid, we first approximately locate the subdomain containing the optimal trajectories by solving the system on a coarse grid. Then, we discretize that subdomain with a fine grid and solve the discretized system on it. The computation of the approximate value functions on each grid is based on the fast-marching method.

We show that, using our algorithm, the error estimate for the computation of the value function is as good as the one obtained by discretizing the whole domain with the fine grid. Moreover, we show that the number of arithmetic operations as well as the memory needed to reach an error "e" are of the order of "(1/e)^d", whereas classical methods need at least "(1/e)^(2d)". Under regularity conditions on the dynamics and the value function, this complexity bound reduces to "(C^d)*log(1/e)".
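The coarse-to-fine idea can be mimicked on a toy unit-cost grid with Dijkstra in place of fast marching: solve on a coarse grid, keep a tube of fine cells around the coarse optimal path, and solve the fine problem only inside the tube. Grid size and tube radius are illustrative choices, and this sketch replaces the HJB discretization of the talk by a plain shortest-path problem.

```python
import heapq

def shortest_path(nodes, src, dst):
    """Dijkstra on the unit-cost 4-neighbour grid restricted to `nodes`;
    returns (distance, path from src to dst)."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist[u]:
            continue
        i, j = u
        for v in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if v in nodes and d + 1 < dist.get(v, float("inf")):
                dist[v], prev[v] = d + 1, u
                heapq.heappush(heap, (d + 1, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return dist[dst], path

N = 16                                           # fine grid is N x N
fine = {(i, j) for i in range(N) for j in range(N)}
coarse = {(i, j) for i in range(N // 2) for j in range(N // 2)}

# 1) solve cheaply on the coarse grid
_, cpath = shortest_path(coarse, (0, 0), (N // 2 - 1, N // 2 - 1))
# 2) keep only fine cells in a tube around the scaled coarse path
tube = {(i, j) for (i, j) in fine
        if any(max(abs(i - 2 * ci), abs(j - 2 * cj)) <= 2
               for ci, cj in cpath)}
# 3) solve the fine problem restricted to the tube
d_tube, _ = shortest_path(tube, (0, 0), (N - 1, N - 1))
d_full, _ = shortest_path(fine, (0, 0), (N - 1, N - 1))
```

Both distances coincide (here 2(N-1) = 30) while the tube contains far fewer cells than the full fine grid, which is the source of the complexity gain.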

A novel mixed-integer description within the combined use of MPC and polyhedral potential field

ABSTRACT. We extend the polyhedral sum function notion to the multi-obstacle case and show that the polyhedral support of its piecewise affine representation comes from an associated hyperplane arrangement. We exploit this combinatorial structure to provide equivalent mixed-integer representations and to model a repulsive potential term for subsequent use in a Model Predictive Control (MPC) formulation. Hence, our main goal is to exploit the problem structure in order to simplify the end formulation. Specifically, the main contributions are: i) extending the sum function, defined for the single-obstacle case in \cite{Huy2020Ocean}, to a multi-obstacle environment; ii) exploiting the combinatorial structure of the problem (polyhedral shapes induce a hyperplane arrangement) to provide a novel mixed-integer description of the sum function; iii) implementing a repulsive potential field component in a mixed-integer MPC scheme in order to solve the navigation problem.
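The link between the hyperplane arrangement and the mixed-integer description can be shown in miniature: each hyperplane contributes one binary, and the resulting sign vector indexes the cell of the arrangement containing the point. The two hyperplanes below are a hypothetical example.

```python
# Sketch: cells of a hyperplane arrangement indexed by binary sign
# vectors, the combinatorial object behind a binary-per-hyperplane
# mixed-integer description. Illustrative hyperplanes a.x = b.

def sign_vector(x, hyperplanes):
    """One binary per hyperplane: 1 if a.x >= b, else 0."""
    return tuple(int(a[0] * x[0] + a[1] * x[1] >= b)
                 for a, b in hyperplanes)

H = [((1.0, 0.0), 0.5), ((0.0, 1.0), 0.5)]   # two axis-aligned lines
points = [(i / 4, j / 4) for i in range(5) for j in range(5)]
cells = {sign_vector(p, H) for p in points}  # 4 cells for 2 generic lines
```

Two generic hyperplanes in the plane create four cells, each identified by one assignment of the two binaries, which is exactly what a mixed-integer model enumerates.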

16:15-17:45 Session 14E: Statistics and optimization
Location: A1.128
Distributed Zero-Order Optimization under Adversarial Noise

ABSTRACT. We study the problem of distributed zero-order optimization for a class of strongly convex functions, formed by the average of local objectives associated to different nodes in a prescribed network. We propose a distributed zero-order projected gradient descent algorithm to solve the problem. Exchange of information within the network is permitted only between neighbouring nodes. An important feature of our procedure is that it can query only function values, subject to a general noise model that does not require zero-mean or independent errors. We derive upper bounds for the average cumulative regret and the optimization error of the algorithm, which highlight the roles played by a network connectivity parameter, the number of variables, the noise level, the strong convexity parameter, and the smoothness properties of the local objectives. The bounds indicate some key improvements of our method over the state of the art, both in the distributed and in the standard zero-order optimization settings. We also comment on lower bounds and observe that the dependence of the bounds on certain function parameters is nearly optimal.
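A minimal (non-distributed, noiseless) version of the zero-order idea: the gradient is never queried, only function values, from which a finite-difference surrogate gradient drives the descent. Objective, step size, and budget are illustrative; the talk's setting additionally involves a network, projections, and adversarial noise.

```python
# Zero-order gradient descent sketch: estimate the gradient of f from
# function evaluations only (central differences), then take gradient steps.

def zo_gradient(f, x, delta=1e-4):
    """Estimate grad f(x) with 2*len(x) function evaluations."""
    grad = []
    for k in range(len(x)):
        xp = [xi + (delta if j == k else 0.0) for j, xi in enumerate(x)]
        xm = [xi - (delta if j == k else 0.0) for j, xi in enumerate(x)]
        grad.append((f(xp) - f(xm)) / (2 * delta))
    return grad

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # strongly convex
x = [0.0, 0.0]
for _ in range(200):
    g = zo_gradient(f, x)
    x = [xi - 0.1 * gi for xi, gi in zip(x, g)]        # gradient step
```

On a quadratic, central differences are exact, so the iterates contract toward the minimizer (1, -2) at the usual linear rate of gradient descent.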

Methods for predicting remaining useful life from censored data
PRESENTER: Alonso Silva

ABSTRACT. The application setting of this work is the estimation of remaining useful life from medium- or large-scale datasets with a subset (10%-50%) of right-censored samples. The main task of survival analysis is to estimate the probability distribution of the time until an event of interest occurs. Right censoring occurs when some samples have not encountered the event of interest during the follow-up period. We restrict our study to endogenous and exogenous explanatory variables that are constant over time. Traditional regression and classification methods are not suited to jointly capturing the event-related and temporal aspects. We propose to compare parametric, semi-parametric, and machine learning models for estimating remaining useful life. We conducted an extensive comparative study of these different lifetime prediction methods using both public and synthetic data. Synthetic data provide the ground truth for censored samples and allow us to control the number of available samples, the type of censoring, and the censoring ratio. The goal here is not to propose a new solution, but to identify the most promising existing solutions and to analyse the impact of censoring on the achieved performance.
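As a reference point for the compared methods, the nonparametric baseline of survival analysis can be written in a few lines: the Kaplan-Meier estimator below handles right censoring by removing censored subjects from the risk set without making the survival curve drop. The tiny dataset is illustrative and, for brevity, tied event times are not handled.

```python
# Minimal Kaplan-Meier estimator for right-censored data (assumes no
# tied times, for brevity). events[i] = 1 if the event was observed,
# 0 if the sample is right-censored at times[i].

def kaplan_meier(times, events):
    """Return the survival curve as a list of (t, S(t)) at event times."""
    data = sorted(zip(times, events))
    at_risk, surv, curve = len(data), 1.0, []
    for t, observed in data:
        if observed:                     # event: survival probability drops
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1                     # either way, one fewer at risk
    return curve

curve = kaplan_meier([1.0, 2.0, 3.0, 4.0], [1, 0, 1, 0])
```

Here the censored subject at t = 2 leaves the risk set without lowering the curve, so the event at t = 3 is weighted against only one subject at risk.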

Experimental Comparison of Ensemble Methods and Time-to-Event Analysis Models Through Two Different Scores

ABSTRACT. Time-to-event analysis is a branch of statistics whose popularity has grown over the last decades due to its many application fields. We present in this paper a comparison between semi-parametric, parametric, and machine learning methods through two different scores applied to three datasets. We also present an aggregation method for time-to-event analysis that on average outperforms the best predictor. Finally, we present simulation results in order to enrich the comparison between the two scores while varying the number of samples and the percentage of censored data.

16:15-17:45 Session 14F: Forecasting Customer Load
Location: A1.122
VIKING: Variational Bayesian Variance Tracking Winning an Electricity Load Forecasting Competition

ABSTRACT. We consider the problem of online time series forecasting, motivated by the prediction of the electricity load. Indeed, forecasting the electricity demand is a crucial task for grid operators and has been extensively studied by the time series community as well as the statistics and machine learning communities. As the behaviour of the consumption evolves over time, it is necessary to develop time-varying forecasting models, that is, models that improve themselves sequentially by taking into account the most recent observations. Our methods resulted in the winning strategy of a competition on post-COVID electricity load forecasting, which we use as an illustration of our theoretical framework.

First, we present the data set, as well as classical time-invariant forecasting models: auto-regressive, linear regression, generalized additive model, and multi-layer perceptron.

Second, we introduce a state-space representation to obtain adaptive forecasting methods. We start by using this state-space representation with fixed state and space noise variances, in which setting we use the celebrated Kalman filter. We see it as a second-order stochastic gradient descent algorithm on the quadratic loss.

Finally, we introduce a novel approach, named VIKING, to deal with state-space model inference under unknown state and space noise variances. An augmented dynamical model is considered in which the variances are represented as latent variables. The inference relies on the variational Bayesian approach. We extend the parallel with gradient descent to interpret VIKING as a way to adapt the step size in a second-order stochastic gradient descent.
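The "Kalman filter as second-order stochastic gradient descent" view is easiest to see in the scalar case: for online linear regression on the quadratic loss, the posterior variance P acts as an adaptive step size. The recursion below is the standard scalar Kalman/recursive-least-squares update, run on invented noiseless data rather than the electricity-load model.

```python
# Scalar Kalman update for online regression y_t = theta * x_t + noise:
# P plays the role of a second-order (adaptive) step size; q is the
# state-noise variance that makes the model time-varying.

def kalman_step(theta, P, x, y, q=0.0):
    P += q                               # state noise inflates uncertainty
    gain = P * x / (1.0 + P * x ** 2)    # observation noise variance = 1
    theta += gain * (y - theta * x)      # gradient-like correction
    P -= gain * x * P                    # uncertainty shrinks after update
    return theta, P

theta, P = 0.0, 100.0                    # vague prior on the parameter
for t in range(1, 50):
    x = (t % 5) + 1.0                    # toy covariate stream
    theta, P = kalman_step(theta, P, x, 2.0 * x)   # true theta = 2
```

With q = 0 this is time-invariant recursive least squares; a positive q keeps P, hence the step size, from vanishing, which is what allows the model to track a drifting parameter.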

Hyperparameter optimization of deep neural networks: Application to the forecasting of energy consumption

ABSTRACT. This project introduces a step-by-step methodology for the Hyperparameter Optimization problem (HPO) of a deep convolutional neural network, applied to the forecasting of energy consumption in France.

The problem is characterized by an expensive, noisy, black-box function over a large-scale, mixed search space. To tackle the HPO problem, we used exploration and exploitation strategies, surrogate-based optimization, chaotic optimization, and fractal decomposition algorithms. Finally, we applied ensemble methods and online aggregation to the best solutions found during the optimization phase. The results of our methodology are promising: by automating the HPO problem, we found better solutions and meta-models than the initial model.

16:15-17:45 Session 14G: Traveling salesman problem
Location: A1.134
MIP formulations for OWA Traveling Salesman Problems

ABSTRACT. We study a variant of the traveling salesman problem (TSP) where both the total cost of a tour and some balance (fairness) among its edge costs are considered. The main challenge is that the sum aggregation function of the standard TSP cannot guarantee any degree of balance among the costs of the edges composing the tour. In this paper, using the Generalized Gini Index (GGI), a special case of the ordered weighted averaging (OWA) function, we define a variant of the TSP, the OWA TSP, whose objective function trades off efficiency against fairness among the edge costs of the solution tour. Although the GGI is a non-linear aggregation function, it can be cast into Mixed-Integer Programs (MIP) by several existing linearization methods [1, 3]. The focus of our work is to exploit representations of Hamiltonian cycles to reduce the size of the MIP formulations linearizing the OWA TSP and to improve their computational efficiency. As illustrated in this work, our proposal achieves a great reduction in problem size and running time. In addition, we employ Lagrangian relaxation to design a fast heuristic algorithm for the OWA TSP, as done in [2]. The numerical results demonstrate that our method rapidly provides high-quality lower bounds, which may take commercial solvers (e.g. CPLEX, Gurobi) a long time to obtain. Furthermore, the quasi-optimal solutions obtained have optimality gaps of less than 1% compared to the known exact solutions.
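The fairness mechanism of the GGI is easy to see on a small example: it is an ordered weighted average in which the largest costs receive the largest weights, so spreading a fixed total cost evenly lowers the objective. The weight vector below is an arbitrary decreasing choice for illustration, not necessarily the paper's.

```python
# Sketch of the Generalized Gini Index: an OWA with decreasing weights
# applied to the costs sorted in decreasing order, penalizing imbalance.

def ggi(costs, weights):
    assert list(weights) == sorted(weights, reverse=True)
    return sum(w * c for w, c in zip(weights,
                                     sorted(costs, reverse=True)))

w = [4, 3, 2, 1]                  # decreasing weights
balanced = [5, 5, 5, 5]           # total cost 20, evenly spread
unbalanced = [8, 6, 4, 2]         # same total cost, unevenly spread
```

Minimizing this objective over tours therefore favours balanced edge costs among tours of equal total cost, which is the balance property the sum objective lacks.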

Nash Fairness solution for balanced TSP

ABSTRACT. In this paper, we consider a variant of the Traveling Salesman Problem (TSP), called the Balanced Traveling Salesman Problem (BTSP). The BTSP seeks a tour with the smallest max-min distance: the difference between the maximum edge cost and the minimum one. We present a Mixed Integer Program (MIP) to find optimal solutions minimizing the max-min distance for the BTSP. However, minimizing only the max-min distance may lead to a tour with an inefficient total cost in many situations. Hence, we propose a fair way, based on Nash equilibrium, to inject the total cost into the objective function of the BTSP. We consider a Nash equilibrium as defined in a context of fair competition based on proportional-fair scheduling. In the context of the TSP, we are interested in solutions achieving a Nash equilibrium between two players: the first aims at minimizing the total cost and the second at minimizing the max-min distance. We call such solutions Nash Fairness (NF) solutions. We first show that NF solutions for the TSP exist and may not be unique. We then focus on the unique NF solution, called the Efficient Nash Fairness (ENF) solution, having the smallest total cost. We show that the ENF solution is Pareto-optimal and can be found by optimizing a suitable linear combination of the two players' objectives based on the weighted sum method. Finally, we propose an iterative algorithm which converges to the ENF solution in a finite number of iterations. Computational results will be presented and commented on.

16:15-17:45 Session 14H: Evolutionary and black-box algorithms, II
Location: A1.139
Parameter-Dependent Performance Bounds for Evolutionary Algorithms

ABSTRACT. Evolutionary algorithms (EAs) are optimization heuristics that generally yield good solutions while requiring comparably few resources, such as time, computing power, or knowledge about the optimization problem. Thanks to their great importance in practice, EAs are the focus of an extensive line of research of empirical and theoretical nature.

Unfortunately, to date, the discrepancy between empirical and theoretical results is rather large. A major underlying reason is that theory mostly considers the performance of EAs with respect to a single parameter, whereas modern applied EAs typically have various parameters impacting their performance at the same time and in different ways. However, the focus of the theory community is about to shift. A few theoretical results that consider multiple parameters mark the beginning of a phase transition from single- to multiple-parameter analysis.

In this talk, we look at this challenging and exciting phase transition. Our journey starts with the introduction of the EA framework and typical algorithms that are considered in theoretical analyses. We then look at some results for the analysis of a single parameter, and we learn about the methods used in order to derive such results. At the end, we see how these tools can be leveraged to get results that show how the performance of an EA is affected by multiple parameters at once.

Towards Trajectory-Based Algorithm Selection

ABSTRACT. Black-box optimization problems require evaluations to assess and compare the quality of different alternatives. Classical examples of such problems are settings in which the evaluation of a suggested solution requires a computer simulation or a physical experiment, such as a crash test in the automotive industry or a clinical study in medicine. Black-box optimization problems can only be solved by sampling-based heuristics, i.e., algorithms which (deterministically or randomly) generate solution candidates, evaluate them, and use the so-obtained knowledge to recommend one or more alternatives. Even though the basic underlying principles of these algorithms can be considered similar in nature, their performances on different problem instances can vary greatly in practice. This poses a meta-problem: selecting the most appropriate algorithm when presented with a new, unknown problem instance. In recent years, significant research effort has been put into automating the algorithm selection process by building and training machine learning models to perform this task. To this end, knowledge about the problem instance has to be extracted beforehand, independently of the optimization process, in order to then be mapped to the performance of different algorithms on the very same instance. This talk will introduce a novel, trajectory-based approach, in which the cost of extracting the problem knowledge is balanced out by using samples that some default solver sees anyway during its run (or part of its run) to identify the problem instance and recommend a more appropriate algorithm to switch to. We show that this approach allows for results as reliable as any of the state-of-the-art algorithm selection techniques.
This brings the community one step closer to an online algorithm selection model that would be able to track and adapt which algorithm it is using during the optimization process itself, which may result in significant gains in performance, while keeping the computation costs low.

Multistage Optimization Algorithm using Chaotic Search

ABSTRACT. The chaotic optimization method has been receiving increasing attention as an engineering application of chaotic dynamical systems. In this poster, we undertake a performance analysis of a new class of evolutionary algorithms called chaos optimization algorithms (COA), recently proposed by Caponetto et al. [1], [2], [3]. They were originally proposed to solve nonlinear optimization problems with bounded variables. Different chaotic mappings have been considered, combined with several working strategies. In this work, a chaotic strategy based on a two-dimensional discrete chaotic attractor is proposed. Experimental results show that the proposed algorithm can achieve good performance.
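A minimal two-stage version of the chaotic search idea (with the one-dimensional logistic map rather than the poster's two-dimensional attractor; objective, budgets, and refinement radius are illustrative): a chaotic orbit first scans the whole interval, then a second orbit refines around the best point found.

```python
# Two-stage chaotic search: a logistic-map orbit samples the search
# interval (global stage), then a second orbit scans a small box around
# the incumbent (local stage). Illustrative objective and budgets.

def chaotic_search(f, lo, hi, iters=2000, z0=0.2):
    z, best = z0, None
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)          # logistic map, chaotic on (0, 1)
        x = lo + (hi - lo) * z           # map the orbit into [lo, hi]
        if best is None or f(x) < f(best):
            best = x
    return best

f = lambda x: (x - 0.7) ** 2             # toy objective, minimum at 0.7
x1 = chaotic_search(f, 0.0, 1.0)         # global stage
x2 = chaotic_search(f, x1 - 0.05, x1 + 0.05)   # local refinement stage
```

The chaotic orbit plays the role of a deterministic pseudo-random sampler; shrinking the search box around the incumbent is what makes the scheme multistage.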