SMAI-MODE 2020: JOURNÉES SMAI-MODE 2020
PROGRAM FOR WEDNESDAY, SEPTEMBER 9TH


09:00-10:00 Session WP1: Exposé plénier/Plenary talk
Chairs:
Clémence Alasseur (EDF R&D, France)
Vincent Leclere (ENPC, France)
Location: Amphitheater
09:00
Antonin Chambolle (CMAP, CNRS, École polytechnique, IPP, France)
How to discretize the total variation?

ABSTRACT. This talk will address standard and less standard approaches for discretizing the total variation, widely used as a regularizer for solving inverse problems in imaging. It is used for recovering functions with discontinuities (edges), and therefore standard numerical analysis, usually based on the smoothness of the solutions, yields sub-optimal error bounds for such problems. The talk will address some workarounds, discussing primal and dual discretizations, isotropy issues, and error bounds. (Based on joint works with C. Caillaud and T. Pock.)
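
As a concrete reference point, here is a minimal sketch (an illustration, not the speaker's code) of the standard forward-difference discretization of the total variation, in both the isotropic and anisotropic variants; the zero-padding convention at the boundary is one common choice among several:

```python
import numpy as np

def total_variation(u, isotropic=True):
    """Discrete total variation of a 2D image via forward differences."""
    dx = np.diff(u, axis=1)  # horizontal forward differences
    dy = np.diff(u, axis=0)  # vertical forward differences
    # zero-pad so both difference fields have the image's shape
    dx = np.pad(dx, ((0, 0), (0, 1)))
    dy = np.pad(dy, ((0, 1), (0, 0)))
    if isotropic:  # sum of Euclidean norms of the discrete gradient
        return float(np.sqrt(dx ** 2 + dy ** 2).sum())
    return float(np.abs(dx).sum() + np.abs(dy).sum())

# a sharp vertical edge of height 1 across 4 rows has TV = 4
u = np.zeros((4, 4))
u[:, 2:] = 1.0
print(total_variation(u))  # 4.0
```

On axis-aligned edges both variants agree, but on rotated edges the anisotropic sum over-counts the jump, which is one face of the isotropy issue mentioned in the abstract.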

10:00-10:30 Pause/Break
10:30-12:30 Session WM1: Optimal control
Chairs:
Terence Bayen (University of Avignon, France)
Hasnaa Zidani (ENSTA Paris, France)
Location: Room 1
10:30
Jean-Baptiste Caillau (Université Côte d'Azur, CNRS, Inria, LJAD, France)
Maximization of Sturm-Liouville determinants

ABSTRACT. We seek potentials that maximize the functional determinant of a Sturm-Liouville operator of Schrödinger type on a bounded interval, with Dirichlet boundary conditions. The optimization is performed under an $L^q$ constraint. To this end, we extend the definition of the functional determinant to the case of $L^1$ potentials, and we show that the maximization problem is equivalent to an optimal control problem. We prove existence and uniqueness of the solution for every $q \geq 1$, and we give a characterization of the maximizing potentials in the cases $q=1,2$.

11:00
Gaurav Dhar (Université de Limoges, France)
Pontryagin maximum principle for optimal sampled-data control problems

ABSTRACT. Established in the 1950s by Pontryagin et al., the Pontryagin maximum principle (in short, PMP) is a fundamental result in optimal control theory. It gives first-order necessary conditions for the optimal control among all permanent controls, that is to say when the value of the control can be changed at any moment in time. However, in practice the optimal permanent control of a given system is in general unrealizable. For this reason sampled-data controls, for which the value of the control can only be modified a finite number of times, are often implemented in Engineering and Automation. Recently Bourdin and Trélat have obtained in [1] a version of the PMP for optimal sampled-data control problems with fixed sampling times.

The work [1] does not take into account the possibility of free sampling times, where one is allowed to optimize the times at which the control can be modified, in addition to the control values. We present a new version of the PMP for optimal sampled-data control problems with free sampling times, where the additional necessary condition for the optimal sampling times coincides with the continuity of the Hamiltonian function (see [2]). Recall that this property is a well-known fact for optimal permanent controls. Therefore, our result asserts that the continuity property is recovered in the case of optimal sampled-data controls with optimal sampling times, and it can be used, for example in shooting methods, to determine the optimal sampling times.

Finally, we give a new version of the PMP for optimal sampled-data control problems with running state constraints (see [3]). The statement of the PMP is more involved since the adjoint vector is now described as a function of bounded variation. However, in the case of sampled-data controls, we find that, under certain general hypotheses, the optimal trajectory touches the running inequality state constraints at most at the sampling times, a behavior known as the bouncing trajectory phenomenon. Thus the adjoint vector experiences jumps at most at the sampling times (hence finitely many, at precise instants) and its singular part vanishes.

11:30
Boris Wembe (IRIT-UPS Toulouse, France)
Abnormal Trajectory in Optimal Control with Application in Hydrodynamics and Geometry

ABSTRACT. In this work we discuss, among other things, the role of minimizing abnormal trajectories in the time-minimal synthesis of the 2-vortex problem in the plane, which can be viewed as a Zermelo navigation problem with the particularity of a strong current in the neighborhood of the vortex and a singularity at the vortex itself. We prove the discontinuity of the value function along minimizing abnormals (which exist when the initial point lies in a region where the current is strong) and, starting from the Finslerian setting, we establish a characterization of the cut locus needed to construct the synthesis. This study is the starting point for analyzing the case of several vortices which, in the Hamiltonian setting, has applications in hydrodynamics and in the N-body problem.

10:30-12:30 Session WM2: Multistage Stochastic Optimization
Chairs:
Clémence Alasseur (EDF R&D, France)
Michel De Lara (Ecole des Ponts ParisTech, CERMICS, France)
Location: Room 2
10:30
Thomas Bittar (EDF / Ecole des Ponts ParisTech, France)
Pierre Carpentier (ENSTA Paris, France)
Jean-Philippe Chancelier (CERMICS, France)
Jerome Lonchampt (EDF, France)
A Decomposition Method by Component for the Optimization of Maintenance Scheduling
PRESENTER: Thomas Bittar

ABSTRACT. In order to improve the reliability and performance of its hydroelectric fleet, EDF (Électricité de France) seeks to optimize the maintenance scheduling of the components of hydroelectric power plants. In this work, we address an idealized maintenance scheduling optimization problem. We consider components of hydroelectric power plants such as turbines, transformers or generators, and we study an idealized system of a given type of components that share a common stock of spare parts. Over time, components experience random failures that occur according to known failure distributions; the dynamics of the system is therefore stochastic. The goal is to find a deterministic preventive maintenance strategy that minimizes the expected cost, which depends on maintenance operations and on the occurrences of forced outages of the system. The numerical experiments involve systems consisting of up to 80 components, leading to high-dimensional optimization problems. To overcome the curse of dimensionality, we use the Interaction Prediction Principle to decompose the original optimization problem into a sequence of independent subproblems of smaller dimension. Each subproblem consists in optimizing the maintenance on a single component. The resulting algorithm iteratively solves the subproblems with the blackbox algorithm MADS and coordinates the components. The maintenance optimization problem is a mixed-integer problem. However, decomposition methods are based on variational techniques, so they are not suited to this kind of problem. It is therefore necessary to relax the dynamics of the system as well as the cost functions in the formulation of the maintenance optimization problem. The relaxation parameters have an important influence on the output of the optimization by decomposition and must be chosen appropriately. We apply the decomposition method on relaxed systems with up to 80 components. The most demanding case takes around 20 hours of computation time. We show that in high dimension the decomposition outperforms the blackbox algorithm applied directly to the original problem.

11:00
Marianne Akian (INRIA & CMAP, France)
Jean-Philippe Chancelier (Ecole des Ponts ParisTech, France)
Benoît Tran (Ecole des Ponts ParisTech, France)
Tropical dynamic programming for stochastic optimal control
PRESENTER: Benoît Tran

ABSTRACT. We present an algorithm which builds upper or lower approximations of the (Bellman) value functions of a risk-neutral stochastic optimal control problem in discrete time, with white noises of finite support.

Upper (resp. lower) approximations of a given value function are built as min-plus linear (resp. max-plus linear) combinations of "basic functions". At each iteration, we add a new, randomly selected basic function to the current combination. We give sufficient conditions to ensure asymptotic almost sure convergence of the generated approximating functions to the value functions on sets of interest.

We illustrate this algorithm numerically on optimal control problems with linear dynamics and polyhedral costs.
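
To fix ideas, here is a toy version of the max-plus (lower) side of the scheme, under simplifying assumptions not in the talk: the state is one-dimensional, the value function is convex and smooth, and the randomly selected basic functions are tangent affine cuts:

```python
import random

def maxplus_lower_approx(V, sample_state, n_iter=50, seed=0):
    """Lower-approximate a convex function V by a max-plus linear
    combination (pointwise maximum) of affine basic functions.
    At each iteration a state is sampled at random and a tangent
    cut of V at that state is added to the combination."""
    rng = random.Random(seed)
    cuts = []  # (slope, intercept) pairs: x -> slope * x + intercept

    def approx(x):
        return max((a * x + b for a, b in cuts), default=float("-inf"))

    h = 1e-6  # finite-difference step for the tangent slope
    for _ in range(n_iter):
        x = sample_state(rng)
        slope = (V(x + h) - V(x - h)) / (2 * h)
        cuts.append((slope, V(x) - slope * x))
    return approx

# toy check on V(x) = x^2 over [-1, 1]
V = lambda x: x * x
approx = maxplus_lower_approx(V, lambda rng: rng.uniform(-1, 1))
print(V(0.3) - approx(0.3))  # small nonnegative gap
```

By convexity each tangent cut stays below V, so the maximum of the cuts is a valid lower approximation that tightens as more basic functions are drawn; the min-plus upper side is symmetric, with concave basic functions.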

11:30
Adrien Seguret (EDF R&D, France)
Nadia Oudjane (EDF R&D, France)
Frédéric Bonnans (CMAP Inria, France)
Clémence Alasseur (EDF R&D, France)
Antonio De Paola (Bath University, UK)
Vincenzo Trovato (Imperial College London, UK)
Decomposition of a high dimensional stochastic optimization problem and application to flexible devices in the context of Smart Grids
PRESENTER: Adrien Seguret

ABSTRACT. In the power sector, many smart and flexible devices are currently appearing on the customer side. The question of how to manage all these devices efficiently is important and difficult, especially because of the high number of devices and the stochastic nature of the problem. Our work aims at providing a method and an algorithm to solve this complex problem as a distributed control problem. We also apply the proposed algorithm to a Unit Commitment problem.

12:00
Maël Forcier (Ecole des Ponts, France)
Stéphane Gaubert (Ecole polytechnique, France)
Vincent Leclère (Ecole des Ponts, France)
The polyhedral structure and complexity of multistage stochastic linear programming with general cost distribution
PRESENTER: Maël Forcier

ABSTRACT. By studying the intrinsic polyhedral structure of multistage stochastic linear problems (MSLP), we show that an MSLP with an arbitrary cost distribution is equivalent to an MSLP on a finite scenario tree. More precisely, we show that the expected cost-to-go function, at a given stage, is affine on each cell of a chamber complex, i.e., on the common refinement of the complexes obtained by projecting the faces of a polyhedron. This chamber complex is independent of the cost distribution. Furthermore, we examine several important special cases of random cost distributions, exponential on a polyhedral cone or uniform on a polytope, and obtain an explicit description of the supporting hyperplanes of the cost-to-go function, in terms of certain valuations attached to the cones of a normal fan. This leads to fixed-parameter tractability results, showing that MSLP can be solved in polynomial time when the number of stages together with certain characteristic dimensions are fixed.

10:30-12:30 Session WM3: Thematic session: Games
Chairs:
Marianne Akian (INRIA and CMAP, CNRS, Ecole polytechnique, IPP, France)
Guillaume Vigeral (Université Paris-Dauphine, France)
Location: Room 3
10:30
Guillaume Vigeral (Université Paris-Dauphine, France)
Algebraic equilibrium payoffs of finite games with integer payoffs

ABSTRACT. By the Tarski-Seidenberg theorem, every finite (static) game with integer payoffs has at least one equilibrium payoff whose components are all algebraic numbers. We prove the following converse: if N is at least 3 and e is an N-tuple of algebraic numbers, there exists a finite N-player game with integer payoffs having a unique Nash equilibrium, with payoff e.

A consequence is that, for N at least 3, every nonempty compact subset of $R^N$ defined by a finite number of unions and intersections of polynomial inequalities with integer coefficients is the set of equilibrium payoffs of an N-player game with integer payoffs.

We also prove similar results concerning the equilibria themselves rather than the equilibrium payoffs.

11:00
Rida Laraki (CNRS (Dauphine-PSL) and University of Liverpool, France)
Srihari Govindan (University of Rochester, United States)
Lucas Pahl (University of New South Wales, Australia)
On Sustainable Equilibria
PRESENTER: Rida Laraki

ABSTRACT. Following the ideas laid out in Myerson (1996), Hofbauer (2000) defined an equilibrium of a game as sustainable if it can be made the unique equilibrium of a game obtained by deleting some (or all) of the strategies that are inferior replies to it, and then adding others. Hofbauer also formalized Myerson's conjecture about the relationship between the sustainability of an equilibrium and its index: for generic games, an equilibrium is sustainable iff its index is +1. von Schemde and von Stengel proved this conjecture for bimatrix games. This paper shows that the conjecture is true for all finite games.

11:30
Frédéric Koessler (CNRS, PSE, France)
Marie Laclau (CNRS, HEC Paris, France)
Jérôme Renault (Université Toulouse 1, TSE, France)
Tristan Tomala (HEC Paris, France)
Long Information Design
PRESENTER: Tristan Tomala

ABSTRACT. We analyze zero-sum games between two designers who provide information to a decision maker (DM). Before the DM takes its decision, designers repeatedly disclose information about fixed state parameters. Designers face admissibility constraints on distributions of posterior beliefs they can induce. Our main results characterize equilibrium payoffs and strategies for various timings of the game: simultaneous or alternating disclosures, with or without deadline.

We define the concave and convex closures of functions of beliefs along correspondences of admissible distributions of posteriors. We show that the value of the game without deadline is the unique solution v of a Mertens-Zamir system of equations

$v(p,q) = \operatorname{cav}\, \min(u,v)(p,q) = \operatorname{vex}\, \max(u,v)(p,q)$

where u(p,q) is the realized payoff as a function of the beliefs (p,q) of the DM about the state parameters, cav is the concave closure operator with respect to p along the admissibility correspondence, and vex is the convex closure operator with respect to q along the admissibility correspondence.

We also show that in equilibrium, information is disclosed in a single stage when all distributions of posteriors are admissible. With constraints, there may be no bound on the number of stages used to disclose information in equilibrium.

12:30-14:00 Repas/Lunch Break
14:00-15:30 Session WA1: Thematic session: Mean Field Games
Chairs:
Clémence Alasseur (EDF R&D, France)
Hasnaa Zidani (ENSTA Paris, France)
Location: Room 1
14:00
Alekos Cecchin (Université Côte d'Azur, France)
François Delarue (Université Côte d'Azur, France)
Finite State Mean Field Games: Common Noise and Selection of Equilibria

ABSTRACT. We address restoration of uniqueness for finite state MFGs by using a relevant form of common noise and then show, in the potential case, how this may help for selecting solutions to the corresponding MFG without common noise.

14:30
Julien Claisse (Université Paris-Dauphine, France)
Mean Field Games with Branching

ABSTRACT. Mean field games are concerned with the limit of large-population stochastic differential games where the agents interact through their empirical distribution. In the classical setting, the number of players is large but fixed throughout the game. However, in various applications, such as population dynamics or economic growth, the number of players can vary across time which may lead to different Nash equilibria. For this reason, we introduce a branching mechanism in the population of agents and obtain a variation on the mean field game problem. As a first step, we study a simple model using a PDE approach to illustrate the main differences with the classical setting. Then we study the problem in a general setting by a probabilistic approach, based upon the relaxed formulation of stochastic control problems. This is a joint work with Zhenjie Ren and Xiaolu Tan.

15:00
Clément Sarrazin (Institute of Mathematics of Orsay, France)
A Lagrangian discretization for mean field games with congestion

ABSTRACT. Variational mean field games are differential games where an infinite number of players minimize a global energy accumulated during their movement inside a domain. Although these games usually arise from finite differential games as the number of players goes to infinity, one can also consider terms in the energy depending on the density of the population (which therefore have no meaning for a finite population). These terms usually force players to move away from "congested zones", where the density is too high. Optimal strategies for these mean field games are often found numerically in an "Eulerian" way, by approximating the optimal densities at a finite number of time steps and propagating from time step to time step, which requires an ad hoc time discretization of the transport equation satisfied by the mean field game and can be challenging. The approach I would like to talk about is a more "Lagrangian" one, where one approximates the behaviour of the population with the strategies of a finite number of players. The main hindrance here is the term penalizing congestion, which is only finite when the population admits densities at all times. A way to circumvent this problem is to replace it with a term essentially penalizing the distance to the set of non-congested measures. Time discretization is simply done by considering only piecewise affine movements for the players. I will give a definition for this term in a general case, using a Moreau-Yosida approximation for the Wasserstein distance. It can be computed numerically by solving a smooth concave problem, with computations very reminiscent of those for semi-discrete optimal transport. Such a term defines an energy which is finite at discrete measures, and the optimal discrete strategies for this new game converge in some sense to optimal strategies for the initial, continuous mean field game. I will also show a more "observable" convergence for the Moreau-Yosida approximations, together with some numerical simulations.
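
A plausible reading of this construction, stated here as an assumption rather than the speaker's exact definition: the Moreau-Yosida approximation of a congestion functional $F$ in the Wasserstein metric is

$F_\varepsilon(\mu) = \inf_{\nu} \left( F(\nu) + \frac{1}{2\varepsilon} W_2^2(\mu,\nu) \right),$

which, when $F$ is the indicator of the set $K$ of non-congested measures, reduces to $\frac{1}{2\varepsilon} W_2^2(\mu,K)$: a term that is finite at discrete measures and recovers the hard constraint as $\varepsilon \to 0$.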

15:30
Frédéric Bonnans (CMAP, France)
Pierre Lavigne (CMAP, France)
Laurent Pfeiffer (CMAP, France)
Discrete-time MFGs with risk-averse agents
PRESENTER: Pierre Lavigne

ABSTRACT. Classical mean field game theory focuses on risk-neutral agents. However, in many economic applications it is very natural to consider risk aversion. The risk-sensitive and worst-case settings have been studied by T. Başar et al. in recent works. Our approach is different: we use the theory of risk measures developed by Artzner et al. to build an economic framework in which agents interact through prices under risk aversion. We study a discrete-time, continuous-space system of dynamic programming and Kolmogorov equations. One main feature of the dynamic programming equation is the presence of a risk measure. This equation can be derived from a general cost designed with a composite risk measure. The study of the dynamic programming equation is performed via the theory of proximal operators. We show the existence of an equilibrium of the game via a classical fixed-point technique, under suitable assumptions.

14:00-16:00 Session WA2: Thematic session: Geometry and Optimization
Chairs:
Xavier Allamigeon (Inria and CMAP, France)
Sourour Elloumi (ENSTA, France)
Location: Room 2
14:00
Nikolas Stott (LocalSolver, France)
Bound computation in LocalSolver for mixed-integer nonlinear and multi-objective problems

ABSTRACT. Context

LocalSolver is a general-purpose mathematical optimization solver for multi-objective problems modeled with the classical variables of mixed-integer linear programming (booleans, integers and reals), as well as collection-type variables (lists and sets) to model, for instance, vehicle routing and packing problems.

The search for solutions relies on the historical core of heuristic methods, using local search techniques, and on a global optimization module to handle MINLP-type problems.

LocalSolver's objective is to produce quality solutions quickly across the whole range of problems that can be modeled. These solutions are then refined. To attest to the quality of the solutions obtained, LocalSolver computes lower (resp. upper) bounds for each minimized (resp. maximized) objective, and the same strategy is adopted: obtain good first bounds quickly, then improve them iteratively.

Bound computation techniques

LocalSolver represents an optimization problem as a directed acyclic graph. The bounds associated with each node of the graph can be deduced from the bounds of its children, or inferred from the bounds of its parents. This interval arithmetic yields first bounds on all objectives as early as the preprocessing of the model.

In the single-objective case, the global optimization module reformulates the problem and produces valid bounds during the solve, using linear/convex relaxations coupled with a branch-and-reduce mechanism. If the problem is purely linear, it is solved directly by the underlying linear solver.

In the multi-objective case, bound computation handles the objectives one by one. Each objective must be solved to global optimality, with each of the preceding objectives constrained to its optimal value, before moving to the next.

The MINLP subsolver also yields solutions and bounds when the decision variables are sets, through an exact boolean linearization of the set structure and the associated constraints.

Routing problems are modeled with lists in LocalSolver. Bound computation using only propagation arguments yields bounds that are too weak, so these problems are handled with a dedicated technique: solving the Held-Karp Lagrangian relaxation for symmetric and asymmetric TSPs.

Results

LocalSolver finds an optimal solution to 40% of the problems in the MINLPLib and feasible solutions to 70% of them. We also illustrate an industrial problem on which solving the first objectives to optimality leads to a significant improvement in solving the subsequent objectives.
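
A toy version of the child-to-parent half of this propagation, assuming an expression DAG of arithmetic nodes; the node encoding and the operator set are illustrative choices, not LocalSolver's internal representation:

```python
def propagate_bounds(dag, leaf_bounds):
    """Forward interval propagation on an expression DAG.
    `dag` maps node -> (op, children); `leaf_bounds` maps each leaf
    to an interval (lo, hi). The bounds of every inner node are
    deduced from the bounds of its children."""
    bounds = dict(leaf_bounds)

    def interval(node):
        if node not in bounds:
            op, children = dag[node]
            cb = [interval(c) for c in children]
            if op == "+":
                bounds[node] = (sum(b[0] for b in cb),
                                sum(b[1] for b in cb))
            elif op == "*":
                (a, b), (c, d) = cb
                prods = (a * c, a * d, b * c, b * d)
                bounds[node] = (min(prods), max(prods))
            elif op == "sq":  # x^2, tightened when the interval spans 0
                (a, b), = cb
                lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
                bounds[node] = (lo, max(a * a, b * b))
            else:
                raise ValueError("unsupported operator: " + op)
        return bounds[node]

    for node in dag:
        interval(node)
    return bounds

# objective = x*y + x^2 with x in [-1, 2] and y in [0, 3]
dag = {"xy": ("*", ["x", "y"]),
       "x2": ("sq", ["x"]),
       "obj": ("+", ["xy", "x2"])}
b = propagate_bounds(dag, {"x": (-1.0, 2.0), "y": (0.0, 3.0)})
print(b["obj"])  # (-3.0, 10.0)
```

The other direction mentioned in the abstract, inferring bounds on children from bounds on parents, would run the same rules backwards; together the two passes give the first bounds available at preprocessing time.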
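
For the symmetric case, the Held-Karp bound can be sketched as a Lagrangian relaxation of the tour's degree constraints solved by subgradient ascent over 1-trees; the step-size schedule and iteration count below are arbitrary illustrative choices, not LocalSolver's tuning:

```python
import math

def held_karp_bound(dist, iters=50, step=1.0):
    """Held-Karp lower bound for the symmetric TSP on a full
    distance matrix: relax the degree-2 constraints with
    multipliers pi, compute a minimum 1-tree for the modified
    weights, and improve pi by a subgradient step."""
    n = len(dist)
    pi = [0.0] * n
    best = float("-inf")
    for k in range(iters):
        w = [[dist[i][j] + pi[i] + pi[j] for j in range(n)]
             for i in range(n)]
        # minimum spanning tree on nodes 1..n-1 (naive Prim) ...
        in_tree, deg, cost = {1}, [0] * n, 0.0
        while len(in_tree) < n - 1:
            i, j = min(((i, j) for i in in_tree
                        for j in range(1, n) if j not in in_tree),
                       key=lambda e: w[e[0]][e[1]])
            in_tree.add(j)
            cost += w[i][j]
            deg[i] += 1
            deg[j] += 1
        # ... plus the two cheapest edges incident to node 0
        j1, j2 = sorted(range(1, n), key=lambda j: w[0][j])[:2]
        cost += w[0][j1] + w[0][j2]
        deg[0], deg[j1], deg[j2] = 2, deg[j1] + 1, deg[j2] + 1
        best = max(best, cost - 2.0 * sum(pi))
        # subgradient step on the violated degree constraints
        pi = [pi[i] + step / (k + 1) * (deg[i] - 2) for i in range(n)]
    return best

# unit square: the optimal tour has length 4 and the bound attains it
s2 = math.sqrt(2.0)
square = [[0.0, 1.0, s2, 1.0], [1.0, 0.0, 1.0, s2],
          [s2, 1.0, 0.0, 1.0], [1.0, s2, 1.0, 0.0]]
print(held_karp_bound(square))  # 4.0
```

Every tour is a 1-tree satisfying the degree constraints, so each 1-tree value (corrected by the multipliers) is a valid lower bound on the optimal tour length.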

14:30
Simon Boulmier (LocalSolver, France)
Unconstrained relaxations and duality in global optimization

ABSTRACT. The global optimization of mixed-integer nonlinear programs is an ambitious goal, with many practical applications. Here, we consider the broadest class of such problems, where no convexity or regularity assumptions are made. We only require that an expression graph representing the objective and constraints be given.

The most common approach to solve these generic problems is the reformulation convexification technique. It uses a symbolic reformulation to express the problem in a standard form, such as the one introduced in [1], that facilitates the implementation of various tools like convex relaxations and bound tightening techniques. Some of the most successful global solvers [2,3] follow this approach and make use of decades of engineering in mixed-integer linear programming for robustness and efficiency.

Since the work of [4], we know that hybridization with nonlinear programming can lead to important improvements. However, linear relaxations still rule in most global optimization solvers and remain the default setting. To show that nonlinear relaxations can be competitive, we implemented a global optimization solver that can use both linear and nonlinear relaxations. This presentation will focus on the use of unconstrained optimization in this context, a key idea that substantially improved both the performance and robustness of the solver.

15:00
Daniel Dadush (CWI Amsterdam, Netherlands)
Sophie Huiberts (CWI Amsterdam, Netherlands)
Bento Natura (London School of Economics, UK)
László A. Végh (London School of Economics, UK)
A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix
PRESENTER: Bento Natura

ABSTRACT. Following the breakthrough work of Tardos (Oper. Res. '86) in the bit-complexity model, Vavasis and Ye (Math. Prog. '96) gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) $\max\, c^\top x,\: Ax = b,\: x \geq 0,\: A \in \mathbb{R}^{m \times n}$, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that $O(n^{3.5} \log (\bar{\chi}_A+n))$ iterations suffice to solve (LP) exactly, where $\bar{\chi}_A$ is a condition measure controlling the size of solutions to linear systems related to $A$.

Monteiro and Tsuchiya (SIAM J. Optim. '03), noting that the central path is invariant under rescalings of the columns of $A$ and $c$, asked whether there exists an LP algorithm depending instead on the measure $\bar{\chi}^*_A$, defined as the minimum $\bar{\chi}_{AD}$ value achievable by a column rescaling $AD$ of $A$, and gave strong evidence that this should be the case. We resolve this open question affirmatively.

Our first main contribution is an $O(m^2 n^2 + n^3)$ time algorithm which works on the linear matroid of $A$ to compute a nearly optimal diagonal rescaling $D$ satisfying $\bar{\chi}_{AD} \leq n(\bar{\chi}^*)^3$. This algorithm also allows us to approximate the value of $\bar{\chi}_A$ up to a factor $n (\bar{\chi}^*)^2$. This result is in (surprising) contrast to that of Tunçel (Math. Prog. '99), who showed NP-hardness for approximating $\bar{\chi}_A$ to within $2^{{\rm poly}({\rm rank}(A))}$. The key insight for our algorithm is to work with ratios $g_i/g_j$ of circuits of $A$, i.e., minimal linear dependencies $Ag=0$, which allow us to approximate the value of $\bar{\chi}_A^*$ by a maximum geometric mean cycle computation in what we call the 'circuit ratio digraph' of $A$.

In our second main contribution we develop a scaling-invariant LLS algorithm, which uses and dynamically maintains improving estimates of the circuit ratio digraph, together with a refined potential-function-based analysis for LLS algorithms in general. With this analysis, we derive an improved $O(n^{2.5} \log n\log (\bar{\chi}^*_A+n))$ iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor $n/\log n$ improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
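
The maximum geometric mean cycle computation mentioned above reduces, after taking logarithms of the edge ratios, to a maximum mean-weight cycle problem, which Karp's dynamic program answers exactly in $O(nm)$ time; the graph encoding below is an illustrative sketch, not the paper's circuit ratio digraph construction:

```python
import math

def max_mean_cycle(n, edges):
    """Maximum mean-weight cycle of a digraph on nodes 0..n-1,
    by Karp's dynamic program. edges is a list of (u, v, weight)."""
    NEG = float("-inf")
    # D[k][v] = maximum weight of a walk with exactly k edges ending
    # at v, starting anywhere (equivalent to a zero-weight super-source)
    D = [[NEG] * n for _ in range(n + 1)]
    D[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] > NEG and D[k - 1][u] + w > D[k][v]:
                D[k][v] = D[k - 1][u] + w
    best = NEG
    for v in range(n):
        if D[n][v] > NEG:
            best = max(best, min((D[n][v] - D[k][v]) / (n - k)
                                 for k in range(n) if D[k][v] > NEG))
    return best

def max_geometric_mean_cycle(n, ratio_edges):
    """Maximum geometric-mean cycle for positive edge ratios, via logs."""
    return math.exp(max_mean_cycle(
        n, [(u, v, math.log(r)) for u, v, r in ratio_edges]))

# two-node cycle with ratios 4 and 1: geometric mean sqrt(4 * 1) = 2
print(max_geometric_mean_cycle(2, [(0, 1, 4.0), (1, 0, 1.0)]))
```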

15:30
Antoine Deza (McMaster University, Canada)
Algorithmic, Combinatorial, and Geometric Aspects of Linear Optimization

ABSTRACT. The simplex and interior point methods are currently the most computationally successful algorithms for linear optimization. While the simplex methods follow an edge path, the interior point methods follow the central path. The algorithmic issues are closely related to the combinatorial and geometric structure of the feasible region. Focusing on the analysis of worst-case constructions leading to computationally challenging instances, we discuss connections to the largest diameter of lattice polytopes, to the complexity of convex matroid optimization, and to the number of generalized retarded functions in quantum field theory. Complexity results and open questions are also presented. In particular, we answer a question raised in 1986 by Colbourn, Kocay, and Stinson by showing that deciding whether a given sequence is the degree sequence of a 3-hypergraph is computationally prohibitive.

14:00-15:30 Session WA3: Optimization and Modeling
Chairs:
Pierre Carpentier (ENSTA, France)
Panayotis Mertikopoulos (IMAG CNRS, France)
Location: Room 3
14:00
Thomas Martin (CERMICS, Ecole des Ponts ParisTech, France)
A dynamic formulation for the crude oil procurement problem

ABSTRACT. Crude oil procurement is subject to variations in shipping delays and random costs. The optimal procurement problem aims at buying oil for refineries so as to maximize net revenue. The plants owned by a company have different specifications, and so do the crude oils available on the market. In our setting of the problem, we focus on the stochasticity of the costs of oil and shipping and of the prices of the products. The dynamics of the system over time, the shipping delays, and the running of a refinery are assumed to be deterministic.

In this presentation, we will first focus on identifying control and state variables to build a suitable model for the crude oil procurement problem. The resulting problem is a multistage stochastic optimization problem which, due to its huge size, is impossible to solve as such. We then discuss the model and explain why the problem can be considered as a multiple-stopping-time problem. Finally, we take advantage of the peculiar structure of the problem to reduce its size and discuss possible resolution techniques.

14:30
Adrien Le Franc (Cermics ENPC, France)
Pierre Carpentier (ENSTA Paris, France)
Jean-Philippe Chancelier (CERMICS, France)
Michel De Lara (Ecole des Ponts ParisTech, CERMICS, France)
EMSx: an Energy Management System numerical benchmark
PRESENTER: Adrien Le Franc

ABSTRACT. Inserting renewable energy into the electric grid in a decentralized manner is a key challenge of the energy transition. However, at the local level, both production and demand display erratic behavior, notably because of their dependence on local weather conditions. The match between supply and demand is therefore delicate to achieve in such a stochastic context. It is the goal of Energy Management Systems (EMS) to achieve this balance at least cost. We present EMSx, a numerical benchmark for testing control algorithms for the management of electric microgrids equipped with a photovoltaic unit and an energy storage system. Our benchmark is based on a rich collection of historical observations and forecasts collected by the company Schneider Electric. Besides the dataset, we also publicly release EMSx.jl, a package implemented in the Julia language which makes it easy to implement a microgrid controller and to test it on the EMSx benchmark. Finally, we showcase the results of standard microgrid control methods, including Model Predictive Control, Open Loop Feedback Control and Stochastic Dynamic Programming.

15:00
Quentin Bertrand (Université Paris-Saclay, Inria, Cea, France)
Quentin Klopfenstein (Université de Bourgogne, France)
Joseph Salmon (Université de Montpellier, France)
Mathieu Blondel (Google, France)
Samuel Vaiter (CNRS, France)
Alexandre Gramfort (Université Paris-Saclay, Inria, Cea, France)
Hyperparameter selection for the Lasso with gradient descent
PRESENTER: Quentin Bertrand

ABSTRACT. Setting regularization parameters for Lasso-type estimators is notoriously hard and crucial for model performance. Classical hyperparameter-setting techniques include grid search on cross-validation datasets. In this work we propose a gradient-based hyperparameter selection algorithm for the Lasso and the adaptive Lasso using implicit differentiation. Contrary to classical grid-search techniques, the proposed gradient-based method does not require choosing a predefined grid and can scale to the selection of multiple hyperparameters.
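
The implicit-differentiation idea can be sketched as follows, with a deliberately minimal coordinate-descent Lasso solver standing in for the authors' optimization machinery; the data layout, tolerances, and helper names are illustrative assumptions:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_sweeps=500):
    """Coordinate descent for the Lasso objective
    ||y - X b||^2 / (2 n) + alpha * ||b||_1 (illustration only)."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]  # partial residual
            z = X[:, j] @ r_j / n
            b[j] = np.sign(z) * max(abs(z) - alpha, 0.0) / col_sq[j]
    return b

def lasso_hypergradient(X_tr, y_tr, X_val, y_val, alpha):
    """Gradient of the validation loss with respect to alpha, by
    implicit differentiation of the Lasso optimality conditions:
    on the active set A, X_A^T (X_A b_A - y)/n + alpha * s = 0,
    hence d b_A / d alpha = -n (X_A^T X_A)^{-1} s."""
    n = X_tr.shape[0]
    b = lasso_cd(X_tr, y_tr, alpha)
    A = np.flatnonzero(np.abs(b) > 1e-10)
    if A.size == 0:  # fully shrunk: the solution is locally constant
        return 0.0
    s = np.sign(b[A])
    db = -n * np.linalg.solve(X_tr[:, A].T @ X_tr[:, A], s)
    resid = X_val[:, A] @ b[A] - y_val
    grad_b = X_val[:, A].T @ resid / len(y_val)  # grad of val MSE/2
    return float(grad_b @ db)

# toy data: two informative features out of five
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))
y = X @ np.array([1.0, -1.0, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(60)
g = lasso_hypergradient(X[:40], y[:40], X[40:], y[40:], alpha=0.1)
```

Such a hypergradient can then drive a descent on alpha directly instead of scanning a predefined grid, which is what lets the approach scale to several hyperparameters.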

16:00-17:00 Pause/Break
17:00-17:30 Prix Dodu/Dodu Prize
Chairs:
Jerome Bolte (Universite Toulouse 1 Capitole, France)
Stephane Gaubert (Inria & CMAP, France)
Filippo Santambrogio (Université Claude Bernard - Lyon 1, France)
Location: Amphitheater