
09:00 | Optimal Distributed Energy Resource Coordination: A Hierarchical Decomposition Method Based on Distribution Locational Marginal Costs ABSTRACT. In this work, we consider the day-ahead operational planning problem of a radial distribution network hosting Distributed Energy Resources (DERs) including rooftop solar and storage-like loads, such as electric vehicles. We present a novel hierarchical decomposition method that is based on a centralized AC Optimal Power Flow (AC OPF) problem interacting iteratively with self-dispatching DER problems adapting to real and reactive power Distribution Locational Marginal Costs. We illustrate the applicability and tractability of the proposed method on an actual distribution feeder, while modeling the full complexity of spatiotemporal DER capabilities and preferences, and accounting for instances of non-exact AC OPF convex relaxations. We show that the proposed method achieves optimal Grid-DER coordination by successively improving feasible AC OPF solutions, and that it discovers spatiotemporally varying marginal costs in distribution networks that are key to optimal DER scheduling, by modeling losses, ampacity and voltage congestion, and, most importantly, dynamic asset degradation. |

09:30 | Model Predictive Control Framework for Congestion Management with Large Batteries in Sub-Transmission Grid ABSTRACT. RTE is building 3 large battery storage systems (10MW/20MWh) to be put into operation in 2021/2022. These batteries, together with intermittent renewable generation curtailment and line switching, will be used to manage congestion in 3 small sub-transmission zones (63kV or 90kV). A local controller sends orders to the battery, to power plants and to switches every 15 seconds, using all the flexibility offered by permanent and emergency ratings, including Dynamic Line Rating when available. The controller's decision algorithm is Model Predictive Control: every 15 seconds, a DC approximation model of the grid is computed from the latest real-time measurements; then a Mixed Integer Programming model is built, taking into account the delays of actions. This local controller has no forecast and is not able to manage preventive actions, so a higher-level scheduler will be in charge of security analysis (N-1 analysis), battery preventive actions, and pre-discharging the battery ahead of forthcoming grid congestion. |
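The receding-horizon logic described in the abstract can be sketched in a toy form. This is not RTE's actual controller: the sensitivity, rating, and measurement values below are invented, and a simple proportional rule stands in for the Mixed Integer Programming model.

```python
# Toy sketch (not RTE's controller): one congested line, one battery.
# Each MPC step: estimate the line flow from a DC model (a PTDF sensitivity),
# then pick the battery power that brings the flow back under the rating.

PTDF_BATT = 0.4      # hypothetical sensitivity of line flow to battery injection (MW/MW)
RATING = 50.0        # hypothetical permanent line rating (MW)
P_BATT_MAX = 10.0    # battery power limit (MW), as in the abstract

def mpc_step(base_flow_mw):
    """One 15-second decision: return the battery discharge set-point (MW)."""
    overload = base_flow_mw - RATING
    if overload <= 0:
        return 0.0                    # no congestion: stay idle
    # each MW of battery discharge reduces the flow by PTDF_BATT MW
    needed = overload / PTDF_BATT
    return min(needed, P_BATT_MAX)

# simulated real-time measurements over a few 15 s intervals
flows = [48.0, 52.0, 55.0, 49.0]
orders = [mpc_step(f) for f in flows]
print(orders)  # battery set-points per interval
```

In the real setting the per-step decision is a MIP over several actions (battery, curtailment, switching) with action delays; the sketch only conveys the measure-model-decide cycle.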

10:00 | Approximate Nash equilibria in large nonconvex aggregative games PRESENTER: Kang Liu ABSTRACT. This paper shows the existence of $\mathcal{O}(\frac{1}{n^\gamma})$-Nash equilibria in $n$-player noncooperative sum-aggregative games where each player's cost function depends only on their own action and the average of all players' actions, is lower semicontinuous in the former, and is $\gamma$-H\"{o}lder continuous in the latter. Neither the action sets nor the cost functions need to be convex. For an important class of sum-aggregative games, which includes congestion games with $\gamma$ equal to 1, a gradient-proximal algorithm is used to construct an $\mathcal{O}(\frac{1}{n})$-Nash equilibrium in at most $\mathcal{O}(n^3)$ iterations. These results are applied to a numerical example of demand-side management of the electricity system. The asymptotic performance of the algorithm is illustrated as $n$ tends to infinity. |
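A small numerical sketch conveys the flavor of this kind of result. The quadratic cost and the projected-gradient dynamics below are invented for illustration and are not the paper's model or algorithm; the residual unilateral gain (the epsilon of the approximate equilibrium) is estimated by a grid search over deviations.

```python
import random

# Hypothetical n-player sum-aggregative game: player i has cost
# c_i(x) = (x_i - t_i)^2 + x_i * avg(x), with actions in [0, 1].
# Simultaneous projected gradient steps approximate a Nash equilibrium.

def avg(x):
    return sum(x) / len(x)

def cost(i, xi, x, t):
    a = (sum(x) - x[i] + xi) / len(x)   # average with player i's action replaced
    return (xi - t[i]) ** 2 + xi * a

def run(n, iters=500, step=0.2):
    random.seed(0)
    t = [random.random() for _ in range(n)]
    x = [0.5] * n
    for _ in range(iters):
        a = avg(x)
        g = [2 * (x[i] - t[i]) + a + x[i] / n for i in range(n)]
        x = [min(1.0, max(0.0, x[i] - step * g[i])) for i in range(n)]
    # epsilon = largest gain any single player gets by deviating (grid search)
    eps = 0.0
    grid = [k / 200 for k in range(201)]
    for i in range(n):
        best = min(cost(i, z, x, t) for z in grid)
        eps = max(eps, cost(i, x[i], x, t) - best)
    return eps

print(run(10), run(100))  # residual unilateral gains at the computed profiles
```

For this toy game the fixed point of the projected dynamics is a variational equilibrium, so the measured epsilon is essentially the grid-search resolution.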

11:00 | A distributed algorithm for economic dispatch of prosumer networks with grid constraints ABSTRACT. Future electrical distribution grids are envisioned as complex large-scale networks of prosumers (energy consumers with storage and/or production capabilities). In the context of economic dispatch, these prosumers optimize their operating points based on self-interested objectives. Nevertheless, these decisions must be accepted by a network operator, whose responsibility is to ensure a safe operation of the grid. In this work, we formulate the interaction among prosumers and the network operator in an economic dispatch problem as a generalized aggregative game, i.e., a set of inter-dependent optimization problems, each of which has a cost function and constraints that depend on the aggregated decisions of prosumers. We consider that each prosumer may have dispatchable generation and storage units as well as peer-to-peer and grid trading capabilities. In addition, we also impose a linear approximation of power-flow equations and operational limits as grid constraints handled by the network operator. Furthermore, we design a distributed mechanism with a convergence guarantee to an economically efficient and operationally safe configuration (i.e., a variational GNE) based on the preconditioned proximal point method. Our algorithm requires each prosumer to communicate with its trading partners and the network operator that handles grid constraints and aggregative variables. We perform numerical experiments on the IEEE 37-bus system to show the importance of grid constraints and of active prosumer participation, as well as the scalability of our algorithm. |

11:30 | On Convex Lower-Level Black-Box Constraints in Bilevel Optimization with an Application to Gas Market Models with Chance Constraints PRESENTER: Martin Schmidt ABSTRACT. Bilevel optimization is an increasingly important tool to model hierarchical decision making. However, the ability to model such settings makes bilevel problems hard to solve in theory and practice. In this paper, we add to the general difficulty of this class of problems by further incorporating convex black-box constraints in the lower level. For this setup, we develop a cutting-plane algorithm that computes approximate bilevel-feasible points. We apply this method to a bilevel model of the European gas market in which we use a joint chance constraint to model uncertain loads. Since the chance constraint is not available in closed form, this fits into the black-box setting studied before. For the applied model, we use further problem-specific insights to derive bounds on the objective value of the bilevel problem. By doing so, we are able to show that we solve the application problem to approximate global optimality. In our numerical case study we are thus able to evaluate the welfare sensitivity as a function of the achieved safety level of uncertain load coverage. |

12:00 | Distribution locational marginal prices via random block-coordinate descent methods ABSTRACT. The modern distribution network is undergoing an unprecedented reformation, thanks to the increased deployment of distributed energy resources (DERs) in the form of distributed generators, distributed storage, aggregators managing fleets of electric vehicles or groups of prosumers (e.g. owners of photo-voltaic panels). While the potential benefits of DERs are globally accepted, ill-managed operation could significantly undermine the desired outcome. Specifically, wrong control strategies could lead to drastic voltage fluctuations and supply-demand imbalances. With this in mind, a replication of the transmission-level locational marginal prices is much desired. The locational marginal price is the marginal cost of supplying an additional unit of demand at a bus. The price signals differ spatially and temporally, and are used to incentivize DER-generators and consumers to balance supply-demand, support voltage, and minimize system losses. The necessary extension to the distribution network, in the form of distribution locational marginal prices (DLMPs), has received significant attention in the current debate on how to integrate distributed resources into the entire energy system. Distributed multi-agent optimization techniques are one powerful approach to enable this envisioned transition towards a fully integrated energy system. A key question in this approach is the effective computation of DLMPs in a fully distributed way with minimal communication and computational complexity. In this paper we present a computational framework, derived from a new randomized block-coordinate descent algorithm for linearly constrained optimization problems, generalizing the state of the art. Preliminary numerical studies on a 15-bus test network show promising performance of our randomized block-coordinate descent strategy. |
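The following sketch illustrates one way a randomized coordinate method can respect a linear coupling constraint while discovering a common marginal price. The pairwise-exchange rule and the quadratic costs are invented for illustration; this is not the paper's algorithm.

```python
import random

# Toy linearly constrained dispatch problem:
#   minimize sum_i 0.5 * (x_i - a_i)^2   s.t.   sum_i x_i = d.
# Picking two coordinates at a time and exchanging power between them keeps
# the coupling constraint feasible at every iteration. At the optimum all
# marginal costs x_i - a_i agree; that common value plays the role of the
# price, here lambda = (d - sum(a)) / n.

def random_exchange_descent(a, d, iters=20000, seed=0):
    rng = random.Random(seed)
    n = len(a)
    x = [d / n] * n                        # feasible start
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        # exact minimization over the exchange x_i += t, x_j -= t
        t = ((a[i] - x[i]) - (a[j] - x[j])) / 2
        x[i] += t
        x[j] -= t
    return x

a = [1.0, 3.0, 5.0, 7.0]
x = random_exchange_descent(a, d=12.0)
prices = [xi - ai for xi, ai in zip(x, a)]
print(prices)  # all close to the common marginal price (d - sum(a)) / n = -1.0
```

Each exchange equalizes the two chosen marginal costs, so the iteration acts like gossip averaging on the gradients while never leaving the feasible set.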

11:00 | Viability and Invariance in a Wasserstein Space ABSTRACT. Optimal control problems stated in the space of Borel probability measures involve controlled non-local continuity equations whose solutions are curves of probability measures. Such equations appear in probabilistic models describing multi-agent systems. I will provide necessary and sufficient conditions for viability and invariance of proper subsets of the Wasserstein space P2 under controlled continuity equations. Viability means that for every initial condition in the set of constraints having a compact support there exists at least one trajectory of the control system starting at it and respecting state constraints forever. Invariance means that every trajectory of the control system starting in the set of constraints and having a compact support at the initial time never violates them. To describe these conditions I will introduce notions of contingent cone, proximal normal, directional derivative and Hadamard super/subdifferential on the (merely metric) space P2 in which the linear structure is absent. Results are applied to get existence and uniqueness of continuous solutions to a Hamilton-Jacobi-Bellman equation on the space of probability measures with compact support. These solutions are understood in the viscosity-like sense. |

11:30 | Optimal Shape of Stellarators for Magnetic Confinement Fusion PRESENTER: Rémi Robin ABSTRACT. Nuclear fusion is a nuclear reaction involving the use of light nuclei. In order to use nuclear fusion to produce energy, high temperature plasmas must be produced and confined. This is the objective of devices called tokamaks, steel magnetic confinement chambers having the shape of a toroidal solenoid. The magnetic field lines in tokamaks form helicoidal loops, whose twisting allows the particles in the plasma to be confined. Indeed, twisting counters the vertical drift that would otherwise affect particles, in opposite directions for ions and for electrons. Unfortunately, the twisting of the magnetic field requires a current to be driven within the plasma, which induces intermittency, instabilities, and other technological challenges. A possible alternative solution to correct the problems of drift of magnetically confined plasma particles in a torus is to modify the toroidal shape of the device, yielding the concept of the stellarator. This system has the advantage of not requiring plasma current and therefore of being able to operate continuously; but it comes at the cost of more delicate coils (non-planar coils) and of increased transport. Despite the promise of very stable steady-state fusion plasmas, stellarator technology also presents significant challenges related to the complex arrangement of magnetic field coils. These magnetic field coils are particularly expensive and especially difficult to design and fabricate due to the complexity of their spatial arrangement. In this talk, we are interested in the search for the optimal shape of stellarators, i.e., the best coil arrangement (provided that it exists) to confine the plasma. Let us assume that a target magnetic field B_T is known. It is then convenient to define a coil winding surface (CWS) on which the coils will be located. 
The optimal arrangement of stellarator coils then corresponds to the determination of a surface (the CWS) chosen to guarantee that the magnetic field created by the coils is as close as possible to the target magnetic field B_T. Of course, it is necessary to consider feasibility and manufacturability constraints. We will propose and study several relevant choices of such constraints. In this talk we will analyze the resulting problem as follows: first we will establish the existence of an optimal shape, using the so-called reach condition; then we will show the shape differentiability of the criterion and provide the expression of the differential in a workable form; finally, we will propose a numerical method and perform simulations of optimal stellarator shapes. We will also discuss the efficiency of our approach with respect to the literature in this area. |

12:00 | Inducing strong convergence of trajectories in dynamical systems associated to monotone inclusions with composite structure ABSTRACT. Zeros of the sum of a maximally monotone operator with a single-valued monotone one can be obtained as weak limits of trajectories of dynamical systems; strong convergence requires demanding hypotheses to be imposed. We extend an approach due to Attouch, Cominetti and coauthors, where zeros of maximally monotone operators are obtained as strong limits of trajectories of Tikhonov regularized dynamical systems, to forward-backward and forward-backward-forward dynamical systems whose trajectories strongly converge towards zeros of such sums of monotone operators under reasonable assumptions. This talk is based on joint work with Radu Ioan Bo\c{t}, Dennis Meier and Mathias Staudigl. |

11:00 | Market Integration of Behind-the-Meter Residential Energy Storage PRESENTER: Bárbara Rodrigues ABSTRACT. A new business opportunity beckons with the emergence of prosumers. We propose an innovative business model to harness the potential of aggregating behind-the-meter residential storage in which the aggregator compensates participants for using their storage system on an on-demand basis. A bilevel optimization model is developed to evaluate the potential of this proposed business model and determine the optimal compensation scheme for the participants. A Texas case study using real-world data confirms the year-round profitability of the model, showing that participants could earn on average nearly $1500 per year, and the aggregator could make an average profit of nearly $2000 per participant annually. The case study confirms that the proposed business model has potential, and that the main driver for a successful implementation is a suitable setting of the compensation paid to participants for using their energy storage system. |

11:30 | Intracity Placement of Charging Stations to Maximise Electric Vehicle Adoption ABSTRACT. We present a new model for finding the placement of electric vehicle (EV) charging stations across a multi-year period which maximises EV adoption. This work is an extension of that done by Anjos et al., and allows for a granular modelling of user characteristics and the value they attribute to the characteristics of the charging stations. This is achieved via the use of discrete choice models, with the users choosing a primary recharging method among the public charging stations or home charging. If none of these options is sufficiently attractive for a user, they can select the opt-out option, which indicates they do not purchase an EV. Instead of embedding an analytical probability model in the formulation, we adopt the approach proposed by Paneque et al., which uses simulation and pre-computes error terms for each option available to users for a given number of scenarios. This results in a bilevel model, with the upper level placing charging stations and the users in the lower level selecting the option which maximises their utility. Alternatively, under the assumption that the charging stations are uncapacitated (in terms of the number of users who can select each option of the discrete choice model), we can use the pre-computed error terms to calculate the users covered by each charging station. This allows for an efficient maximum covering model, which can find optimal or near-optimal solutions in an intracity context much more effectively than the bilevel formulation. |
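The pre-computed-error-term idea can be illustrated with a small simulation. All utilities, option names and sample sizes below are invented; the sketch only shows how drawing Gumbel error terms per scenario turns the choice model into a deterministic "pick the max utility" rule, with opt-out meaning no EV purchase.

```python
import math
import random

def gumbel(rng):
    """One standard Gumbel draw (the usual logit error term)."""
    return -math.log(-math.log(rng.random()))

def simulate_adoption(station_utils, home_util, optout_util,
                      n_users=200, n_scen=50, seed=1):
    """Share of (user, scenario) pairs in which an EV option beats opt-out."""
    rng = random.Random(seed)
    options = station_utils + [home_util, optout_util]
    adopt = 0
    for _ in range(n_users):
        for _ in range(n_scen):
            # pre-drawn error terms make each scenario a deterministic choice
            utils = [u + gumbel(rng) for u in options]
            choice = max(range(len(utils)), key=utils.__getitem__)
            if choice != len(options) - 1:   # anything but opt-out = EV adopted
                adopt += 1
    return adopt / (n_users * n_scen)

# two candidate stations vs. home charging vs. opting out of an EV
rate = simulate_adoption([1.0, 0.5], home_util=0.8, optout_util=1.2)
print(round(rate, 3))  # simulated EV adoption share
```

With uncapacitated stations, the same pre-drawn utilities directly tell which users each candidate station would cover, which is what enables the maximum covering reformulation.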

12:00 | Hierarchical Coupled Driving-and-Charging Model of Electric Vehicles, Stations and Grid Operators ABSTRACT. The decisions of operators from both the transportation and the electrical systems are coupled due to Electric Vehicle (EV) users' actions. Thus, decision-making requires a model of several interdependent operators and of EV users' both driving and charging behaviors. Such a model is suggested for the electrical system in the context of commuting, which has a typical trilevel structure. At the lower level of the model, a congestion game between different types of vehicles determines which driving paths and charging stations (or hubs) commuters choose, depending on travel duration and energy consumption costs. At the middle level, a Charging Service Operator sets the charging prices at the hubs to maximize the difference between EV charging revenues and electricity supplying costs. These costs directly depend on the supplying contract chosen by the Electrical Network Operator at the upper level of the model, whose goal is to reduce grid costs. This trilevel optimization problem is solved using an optimistic iterative algorithm and simulated annealing. The sensitivity of this trilevel model to exogenous parameters such as the EV penetration and an incentive from a transportation operator is illustrated on realistic urban networks. This model is compared to a standard bilevel model from the literature (with only one operator). |

14:15 | Rates of Convergence of First-Order Algorithms for Assignment Problems PRESENTER: Juan Pablo Contreras ABSTRACT. We compare several accelerated and non-accelerated first-order algorithms for computational optimal transportation or optimal assignment problems. Up to one exception, based on the notion of |

14:45 | Majorization-Minimization algorithms: New convergence results in a stochastic setting ABSTRACT. Many problems in image processing or in machine learning can be formulated as the minimization of a loss function F whose nature is not totally deterministic. In the differentiable context, such a situation arises when the gradient ∇F can only be evaluated up to a stochastic error. In this talk, we will focus our attention on the impact of noisy perturbations of the gradient on the convergence of majorization-minimization (MM) approaches. The latter consist of efficient and effective optimization algorithms that benefit from solid theoretical foundations and great practical performance [2, 4]. Our talk presents the convergence analysis for a versatile MM scheme called SABRINA (StochAstic suBspace majoRIzation-miNimization Algorithm) [1], generalizing our previous work [3]. Almost sure convergence results and convergence rate properties are obtained. The practical performance of SABRINA is illustrated by means of several numerical experiments. |

15:15 | FISTA restart using an automatic estimation of the growth parameter ABSTRACT. In this paper, we propose a novel restart scheme for FISTA (Fast Iterative Shrinkage-Thresholding Algorithm). This method, which is a generalization of Nesterov’s accelerated gradient algorithm, is widely used in the field of large convex optimization problems and provides fast convergence results under a strong convexity assumption. These convergence rates can be extended for weaker hypotheses such as the Łojasiewicz property, but this requires prior knowledge of the function of interest. In particular, most of the schemes providing fast convergence for non-strongly convex functions satisfying a quadratic growth condition involve the growth parameter, which is generally not known. Recent works show that restarting FISTA can ensure fast convergence for this class of functions without requiring any geometry parameter. We improve these restart schemes by providing a better asymptotic convergence rate and by requiring a lower computation cost. We present numerical results emphasizing that our method is efficient especially in terms of computation time. |
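A minimal FISTA with the classical gradient-based restart heuristic can be sketched as follows. This is a stand-in for intuition, not the paper's scheme with automatic growth-parameter estimation; the quadratic test function and all constants are invented.

```python
# FISTA with a gradient-based restart: reset the momentum whenever it points
# against the descent direction (the classical O'Donoghue-Candes test).

def fista_restart(grad, x0, step, iters):
    x, y = list(x0), list(x0)
    t = 1.0
    for _ in range(iters):
        g = grad(y)
        x_new = [yi - step * gi for yi, gi in zip(y, g)]
        # restart test: momentum step goes uphill along the current gradient
        if sum(gi * (xn - xi) for gi, xn, xi in zip(g, x_new, x)) > 0:
            t = 1.0
        t_new = (1 + (1 + 4 * t * t) ** 0.5) / 2
        y = [xn + (t - 1) / t_new * (xn - xi) for xn, xi in zip(x_new, x)]
        x, t = x_new, t_new
    return x

# ill-conditioned 2-D quadratic: f(x) = 0.5*(100*x0^2 + x1^2) - x0 - x1
grad = lambda v: [100 * v[0] - 1, v[1] - 1]
x = fista_restart(grad, [5.0, 5.0], step=1.0 / 100, iters=2000)
print(x)  # close to the minimizer [0.01, 1.0]
```

On quadratic-growth objectives such a restart recovers a fast (essentially linear) rate without knowing the growth parameter, which is the behavior the abstract sets out to improve.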

14:15 | A Quadratic Regularization for the Multi-Attribute Unit-Demand Envy-Free Pricing Problem ABSTRACT. We consider a profit-maximizing model for pricing contracts as an extension of the unit-demand envy-free pricing problem: customers aim to choose a contract maximizing their utility based on a reservation price and multiple price coefficients (attributes). Classical approaches suppose that the customers have deterministic utilities; then, the response of each customer is highly sensitive to price since it concentrates on the best offer. To circumvent the intrinsic instability of deterministic models, we introduce a quadratically regularized model of the customers' response, which leads to a quadratic program under complementarity constraints (QPCC). This provides an alternative to the classical logit approach, still allowing us to robustify the model while keeping a strong geometrical structure. In particular, we show that the customers' response is governed by a polyhedral complex, in which every polyhedral cell determines a set of contracts which is effectively chosen. Moreover, the deterministic model is recovered as a limit case of the regularized one. We exploit these geometrical properties to develop a pivoting heuristic, which we compare with implicit or non-linear methods from bilevel programming, showing the effectiveness of the approach. Throughout the paper, the electricity provider problem is our guideline, and we present a numerical study on this application case. |

14:45 | Optimal pricing for electric vehicle charging with customer waiting time ABSTRACT. We propose a bilevel optimization model to determine the optimal pricing for electric vehicle charging within a public charging station system by taking into account the waiting time of customers. We assume that the locations of the charging stations are fixed, and model the waiting time without using classical queuing theory. In the upper level of the bilevel model, the decision maker sets the price of electricity and the amount of energy available at each station. The latter quantity depends on the amount of renewable energy available at each time period. In the lower level, electric vehicle users select a charging station and a time for recharging the vehicle, depending on individual preferences. We present two linear models for this problem and explore how to solve them using mixed integer bilevel optimization methods. |

15:15 | A Multi-Leader-Follower Game for Energy Demand-Side Management PRESENTER: Didier Aussel ABSTRACT. A Multi-Leader-Follower Game (MLFG) corresponds to a bilevel problem in which the upper level and the lower level are defined by noncooperative Nash competition among the players acting at the upper level (the leaders) and, at the same time, among the ones acting at the lower level (the followers). MLFGs are known to be complex problems, but they also provide perfect models to describe hierarchical interactions among various actors of real-life problems. In this work, we focus on a class of MLFGs modelling the implementation of demand-side management in an electricity market through price incentives, leading to the so-called Bilevel Demand-Side Management problem (BDSM). Our aim is to propose some innovative reformulations/numerical approaches to efficiently tackle this difficult problem. Our methodology is based on the selection of specific Nash equilibria of the lower level through a precise analysis of the intrinsic characteristics of (BDSM). |

16:15 | Control of state-constrained McKean-Vlasov equations: application to portfolio selection ABSTRACT. We consider the control of McKean-Vlasov dynamics (or mean-field control) with probabilistic state constraints. We rely on a level-set approach which provides a representation of the value function of the constrained problem through an unconstrained one with exact penalization and running maximum cost. Our approach is then adapted to the common noise setting. Our work extends (Bokanowski, Picarelli, and Zidani, SIAM J Control Optim 54.5 (2016), pp. 2568–2593) and (Bokanowski, Picarelli, and Zidani, Appl Math Optim 71 (2015), pp. 125–163) to a mean-field setting. In contrast with these works, which study almost sure constraints, in the case of probabilistic constraints without common noise we do not require the existence of optimal controls for the auxiliary problem. The reformulation as an unconstrained problem is particularly suitable for the numerical resolution of the problem, achieved thanks to an extension of a machine learning algorithm from (Carmona, Laurière, arXiv:1908.01613, 2019). An application focuses on a mean-variance portfolio selection problem with a conditional value at risk constraint on the wealth. We are also able to directly solve numerically the primal Markowitz problem in continuous time without relying on duality. |

16:45 | A multilevel fast-marching method PRESENTER: Shanqing Liu ABSTRACT. We introduce a numerical method for approximating the solutions of a class of static Hamilton-Jacobi-Bellman equations arising from minimum time state constrained optimal control problems. We are interested in computing the value function and an optimal trajectory between two given points only. We show that this is equivalent to solving the state constrained equation on the subdomain which contains the true optimal trajectories between the two points. Our algorithm takes advantage of this good property. Instead of finding the optimal trajectories directly, by solving a discretized system on a fine grid, we first approximately find the subdomain containing the optimal trajectories by solving the system on a coarse grid. Then, we further discretize that subdomain with a fine grid, and solve the discretized system on that fine grid. The computation of approximate value functions on each grid is based on the fast-marching method. We show that using our algorithm, the error estimate for the computation of the value function is as good as the one obtained by discretizing the whole domain with the fine grid. Moreover, we show that the number of computation operations as well as the memory allocation needed for an error of $\varepsilon$ is of the order of $(1/\varepsilon)^d$, whereas classical methods need at least $(1/\varepsilon)^{2d}$. Under regularity conditions on the dynamics and value functions, this complexity bound reduces to $C^d \log(1/\varepsilon)$. |
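The coarse-then-fine principle can be illustrated with Dijkstra's method standing in for fast marching, on an invented grid maze (all sizes and the obstacle are made up). A coarse solve identifies a "tube" of near-optimal cells; the fine solve restricted to the refined tube reproduces the full fine solution while touching far fewer cells.

```python
import heapq

def dijkstra(allowed, src):
    """Shortest 4-neighbour path lengths over the 'allowed' cell set."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if nb in allowed and d + 1 < dist.get(nb, 10**9):
                dist[nb] = d + 1
                heapq.heappush(pq, (d + 1, nb))
    return dist

n = 16
wall = {(r, 8) for r in range(12)}                  # obstacle with a gap below
full = {(r, c) for r in range(n) for c in range(n)} - wall
src, dst = (0, 0), (n - 1, n - 1)

# 1) coarse pass: the same maze at half resolution
half = {(r, c) for r in range(n // 2) for c in range(n // 2)} - \
       {(r, 4) for r in range(6)}
ds = dijkstra(half, (0, 0))
dt = dijkstra(half, (n // 2 - 1, n // 2 - 1))
opt = ds[(n // 2 - 1, n // 2 - 1)]
tube = {p for p in half if ds[p] + dt[p] <= opt + 1}   # near-optimal cells

# 2) fine pass restricted to the refined tube
fine_tube = {(r, c) for (r, c) in full if (r // 2, c // 2) in tube}
d_tube = dijkstra(fine_tube, src)
d_full = dijkstra(full, src)
print(d_tube[dst], d_full[dst], len(fine_tube), len(full))
```

The sum ds + dt is the length of the best coarse path through each cell, so thresholding it near the coarse optimum prunes everything far from the optimal trajectories, which is the mechanism behind the improved complexity bound.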

17:15 | A novel mixed-integer description within the combined use of MPC and polyhedral potential field ABSTRACT. We extend the polyhedral sum function notion to the multi-obstacle case and show that the polyhedral support of its piecewise affine representation comes from an associated hyperplane arrangement. We exploit this combinatorial structure to provide equivalent mixed-integer representations and model a repulsive potential term for subsequent use in a Model Predictive Control (MPC) formulation. Hence, our main goal is to exploit the problem structure in order to simplify the end formulation. Specifically, the main improvements are: i) expanding the sum function, as defined for the single-obstacle case in \cite{Huy2020Ocean}, to a multi-obstacle environment; ii) exploiting the combinatorial structure of the problem (polyhedral shapes induce a hyperplane arrangement) to provide a novel mixed-integer description of the sum function; iii) implementing a repulsive potential field component in a mixed-integer MPC scheme in order to solve the navigation problem. |
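The link between polyhedral obstacles, hyperplane arrangements, and binary variables can be illustrated on an invented unit-box obstacle: each point's cell in the arrangement is identified by a sign vector, one bit per hyperplane, which is exactly what a binary variable per hyperplane encodes in a mixed-integer formulation.

```python
# Toy illustration (invented data): the hyperplanes bounding a polyhedral
# obstacle induce an arrangement of the plane; a point's cell is identified
# by which side of each hyperplane it lies on.

# hyperplanes a.x <= b (here: the four sides of a unit-box obstacle)
H = [((1, 0), 1.0), ((-1, 0), 0.0), ((0, 1), 1.0), ((0, -1), 0.0)]

def sign_vector(p):
    """One bit per hyperplane: 1 if p satisfies the inequality."""
    return tuple(int(a[0] * p[0] + a[1] * p[1] <= b) for a, b in H)

def inside_obstacle(p):
    # the obstacle is the single cell where every inequality holds
    return all(sign_vector(p))

print(sign_vector((0.5, 0.5)), inside_obstacle((0.5, 0.5)))  # (1, 1, 1, 1) True
print(sign_vector((2.0, 0.5)), inside_obstacle((2.0, 0.5)))  # (0, 1, 1, 1) False
```

Collision avoidance then amounts to forbidding the all-ones sign vector per obstacle, and the piecewise affine potential term is constant-slope on each cell of the arrangement.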

16:15 | Distributed Zero-Order Optimization under Adversarial Noise ABSTRACT. We study the problem of distributed zero-order optimization for a class of strongly convex functions. They are formed by the average of local objectives, associated with different nodes in a prescribed network. We propose a distributed zero-order projected gradient descent algorithm to solve the problem. Exchange of information within the network is permitted only between neighbouring nodes. An important feature of our procedure is that it can query only function values, subject to a general noise model that does not require zero mean or independent errors. We derive upper bounds for the average cumulative regret and optimization error of the algorithm which highlight the role played by a network connectivity parameter, the number of variables, the noise level, the strong convexity parameter, and smoothness properties of the local objectives. The bounds indicate some key improvements of our method over the state of the art, both in the distributed and standard zero-order optimization settings. We also comment on lower bounds and observe that the dependency on certain function parameters in the bound is nearly optimal. |
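A hedged single-node sketch of the zero-order ingredient (the objective, constants, and noise model below are invented; the paper's actual algorithm is distributed over a network, which is omitted here for brevity): a two-point function-value difference replaces the gradient, and the noise on the evaluations is bounded but neither zero-mean nor independent.

```python
import random

# Two-point zero-order estimate along random coordinate directions for a
# strongly convex objective, with bounded non-zero-mean noise on the values.

def zo_gradient_descent(f, dim, iters=3000, step=0.05, h=1e-3, seed=0):
    rng = random.Random(seed)
    x = [0.0] * dim
    for _ in range(iters):
        i = rng.randrange(dim)                         # random direction
        e = [h if j == i else 0.0 for j in range(dim)]
        noise = 1e-6 * rng.random()                    # bounded, not zero-mean
        # finite-difference estimate of the i-th partial derivative
        g = (f([a + b for a, b in zip(x, e)]) + noise - f(x)) / h
        x[i] -= step * g
    return x

# strongly convex quadratic with minimizer (1, -2, 3)
target = [1.0, -2.0, 3.0]
f = lambda v: sum((vi - ti) ** 2 for vi, ti in zip(v, target))
x = zo_gradient_descent(f, 3)
print([round(v, 2) for v in x])  # approaches the minimizer
```

The bias caused by the noise is of order (noise level)/h, which is why the smoothing parameter h appears explicitly in the regret bounds of methods of this kind.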

16:45 | Methods for Predicting Remaining Useful Life on Censored Data PRESENTER: Alonso Silva ABSTRACT. The application setting of this work is the estimation of remaining useful life from medium- to large-scale datasets with a subset (10%-50%) of right-censored data. The main task of survival analysis is to estimate the probability distribution of the time until an event of interest occurs. Right censoring occurs when some samples have not encountered the event of interest during the follow-up period. We restrict our study to endogenous and exogenous explanatory variables that are constant over time. Traditional regression and classification methods are not suited to capturing both the event and temporal aspects. We propose to compare parametric, semi-parametric, and machine learning models for estimating remaining useful life. We conducted an in-depth comparative study of these different lifetime prediction methods using both public and synthetic data. Synthetic data provide the ground truth for censored samples and allow control over the number of available samples, the type of censoring, and the censoring ratio. The goal here is not to propose a new solution, but to identify the most promising existing solutions and to analyze the impact of censoring on the obtained performance. |

17:15 | Experimental Comparison of Ensemble Methods and Time-to-Event Analysis Models Through Two Different Scores ABSTRACT. Time-to-event analysis is a branch of statistics whose popularity has grown over the last decades due to its many application fields. We present in this paper a comparison between semi-parametric, parametric and machine learning methods through two different scores that were applied to three datasets. We also present an aggregation method for time-to-event analysis that on average outperforms the best predictor. Finally, we present simulation results in order to enrich the comparison between the two scores while varying the number of samples and the percentage of censored data. |