
08:30-10:00 Session 14A: Optimization and inference in transmission networks

The session is dedicated to optimization and inference problems arising in gas transmission networks. Presentations cover global optimization methods for non-linear, non-convex optimization problems; analysis of network properties for simplified robust optimization; and algorithms for inferring network structure and parameters from physical measurements.

Location: Lantana A
Algorithms for Identifying Optimal Inspection Paths in Pipe Networks
SPEAKER: Thomas Chen

ABSTRACT. The inspection of deteriorating water distribution pipes is an important process for utilities. It helps them gain a better understanding of the state of the system and aids decision making regarding replacement and rehabilitation operations. Due to physical and regulatory constraints and budgetary considerations, utilities can only inspect a small fraction (typically ~2%) of the system annually. There is a need for an optimization process that enumerates candidate paths that both capture pipes at high risk of failure and account for the physical limitations of inspection tools. Such limitations include reducing the number of feature changes along the path (material, diameter, etc.) to avoid numerous recalibrations, as well as avoiding any branching or looping structures. This paper examines the use of various optimization algorithms (genetic algorithms, simulated annealing, greedy search) in identifying inspection paths that maximize the inclusion of the highest-risk pipes while reducing the number of pipe feature changes along the path. A scoring system was developed to compare candidate inspection paths, and the optimization algorithms were applied to an example grid network as well as a virtual water distribution system. Both case studies demonstrated that genetic algorithms were most effective in identifying inspection paths that capture high-risk pipes.
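A scoring system of the kind described could, as a rough sketch, trade off captured risk against feature changes. The field names, penalty weight, and data below are illustrative assumptions, not the authors' actual scoring system.

```python
# Hypothetical path-scoring sketch: total failure risk captured along an
# inspection path, minus a penalty per feature change (material or diameter).
# All names and weights here are assumptions for illustration.

def feature_changes(path):
    """Count positions where material or diameter changes between consecutive pipes."""
    return sum(
        1
        for prev, curr in zip(path, path[1:])
        if prev["material"] != curr["material"] or prev["diameter"] != curr["diameter"]
    )

def score_path(path, change_penalty=1.0):
    """Risk captured minus a penalty for each recalibration-forcing feature change."""
    return sum(p["risk"] for p in path) - change_penalty * feature_changes(path)

pipes = [
    {"risk": 0.9, "material": "cast iron", "diameter": 200},
    {"risk": 0.7, "material": "cast iron", "diameter": 200},
    {"risk": 0.8, "material": "PVC", "diameter": 150},
]
print(score_path(pipes, change_penalty=0.5))  # 2.4 risk captured - 0.5 penalty ≈ 1.9
```

A heuristic such as a genetic algorithm would then search over candidate paths using this score as the fitness function.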

Learning and Control in Physical Flow Networks

ABSTRACT. Physical flow networks are infrastructure networks that allow the flow of physical commodities through edges between their constituent nodes; examples include power grids, natural gas transmission networks, and water pipelines. In such networks, the flow on each edge is characterized by a function of the nodal potentials on either side of the edge. Further, the net flow in and out of each node is conserved. Learning the structure and state of physical networks is necessary for optimal control as well as to quantify their privacy needs. We consider radial flow networks and study the problem of learning the operational network from a loopy graph of candidate edges using statistics of nodal potentials. Under realistic conditions on the statistics of nodal injections (consumption or production), we provide a greedy structure learning algorithm with quasilinear computational complexity in the number of candidate edges in the network. Our learning framework is very general due to two significant attributes. First, it is independent of the specific marginal distributions of nodal potentials and only uses order properties in their second moments. Second, the learning algorithm is agnostic to the exact flow functions that relate edge flows to the corresponding potential differences, and is applicable to a broad class of networks with monotonic flow functions. We demonstrate the efficacy of our work through realistic simulations on diverse physical flow networks and discuss possible extensions to other regimes.
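The abstract does not give the algorithm itself; as a loose illustration of greedy tree recovery from second-moment statistics, one could rank candidate edges by the empirical variance of potential differences and keep the smallest ones, Kruskal-style. Everything below (the statistic, the data, the tie to the paper's method) is an assumption.

```python
# Loose illustration (NOT the authors' algorithm): recover a radial network from
# a loopy set of candidate edges by greedily keeping edges with the smallest
# empirical variance of nodal-potential differences, subject to acyclicity.

def learn_tree(n_nodes, candidate_edges, samples):
    """samples: list of potential vectors, one entry per node."""
    def var_diff(i, j):
        diffs = [s[i] - s[j] for s in samples]
        mean = sum(diffs) / len(diffs)
        return sum((d - mean) ** 2 for d in diffs) / len(diffs)

    parent = list(range(n_nodes))  # union-find keeps the result acyclic
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for i, j in sorted(candidate_edges, key=lambda e: var_diff(*e)):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Potentials with independent increments along the true chain 0-1-2, so the
# spurious candidate edge (0, 2) shows the largest potential-difference variance.
samples = [[0, 0.1, 0.3], [1, 0.9, 0.7], [2, 2.1, 1.9], [3, 2.9, 3.1]]
print(learn_tree(3, [(0, 1), (1, 2), (0, 2)], samples))  # [(0, 1), (1, 2)]
```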

Monotonicity in Dissipative Networks and Applications to Robust Optimization
SPEAKER: Marc Vuffray

ABSTRACT. Dissipative flow networks model flow of fluids or commodities across a network. The flow dynamics on edges are governed by non-linear dissipative partial differential equations. The dynamics on adjacent edges are coupled through Kirchhoff-Neumann boundary conditions that also account for the injection parameters at the nodes. We establish a monotonicity property which states that the ordering of the initial states (e.g. density) is preserved throughout the time evolution of the system whenever the nodal injection parameters also obey the same ordering. We show that the dynamic system resulting from an appropriate choice of spatial discretization of the system of PDEs inherits this monotonicity property and can be used within simulation and optimization. We also prove a monotonicity property for dissipative networks in steady state and establish a connection between the dynamic and steady state results. These results enable significant simplification in the representation and algorithms for robust optimization and control problems under uncertain nodal injections.

08:30-10:00 Session 14B: Decomposition Methods for Optimization in Natural Resources
Location: Lantana B
On the Benefits of Enumeration for Transition Models

ABSTRACT. Pure enumeration strategies for determining an optimal solution to an integer program are not, in general, efficient. However, in certain cases, fixing a subset of variable values results in a special structure which can be exploited. Enumerating over the fixed variable values and solving many easier problems expedites solutions relative to solving a monolith via traditional branch-and-bound. We illustrate using examples from a model designed to determine the transition from open pit to underground mining. For a strategic model, we exploit a pure longest path structure, while for an operational model, we employ an algorithm designed to quickly solve the LP relaxation of a precedence-constrained knapsack problem, and combine that with a fast heuristic that produces integer solutions. In both cases, we present numerical results from real-world instances.
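The longest-path structure mentioned for the strategic model can be illustrated on a generic DAG: given a topological order, single-source longest paths are computed in linear time. The graph below is made up, not mining data.

```python
# Sketch: longest path from a source in a DAG, via topological order (Kahn's
# algorithm) followed by edge relaxation. Illustrative graph, not the paper's model.
from collections import defaultdict

def longest_path(n, edges, source):
    """edges: list of (u, v, weight) on a DAG with nodes 0..n-1."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # Kahn's algorithm for a topological order.
    order, stack = [], [v for v in range(n) if indeg[v] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # Relax edges in topological order.
    dist = [float("-inf")] * n
    dist[source] = 0
    for u in order:
        if dist[u] > float("-inf"):
            for v, w in adj[u]:
                dist[v] = max(dist[v], dist[u] + w)
    return dist

d = longest_path(4, [(0, 1, 5), (0, 2, 3), (1, 3, 6), (2, 3, 7)], source=0)
print(d[3])  # 11, via 0 -> 1 -> 3
```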

Stochastic Mixed Integer Programming for Design of a Microgrid under Heuristic Operation

ABSTRACT. Using stochastic integer programming, we optimize system design in the face of a restricted class of policies governing realistic system operation over a long time horizon. In particular, we focus on problems that are loosely coupled with respect to time, in which constraints that span consecutive time periods model inventory; our proposed class of policies resets the inventory levels at regular intervals. This leads to a natural decomposition yielding upper and lower bounds, which we can compute quickly. We illustrate the application of these ideas using a model that seeks to design and operate a microgrid to support a forward-operating base (FOB) in a remote location over a year-long horizon with hourly fidelity. The integer program minimizes cost under load and photovoltaic (PV) uncertainty. Because a FOB is isolated from a larger grid, our system must be sufficiently robust to provide its own power for all operations. Design decisions include the number and type of diesel generators, PV systems and batteries in use at the FOB; operations decisions for each time period include the power output of all generators and PV systems, and power allocated to charging or discharging the batteries. Finally, we extend our approach to other models in the literature with similar structure.

A constructive heuristic and bounds for optimal open-stope underground mine design

ABSTRACT. Underground mine design optimization for a top-down, open-stope retreat operation consists of determining which blocks should be designated as support pillars (and left in situ) and which should be designated as stopes (and extracted and processed) to maximize profit. Constraints in making this selection consist of geotechnical restrictions (pillar stress, hydraulic radius, pillar length-to-width ratios, and stope-to-pillar extraction ratios) as well as "common sense" packing constraints. Geotechnical constraints are difficult to express, and the resulting problem instances are large and complicated to solve. Therefore, we devise a constructive heuristic; this method exploits our integer programming structure to produce a structurally stable pillar placement design that increases mine profitability by: (1) shifting the pillars to find the most desirable stopes to extract, (2) allowing less ore to be left in situ as pillars, and (3) reducing the total cost of stope slotting by creating fewer pillars. We demonstrate the quality of our solutions using data from a real-world mining instance.

08:30-10:00 Session 14C: Radiation Therapy Treatment Planning II
Location: Plumeria A
A two-stage method for IMPT treatment beam angle optimization incorporating internal organ motion

ABSTRACT. Beam angle selection in intensity modulated proton therapy (IMPT) treatment planning is a large combinatorial problem. Two major issues with currently available beam orientation selection algorithms are that the selection of input parameters greatly impacts treatment plan quality, and that the algorithms require excessive computing time, especially when incorporating uncertainty settings during treatment. To overcome these weaknesses of conventional beam angle optimization algorithms, we propose a two-stage beam angle optimization (TSBAO) method for IMPT treatment planning. A beam angle score function incorporating uncertainty was introduced to evaluate beam angle feasibility and the similarity between possible beam orientations. In the first stage of TSBAO, the candidate beam angles were grouped by a p-median clustering algorithm. In the second stage, a bi-level local neighborhood search (LNS) algorithm was used to find the final beam angle solution. Support vector machines (SVMs) were integrated for beam angle classification to shrink the candidate beam space. Five optimization methods were implemented for comparison: simple LNS, simulated annealing (SA), genetic algorithms (GA), hybrid SA-LNS, and hybrid GA-LNS. Four clinical cases were tested, and the results show that the TSBAO method consistently converged to a better objective value within a short time. All plans with beam angles selected by the TSBAO method generated more uniform and robust dose distributions for the target.

A novel column-generation matheuristic to Volumetric Modulated Arc Therapy (VMAT) treatment planning
SPEAKER: Mehdi Mahnam

ABSTRACT. Radiotherapy works by damaging the genetic material within cancerous cells; because normal cells are also affected during this process, radiation is shaped to the geometry of the tumor so as to maximize the dose delivered to the tumor while minimizing the adverse effects of radiation on healthy tissues. Volumetric-Modulated Arc Therapy (VMAT) is a new form of external radiation therapy in which the beam rotates around the patient's body while the beam is on. Thus, the treatment planning problem consists of selecting a delivery sequence of beam shapes and determining the optimal dose rate (intensity) and rotation speed of the beam. In this project, a novel optimization model for VMAT treatment planning is proposed in which the gantry speed, the dose rate, and the leaf trajectories are optimized simultaneously, minimizing treatment time while maximizing treatment quality. For that purpose, a new mixed integer programming model is proposed and solved by a column-generation-based heuristic algorithm that considers both highly conformal dose distribution and computational efficiency. Moreover, a practical objective function, the total quadratic dose deviation from prescribed lower and upper bounds, is minimized. Finally, we evaluate the quality and efficiency of the method on a real prostate case.

Real-time radiation treatment planning with optimality guarantees through "cluster and bound" methods
SPEAKER: Baris Ungun

ABSTRACT. Radiation therapy is widely used in cancer treatment; however, plans necessarily involve tradeoffs between tumor coverage and damage to healthy tissue. While current hardware can deliver custom-shaped beams from any angle around the patient, choosing (from all possible beams) an optimal set of beams that maximizes tumor coverage while minimizing collateral damage and treatment time is intractable. Furthermore, even though planning algorithms used in practice consider highly restricted sets of candidate beams, the time per run combined with the number of runs required to explore clinical tradeoffs results in planning times of hours to days.

We propose a suite of "cluster and bound" methods that we hypothesize will (a) yield higher quality plans by optimizing over much (i.e., 100 times) larger sets of candidate beams, and/or (b) reduce planning time by allowing clinicians to search through candidate plans in real time. Our methods hinge on phrasing the treatment planning problem as a convex problem.

To handle large scale optimizations, we form and solve compressed approximations to the full problem by clustering beams (i.e., columns of the dose influence matrix used in the optimization) or voxels (rows of the matrix). Duality theory allows us to bound the error incurred when applying an approximate problem’s solution to the full problem. We observe that beam clustering and voxel clustering both yield excellent solutions while enabling a 10-200-fold speedup.

Development and Applications of "Dose Cloud," a Tool for Large Scale Radiation Therapy Optimization Research

ABSTRACT. It is generally difficult for optimization researchers to obtain clinical radiation therapy data. Few are able to access enough data to study the fundamental interplay between patient geometry, objective functions, weight selections, and achievable dose distributions beyond the proof of concept phase. Moreover, future automation of radiation therapy treatment planning processes will likely hinge on machine learning methods, such as deep learning, which require large amounts of data to perform well. Therefore, we developed a tool to generate arbitrarily large sets of realistic radiation therapy data for optimization, automation, and machine learning research. The key components of the tool are an anatomical shape model and a shape-based radiation dose calculation algorithm. A large set of virtual patient anatomies are created through a weighted sum of principal components which have been computed from a small set of real patient shape model data. These principal components contain no patient specific information and can be shared freely among the research community. For each virtual patient, a dose (or system) matrix is generated using its body shape as input to our novel dose calculation algorithm. The dose matrix and patient anatomy alone are sufficient to compute any arbitrary radiation therapy dose optimization. With a large collection of these virtual patient dose matrices, or what we call a "Dose Cloud," we conduct a few proof of concept studies to demonstrate the tool's utility and propose some strategies for the future of automated radiation therapy treatment planning based on large scale optimization and machine learning.

08:30-10:00 Session 14D: Studies in Quality of Care Improvement
Location: Plumeria B
Hospital Payment Schemes Under Competition
SPEAKER: Zheng Han

ABSTRACT. We consider two hospitals competing as profit-maximizers by choosing their own quality levels while operating under different payment schemes (i.e., bundled payment vs. fee-for-service). A game-theoretic model is built to derive and categorize all possible equilibrium outcomes. We further provide analysis on equilibrium operating parameters and then develop intuitions for healthcare policies which are presented with numerical illustrations.

Optimal Inpatient Discharge Planning Under Uncertainty

ABSTRACT. We study the inpatient discharge planning problem to enable efficient design of optimal discharge plans on a daily basis. If some of the discharge processes are delayed, the ensuing backup in the upstream units will cause inpatient admission delays. Hence, it is critical to trade off the competing issues of upstream patient boarding (e.g., Emergency Department (ED) boarding), inpatient discharge lateness, and Inpatient Unit (IU) workload integration. We develop a novel two-stage stochastic programming model with uncertain IU discharge processing time and IU bed request time. Using data from a Texas hospital, we calibrate our model and fine-tune our solution method.

The Impact Of Implementing Full Capacity Protocol On The Operational Performance In An Emergency Department
SPEAKER: Mazhar Arikan

ABSTRACT. Full capacity protocol (FCP) is a set of guidelines that coordinates patient flow when the emergency department (ED) is overcrowded. Utilizing data from a large urban teaching hospital, we show that the operational performance of the study ED improved after the implementation of FCP. Furthermore, we find additional improvement when the FCP is triggered. We propose recommendations to further improve the operational outcomes under FCP.

08:30-10:00 Session 14E: Optimization in urban transportation

This session covers optimization models arising in urban transportation, including speed optimization and vehicle routing.

Location: Camellia A
Optimal Policies for Risk-Averse Electric Vehicle Charging with Spot Purchases
SPEAKER: Daniel Jiang

ABSTRACT. We consider the sequential decision problem faced by the manager of an electric vehicle (EV) charging station, who aims to satisfy the charging demand of the customer while minimizing cost. Since the total time needed to charge the EV up to capacity is typically less than the amount of time that the customer is away, there are opportunities to exploit electricity spot price variations within some time window. However, the return time of the customer is uncertain, so there exists the risk of an insufficient charge. We formulate the problem as a finite horizon Markov decision process (MDP) and consider a risk-averse objective function by optimizing under a dynamic risk measure constructed using a convex combination of expected value and conditional value at risk (CVaR). For the first time in the literature, we provide an analysis of the effect that risk parameters, e.g., the risk level α used in CVaR, have on the structure of the optimal policy. We show that becoming more risk-averse in the dynamic risk measure sense corresponds to the intuitively appealing notion of becoming more risk-averse in the order thresholds of the optimal policy. This result allows us to develop computational techniques for approximating a spectrum of risk-averse policies generated by varying the parameters of the risk measure. Finally, we present numerical results for a case study using spot price data from the California ISO (CAISO), examining the Pareto optimality of our policies against practical metrics of risk and reward.
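The one-step risk measure described, a convex combination of expected value and CVaR, can be sketched on an empirical cost sample as follows. The empirical CVaR discretization and parameter names are illustrative assumptions; the paper composes such measures dynamically over the horizon.

```python
# Sketch of a static mean-CVaR risk measure on sampled costs (illustrative only).

def cvar(costs, alpha):
    """Empirical CVaR_alpha: mean of the worst alpha-fraction of costs."""
    worst = sorted(costs, reverse=True)
    k = max(1, int(round(alpha * len(costs))))
    return sum(worst[:k]) / k

def mean_cvar(costs, alpha, lam):
    """(1 - lam) * E[cost] + lam * CVaR_alpha[cost]; lam in [0, 1] sets risk aversion."""
    mean = sum(costs) / len(costs)
    return (1 - lam) * mean + lam * cvar(costs, alpha)

costs = [1.0, 2.0, 3.0, 10.0]
print(mean_cvar(costs, alpha=0.25, lam=0.0))  # risk-neutral: 4.0 (the plain mean)
print(mean_cvar(costs, alpha=0.25, lam=1.0))  # fully risk-averse: 10.0 (worst 25%)
```

Varying `lam` between 0 and 1 traces out the spectrum of risk-averse policies the abstract refers to.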

The speed optimization problem over a fixed route

ABSTRACT. Speed control provides one layer of flexibility in transportation operations, especially in helping meet service level constraints, improve energy efficiency, and reduce greenhouse gas emissions. The implementation of speed control is becoming prevalent with the increasing market penetration of connected and automated vehicles. The speed optimization problem (SOP) thus appears as an important sub-problem in many complex transportation and logistics practices. The goal of the SOP is to find the speed over each leg of a route that minimizes the total cost over the route, while respecting speed limits and service time-window constraints. The cost over each leg is assumed to be a general convex function of the travel speed. The SOP can be formulated as a min-convex-cost network flow problem. We design an algorithm that runs in time quadratic in the number of nodes on the route. Computational experiments show that the algorithm is very efficient in solving large instances.
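As a minimal illustration of the SOP's ingredients (not the paper's network flow algorithm), a single leg reduces to minimizing a convex cost over a speed interval that the time window further tightens; ternary search then suffices because the cost is convex. All parameter names and the cost function below are assumptions.

```python
# Single-leg SOP sketch: choose speed v in [v_min, v_max] so the arrival time
# d / v lands in [t_early, t_late], minimizing a convex cost c(v). Illustrative only.

def optimal_speed(cost, d, v_min, v_max, t_early, t_late, tol=1e-9):
    # Time-window constraints become speed bounds: v >= d/t_late, v <= d/t_early.
    lo = max(v_min, d / t_late)
    hi = min(v_max, d / t_early) if t_early > 0 else v_max
    if lo > hi:
        raise ValueError("infeasible leg")
    # Ternary search is valid because cost is convex in v.
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Example: fuel-like convex cost minimized at v = 60; leg of 120 distance units
# with arrival window [1.5, 3.0] forces the speed into [40, 80].
v = optimal_speed(lambda v: (v - 60) ** 2 + 100, d=120, v_min=30, v_max=100,
                  t_early=1.5, t_late=3.0)
print(round(v, 3))  # ≈ 60.0
```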

Vehicle Routing Problems with Time Windows and Convex Node Costs
SPEAKER: Yongjia Song

ABSTRACT. We consider a variant of the vehicle routing problems with time windows, where the objective includes the inconvenience cost modeled by a convex function of service start time on each node. We formulate this mixed integer convex program using a novel set partitioning formulation, by considering all combinations of routes and block structures over the routes. We apply a branch-and-price framework to solve this formulation. To solve the pricing problem, we leverage a recursive algorithm that solves the optimal service time scheduling problem given a fixed route, and develop a labeling algorithm with a dominance rule that is easy to check. In our computational study, we show the effectiveness of the labeling algorithm compared to state-of-the-art mixed integer convex programming solvers on an extensive set of vehicle routing problems with time windows and quadratic inconvenience costs.

08:30-10:00 Session 14F: Assessing and Improving Solution Resilience
Location: Camellia B
Bridging sociotechnical networks for critical infrastructure resilience: South Korean Case Study

ABSTRACT. Although critical infrastructure resilience depends on the functionality of technical components and the actions taken by people to adapt to unforeseen events, the majority of empirical work analyzes infrastructures in isolation from their social context, limiting the practicality of derived conclusions. Network science has emerged as one approach to model infrastructure systems, yet the disparate nature of the nodes, links, and flows among social and technical networks limits the feasibility of combining both into a single analysis. Here, we study how social and technical networks influence each other by linking a network model of blackout management in South Korea (Korea hereafter) to a corresponding electric power grid network. First, we develop an interorganizational network from Korean crisis management protocols and use betweenness centrality to determine organizational contributions to information sharing and crisis coordination during blackouts. Second, we implement cascading failure models in a Korean power grid network and generate time-varying sub-networks for blackout response by relating predicted infrastructure losses to organizations. We analyze the resulting social networks to characterize which organizations are most impacted by blackouts and which organizations contribute to crisis coordination across multiple scenarios. Results show that Korean power companies receiving equivalent treatment in crisis protocols are affected by blackouts in markedly different ways. Also, the comparison between static and time-varying analyses indicates that the roles of organizations shift depending on the methods used. Taken together, this work generates new understanding of how blackouts influence interactions among Korean organizations and is a first step towards sociotechnical network analysis.
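Betweenness centrality, used above to rank organizations, can be computed with Brandes' algorithm; the sketch below runs it on a made-up three-node graph rather than the Korean protocol data.

```python
# Brandes' betweenness centrality for an unweighted, undirected graph.
# The graph here is illustrative, not the interorganizational network in the talk.
from collections import deque

def betweenness(adj):
    """adj: dict node -> list of neighbors. Returns node -> betweenness score."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and recording predecessors.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected shortest path is counted once per endpoint, so halve.
    return {v: b / 2 for v, b in bc.items()}

# Path graph A - B - C: every A-to-C shortest path passes through B.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(betweenness(adj))  # {'A': 0.0, 'B': 1.0, 'C': 0.0}
```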

The impact of prior information on the quantification of resilience-based importance measures using Bayesian kernel methods
SPEAKER: Hiba Baroud

ABSTRACT. Critical infrastructure systems have become increasingly vulnerable to disruptive events resulting from natural hazards, extreme weather, accidents, and manmade attacks, among others. The ability of an infrastructure to withstand these disruptive events and recover rapidly with the minimum amount of damage and loss is vital to a society's welfare and safety in the event of disasters. As a result, modeling the resilience of critical infrastructure systems has become of interest to risk managers, decision makers, and researchers. The most recent advances in resilience modeling are approaches that utilize qualitative tools or simulation and rely heavily on assumptions about prior knowledge, probability distributions, and potential risk scenarios. This work analyzes the impact of decision makers' prior knowledge on the accuracy and interpretability of quantitative methods for modeling resilience. More specifically, the analysis focuses on data-driven Bayesian models, such as Bayesian kernel methods, that are used to estimate resilience-based importance measures. Integrating Bayesian methods with kernel methods has recently garnered attention, as this tool (i) consists of a classification algorithm which provides probabilistic rather than deterministic outcomes, and (ii) allows decision makers to gather information from multiple sources, such as experts in the field, historical events, and characteristics of the system, to make informed decisions. As such information is often challenging to gather, this work investigates the definition of prior probability distributions in Bayesian kernel models and their impact on estimated resilience metrics. The outcome of the research will guide data collection to improve future resilience modeling.

Assessing power system resilience to adverse weather events
SPEAKER: Andrea Staid

ABSTRACT. The electric power system is vulnerable to both man-made and natural hazards. Consequences from resulting failures can have huge financial, social, and health impacts. The ability to better withstand and recover from adverse events will result in fewer service disruptions and lower costs in the long run. In order to improve power system resilience, we must first understand the critical threats, potential damage, and how consequences can propagate through the system. From there, we can work to mitigate these consequences through planning and operational decision-making. Here we focus on improving resilience to adverse weather. Namely, we seek to reduce the loss of electric load to customers and consider the consequences in order to assess current system resilience and identify improvements. We use actual data from a large electric utility company to develop realistic scenarios of adverse weather and resulting outages seen on their network. These scenarios are fed into a stochastic programming formulation using a DC optimal power flow model. We evaluate several different mitigation strategies and characterize the impact on system resilience. We run the model for both baseline and extreme weather scenarios. Realistic scenarios based on historical data increase confidence in the results and in the effectiveness of implementing mitigation strategies. We also highlight the challenges of data availability and of incorporating real data into optimization models. The level of detail at which the data is collected directly determines the rigor of the scenarios and the confidence with which they can be used for decision-making.

Improving Power Systems Resilience to Geomagnetic Disturbances (GMDs)

ABSTRACT. Solar storms, or geomagnetic disturbances (GMDs), are high-impact, low-frequency (HILF) events that can wreak havoc on electric power grids. In this presentation we describe the formulation of a stochastic non-linear program for mitigating the lack of sufficient reactive power support caused by GMDs. This stochastic program is applied to electric power systems for long-term planning and short-term operations. We demonstrate the applicability of the optimization results on several power system test cases, leveraging a resilience quantification framework previously proposed by Sandia National Laboratories.

08:30-10:00 Session 14G: Controlling Power Flows
Location: Verbana
Robust AC optimal power flow

ABSTRACT. Dealing with the uncertainty in power systems brought by renewable energy integration is a major challenge facing the electricity industry in the transition to a sustainable future. Stochastic and robust optimization based economic dispatch (ED) models are the forerunners for a new generation of operational schemes for future power systems with significant wind and solar power. Despite the extensive research efforts, most of the existing stochastic and robust dispatch models rely on the DC power flow model. As renewable penetration deepens into all levels of power systems, it becomes crucial to accurately model the true physical laws in the power flow equations through the AC power flow model, especially during the real-time ED process. However, the nonlinearity, non-convexity, and large scale of the AC power flow model make an ED model under uncertainty a very challenging computational and modeling problem. In this talk, we present some recent progress in robust AC optimal power flow, including new uncertainty set models and strong convex relaxations.

Voltage regulation using limited communication

ABSTRACT. The rapid growth of power generated from renewable sources demands increasingly sophisticated voltage regulation in distribution networks. Unfortunately, recent research shows that local controllers regulating the voltage through reactive power generation generally fail to achieve the desired regulation unless there is some communication between the controllers. However, it is unclear how much information the local controllers need to communicate in order to achieve the desired voltage regulation. To determine the required level of communication, we investigate a general class of controllers able to (i) measure the local voltage magnitudes and (ii) communicate a limited number of bits to their physical neighbors. We investigate how these controllers can achieve the desired regulation asymptotically by communicating only a few bits of information. Moreover, we study upper bounds on the number of communicated bits needed to ensure that the desired regulation is approximately achieved up to a predefined accuracy. We also study how the convergence speed of these controllers improves when more bits are communicated before each control action. Finally, we illustrate the results in numerical simulations.

Communication-Cognizant Hybrid Voltage Control

ABSTRACT. Voltage regulation in distribution networks is challenged by increasing penetration of distributed energy resources (DERs). This paper develops a hybrid voltage control (HVC) strategy consisting of both local and distributed designs to optimally set the reactive power outputs from DERs' power-electronics interfaces. Towards a specially-designed centralized objective, we propose to improve existing distributed control strategies that use only neighborhood information exchanges by incorporating instantaneous local voltage measurements. Our design can dynamically adapt to system operating conditions while being cognizant of varying communication scenarios. Interestingly, the HVC framework naturally boils down to a surrogate local control scheme under the worst-case scenario of a total link failure.

Dynamic Electric Transmission Expansion considering time development uncertainty
SPEAKER: Juan Andrade

ABSTRACT. There is increasing interest in the development of utility-scale renewable generation, which imposes the need to develop new transmission infrastructure. On the other hand, due to the negative perception of this type of infrastructure, its development is a contentious process. As a consequence, transmission projects are delayed in uncertain ways. These delay uncertainties can increase the operational costs of power systems as well as affect system reliability. These negative impacts could be reduced by introducing mitigation considerations into the design of transmission infrastructure, which increases capital costs. Unfortunately, traditional approaches to transmission expansion optimization do not quantify the value of reducing this uncertainty. This paper presents a multi-stage stochastic formulation with recourse for the dynamic transmission expansion problem, introducing the time uncertainty associated with the development of new transmission. The formulation allows the selection of different transmission expansion alternatives and the timeline required. The problem is solved using the Progressive Hedging algorithm.

10:00-10:30 Coffee Break
10:30-11:30 Session 15: Markov Decision Processes in Healthcare

Tutorial by Prof. Andrew Schaefer

Location: Primrose C
13:00-14:30 Session 16B: Networks, Attackers, and Defenders
Location: Lantana B
Interdiction of a Mixed-Integer Linear System
SPEAKER: Bowen Hua

ABSTRACT. A system interdiction problem can be modeled as a bilevel program in which the upper level models interdiction decisions and the lower level models system operation. This paper considers systems whose operation is modeled as a mixed-integer linear program. We first show the existence of an optimal solution to this mixed-integer linear system interdiction problem (MILSIP) under a mild assumption. To solve large-scale instances of MILSIP, we designate the upper-level variables as complicating and decompose MILSIP. Nonconvexity of the lower-level problem results in an exponential number of subproblems. Therefore, instead of solving all subproblems, we propose an exact solution method that iteratively solves a relaxed master problem, a lower-level problem, and the one of the exponentially many subproblems likely to be of most help for convergence. We illustrate the proposed algorithm on a small example, and then demonstrate its computational capabilities on a natural gas pipeline system in which 33 components are interdictable. Optimal solutions are obtained in a few seconds on a personal computer.
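The interdictor-operator interaction described above can be made concrete with a toy brute-force sketch (ours for illustration only, not the paper's decomposition method): the interdictor removes up to a budgeted number of arcs of a tiny two-path network, and the operator then runs the system at minimum cost over the surviving arcs.

```python
from itertools import combinations

# Hypothetical two-path network: the cheap path costs 5, the backup costs 8.
arcs = {("s", "a"): 2, ("a", "t"): 3, ("s", "b"): 4, ("b", "t"): 4}
paths = [[("s", "a"), ("a", "t")], [("s", "b"), ("b", "t")]]

def operating_cost(removed):
    """Lower level: cheapest surviving s-t path (inf if none survives)."""
    costs = [sum(arcs[e] for e in p) for p in paths
             if not any(e in removed for e in p)]
    return min(costs) if costs else float("inf")

# Upper level by enumeration: pick the interdiction that hurts most.
budget = 1
worst = max(operating_cost(set(r)) for r in combinations(arcs, budget))
print(worst)  # 8: removing an arc on the cheap path forces the backup path
```

Real instances replace the path enumeration with a mixed-integer operating model, which is what makes the decomposition in the abstract necessary.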

PLADD: Deterring Attacks on Cyber Systems and Moving Target Defense
SPEAKER: John Siirola

ABSTRACT. We describe a new game, called PLADD (Probabilistic Learning Attacker and Dynamic Defender), designed to represent the interactions of a stealthy attacker and a dynamic defender, and apply PLADD to understanding the effectiveness of moving target defense (MTD). PLADD is an incomplete-information game-theoretic model of the attacker-defender interaction. We start by building upon an existing model called FlipIt, extending it into a scenario in which a probabilistic attacker and defender play for control over a resource. Using a martingale approach, we demonstrate analytically that the defender can control the attacker's payoff and push a rational attacker out of the game by making that payoff sufficiently negative. We also prove analytically that such a defender strategy exists under a broad set of conditions. We discuss the situations in which MTD is beneficial to the defender and those in which it is not, and present an unexpected result: MTD may not be cost effective for a large set of parameters. We compare the analytical solution to a simulation and extend the simulation to cases that cannot be treated analytically.

Optimizing the Recovery of Disrupted Multi-Echelon Assembly Supply Chain Networks

ABSTRACT. We consider the problem of recovering multi-echelon assembly supply chain networks from large-scale disruptive events. Each supplier within this network assembles a component from a series of sub-components received from other suppliers. Our motivating application is defense aircraft manufacturing, whose networks often have single-sourced components; disruptions to suppliers therefore impact the network's ability to produce the final product. We show that scheduling rules applied locally at each supplier can optimize key recovery metrics, including minimizing the maximum tardiness of any order for the final product and minimizing the time to recover from the event (i.e., the time at which there are no longer any late orders). Our approaches are applied to a data set modeling an industry partner's supply chain in order to show the impact of traditional mitigation strategies on the network's ability to recover from disruptions.

13:00-14:30 Session 16C: New Algorithms and Software for Optimization
Location: Plumeria A
Polynomial Norms and Semidefinite Optimization
SPEAKER: Georgina Hall

ABSTRACT. We establish when the d-th root of a multivariate degree-d polynomial produces a norm. In the quadratic case, it is well-known that this is the case if and only if the matrix associated with the quadratic form is positive definite. We present a generalization of this result to higher order polynomials and show that there is a hierarchy of semidefinite programs that can detect all polynomial norms.
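The quadratic base case cited in the abstract is easy to check numerically: sqrt(x^T Q x) defines a norm exactly when the symmetric matrix Q is positive definite. A minimal sketch (function name and tolerance are ours, not the paper's):

```python
import numpy as np

def is_quadratic_norm(Q, tol=1e-9):
    """True when sqrt(x^T Q x) is a norm, i.e. Q (symmetrized) is PD."""
    Q = 0.5 * (Q + Q.T)                              # symmetrize
    return bool(np.min(np.linalg.eigvalsh(Q)) > tol)

print(is_quadratic_norm(np.array([[1.0, 0.0], [0.0, 2.0]])))   # True
print(is_quadratic_norm(np.array([[1.0, 0.0], [0.0, -1.0]])))  # False
```

For degree d > 2 no such eigenvalue test exists, which is why the paper turns to a hierarchy of semidefinite programs.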

SchurIpopt: A Parallel Optimization Package For Structured Nonlinear-programming Problems

ABSTRACT. Interior-point methods have proven to be effective for solving large-scale nonlinear programming problems. The dominant computational steps in an interior-point algorithm are the solution of the KKT system and the computation of NLP functions and derivatives in every interior-point iteration. In this work, we develop SchurIpopt, a parallel extension to Ipopt that solves large-scale, structured nonlinear programming problems efficiently on shared memory and distributed memory parallel architectures.

Our implementation uses a Schur-complement decomposition strategy to exploit the structure of NLP problems arising in multi-scenario and dynamic optimization applications. We achieve high parallel efficiency by parallelizing the solution of the KKT system and related scale-dependent vector-vector and matrix-vector operations that take place within each interior-point iteration. We interface SchurIpopt with PySP, a Python-based software package for modeling stochastic programming problems that is an extension of the open-source AML Pyomo (pyomo.org). Such an interface allows full parallelization of the optimization workflow on distributed architectures, including model generation, solver execution, and solution interrogation, all within a high-level AML environment.
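The Schur-complement idea can be sketched numerically (our toy linear algebra, not SchurIpopt's implementation): a block-bordered KKT-like system with scenario blocks K_i coupled through a small set of common variables x0 is solved by factorizing each K_i independently, which is the step that parallelizes, and coupling the blocks only through the small Schur matrix S.

```python
import numpy as np

# Block-bordered system:
#   [K1      B1] [x1]   [r1]
#   [    K2  B2] [x2] = [r2]
#   [B1' B2' K0] [x0]   [r0]
rng = np.random.default_rng(0)
n, m = 4, 2
K = [A @ A.T + 10.0 * np.eye(n)
     for A in (rng.standard_normal((n, n)) for _ in range(2))]
B = [rng.standard_normal((n, m)) for _ in range(2)]
K0 = 10.0 * np.eye(m)
r = [rng.standard_normal(n) for _ in range(2)]
r0 = rng.standard_normal(m)

# Each block contributes to S independently: S = K0 - sum_i Bi' Ki^{-1} Bi
S, rhs = K0.copy(), r0.copy()
for Ki, Bi, ri in zip(K, B, r):
    S -= Bi.T @ np.linalg.solve(Ki, Bi)
    rhs -= Bi.T @ np.linalg.solve(Ki, ri)

x0 = np.linalg.solve(S, rhs)                       # coupling variables
x = [np.linalg.solve(Ki, ri - Bi @ x0) for Ki, Bi, ri in zip(K, B, r)]
```

In the multi-scenario NLP setting, each K_i is the KKT block of one scenario and m (the number of coupling variables) is small relative to n, so the serial bottleneck S stays cheap.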

Efficient Validation of Basic Solutions via the Roundoff-Error-Free Factorization Framework (WITHDRAWN)

ABSTRACT. This work makes key derivations relating the roundoff-error-free (REF) LU factorization framework to the exact rational arithmetic LU factorization approach used currently by state-of-the-art exact linear and mixed-integer programming solvers to validate basic solutions. Most significantly, it features several computational tests demonstrating that the integer-preserving REF factorization framework solves systems of linear equations considerably faster than the exact rational arithmetic LU factorization approach while requiring half the memory. Moreover, this work develops and analyzes an efficient basic solution validation implementation of Edmonds' Q-Matrices. An additional set of experiments demonstrates that the REF factorization framework remains significantly more efficient than this alternative integer-preserving approach in terms of memory requirements and computational effort. Altogether, the featured contributions demonstrate that the REF factorization framework should be the go-to tool for basic solution validation in exact linear programming.

13:00-14:30 Session 16D: Novel Methods for Optimization Under Uncertainty
Location: Plumeria B
Bi-level Stochastic Approximation

ABSTRACT. We propose a bi-level algorithm to solve stochastic optimization formulations with a certain decomposable structure. Consider, for example, the problem of maximizing total revenue by jointly maximizing unit sales, subject to non-linear market conditions, and minimizing costs, which for fixed sales is a two-stage linear program (2SLP). An outer loop runs stochastic approximation (SA) accounting for the non-linear part. An inner loop solves the 2SLP. The gradient of the objective function of the 2SLP is used in the outer SA, and is obtained using parametric programming. We analyze the convergence of this bi-level SA, and provide experimental evidence of its efficacy on an energy-domain problem.
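The outer-SA / inner-solve pattern can be sketched with stand-in models (all names and the closed-form inner "LP" below are our illustrative assumptions, not the paper's formulation): the outer loop takes noisy gradient steps on expected revenue, while the inner problem is solved exactly and contributes its parametric gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

def inner_cost_and_grad(theta):
    """Inner problem stand-in: C(theta) = min{c.x : x >= theta, x >= 0},
    solved in closed form; its gradient in theta comes from the active
    constraints (envelope theorem), playing the role of the 2SLP gradient."""
    c = np.array([2.0, 1.0])
    x = np.maximum(theta, 0.0)                 # optimal inner solution
    return c @ x, np.where(theta > 0, c, 0.0)

def noisy_revenue_grad(theta):
    """Sampled subgradient of expected revenue E[p . min(theta, D)]."""
    p = np.array([5.0, 3.0])
    demand = rng.exponential(scale=[4.0, 4.0])
    return np.where(theta < demand, p, 0.0)

theta = np.array([1.0, 1.0])
for k in range(2000):
    g_rev = noisy_revenue_grad(theta)          # outer SA direction (noisy)
    _, g_cost = inner_cost_and_grad(theta)     # exact inner gradient
    theta = np.maximum(theta + (1.0 / (k + 10)) * (g_rev - g_cost), 0.0)
```

In the paper's setting the inner closed form is replaced by an actual two-stage LP solve whose gradient is obtained via parametric programming.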

A modified SDDP method for a risk-averse high-dimensional multistage stochastic energy storage optimization problem

ABSTRACT. We propose a stochastic dual dynamic programming (SDDP) method aimed at a risk-averse, high-dimensional multistage stochastic optimization problem in which the underlying stochastic processes are generated using a two-level Markov model. This model accurately replicates both heavy tails and crossing times, which reflect how long a process stays above or below a forecast. To build cuts, our method dynamically samples from the full probability space and handles information processes that are state dependent, employing both a risk-adjusted sampling scheme and a quadratic regularization term designed for long-horizon problems. Traditional SDDP approaches, by contrast, usually consider a sampled version of the true problem and assume inter-temporal independence.

We present computational experiments for a rich application domain, namely, optimizing energy storage and release decisions for a set of batteries scattered across the energy grid. With increased use of renewables and falling cost of storage, we anticipate having to optimize across several hundred batteries. With the fine-grained time scale of battery storage, we also have to optimize over hundreds of time periods. Using carefully calibrated stochastic models of heavy-tailed wind, solar and price processes, we introduce and test a new risk-adjusted sampling scheme to minimize both expected costs as well as different risk measures, including both CVaR and threshold risk.

Dual approximate dynamic programming for multi-stage stochastic unit commitment
SPEAKER: Jim Luedtke

ABSTRACT. We study the multi-stage stochastic unit commitment problem in which commitment and generation decisions can be made and adjusted in each time period. We formulate this problem as a Markov decision process, which is “loosely coupled” in the sense that if the demand constraint is relaxed, the problem decomposes into a separate, low-dimensional, Markov decision process for each generator. We demonstrate how the dual approximate dynamic programming method of Barty, Carpentier, and Girardeau (2010) can be applied to obtain bounds and a policy for this problem, and present numerical results.
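The "loosely coupled" structure can be illustrated with a toy (ours, not the paper's algorithm): once the demand-coupling constraint is priced out with assumed multipliers lambda_t, each generator solves its own tiny commitment DP independently.

```python
import numpy as np

T = 3
lam = np.array([30.0, 50.0, 40.0])   # assumed demand-constraint multipliers
gens = [dict(cap=10, cost=20.0, start=15.0),   # hypothetical generator data
        dict(cap=6,  cost=35.0, start=5.0)]

def best_profit(g):
    """Per-generator DP over commitment state (0 = off, 1 = on)."""
    V = np.zeros(2)                            # value-to-go after horizon
    for t in reversed(range(T)):
        run = max((lam[t] - g["cost"]) * g["cap"], 0.0)  # may idle while on
        off_next, on_next = V[0], V[1]
        V = np.array([max(off_next,                      # stay off
                          -g["start"] + run + on_next),  # start up and run
                      max(run + on_next,                 # stay on
                          off_next)])                    # shut down
    return float(V[0])                         # start from the off state

profits = [best_profit(g) for g in gens]
```

The dual ADP method in the talk iterates on such per-generator solves while updating the multipliers to restore the relaxed demand constraint, yielding bounds and a policy.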

Computing Bilevel Optimization Subject to Uncertainty

ABSTRACT. In this talk, we consider bilevel optimization with uncertainty, i.e., a robust optimization extension of the deterministic bilevel optimization. Formulations, structural properties, and computational methods will be presented. Some real applications will also be demonstrated.

13:00-14:30 Session 16E: Stochastic Optimization Methods
Location: Camellia A
Stochastic Optimization with Decisions Truncated by Random Variables and Its Applications

ABSTRACT. A common technical challenge encountered in many operations management models is that decision variables are truncated by some random variables and the decisions are made before the values of these random variables are realized, leading to non-convex minimization problems due to the truncation. To address this challenge, we develop a powerful transformation technique which converts such a non-convex minimization problem to an equivalent convex minimization problem. The transformation enables us to prove the preservation of some desired structural properties, such as convexity, submodularity and L-natural-convexity, under optimization operations, which are critical for identifying the structures of optimal policies and developing efficient algorithms. We then demonstrate the applications of our approach to several important models in inventory control and revenue management: a multiproduct inventory model with substitution and random supply capacity, and capacity allocation in network revenue management.
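The non-convexity caused by truncation is easy to exhibit numerically (our small example, not the paper's): even for a convex loss f(y) = (y - 2)^2, truncating the decision q by a random capacity U uniform on {1, 3} makes g(q) = E[f(min(q, U))] non-convex in q.

```python
import numpy as np

def g(q, caps=(1.0, 3.0)):
    """Expected convex loss of the truncated decision min(q, U)."""
    return float(np.mean([(min(q, u) - 2.0) ** 2 for u in caps]))

# Midpoint convexity fails at q = 3, where the truncation kinks.
midpoint, chord = g(3.0), (g(2.9) + g(3.1)) / 2
print(midpoint > chord)  # True: the midpoint lies above the chord
```

This is exactly the difficulty the paper's transformation technique removes by converting the problem to an equivalent convex minimization.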

Solving Large-scale Stochastic Programs Using Generalized Value Function

ABSTRACT. This work considers two-stage mixed-integer programs with discretely distributed stochastic right-hand sides and objective functions. We present an equivalent superadditive dual formulation that uses the value functions of both stages. We introduce the generalized value function of a mixed-integer program simultaneously parametrized by its objective function and right-hand side. We describe fundamental properties of the generalized value function and three algorithms to calculate it. Then, we present a global branch-and-bound algorithm for solving one-stage pure integer and one-stage mixed-integer programs. We conclude with computational results.

Dual Kernel Embedding for Stochastic Optimization and Reinforcement Learning

ABSTRACT. While reinforcement learning has received much attention in recent years for addressing Markov decision problems, most existing reinforcement learning algorithms are either purely heuristic or inefficient. A key challenge boils down to the difficulty of learning from conditional distributions with limited samples and of minimizing objectives involving nested expectations. We propose a novel stochastic-approximation-based method that benefits from a fresh application of Fenchel duality and kernel embedding techniques. To the best of our knowledge, this is the first algorithm that needs only one sample at a time from the conditional distribution and comes with provable theoretical guarantees. Finally, the proposed algorithm achieves state-of-the-art empirical performance on several benchmark datasets compared to existing algorithms such as gradient-TD2 and residual gradient.

A Polyhedral Approach For Two-Sided Chance-Constrained Programs

ABSTRACT. We study two-sided chance-constrained programs with a finite probability space. We reformulate this class of problems as a mixed-integer program. We study a polyhedral substructure of the reformulation and propose valid inequalities that define the convex hull of solutions. Finally, we report our computational experience with the proposed inequalities.
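For context, the generic finite-scenario reformulation of a two-sided chance constraint as a mixed-integer program (the standard big-M form; the talk's contribution is a stronger polyhedral description of this substructure) can be sketched as follows, where enforcing P(l(\xi) \le a(\xi)^\top x \le u(\xi)) \ge 1 - \epsilon over scenarios i = 1, \dots, N with probabilities p_i gives:

```latex
\begin{aligned}
& l_i - M z_i \;\le\; a_i^\top x \;\le\; u_i + M z_i, && i = 1, \dots, N,\\
& \sum_{i=1}^{N} p_i z_i \;\le\; \epsilon, \qquad z_i \in \{0,1\}, && i = 1, \dots, N,
\end{aligned}
```

with z_i = 1 allowing scenario i to be violated; valid inequalities tighten the weak big-M relaxation of this formulation.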

13:00-14:30 Session 16F: Routing and Delivery in Networks
Location: Camellia B
Quality in Competitive Fresh Produce Supply Chains with Application to Farmers' Markets
SPEAKER: Deniz Besik

ABSTRACT. Fresh produce supply chains have special characteristics, notably that the quality of the product (fruit or vegetable) deteriorates continuously over time, even under the best conditions. In this paper, we first present explicit formulae for fresh produce quality deterioration based on chemistry and temperature. We then focus on farmers' markets, whose popularity has been growing internationally due to consumers' greater awareness of and interest in product quality, their emphasis on health, and their interest in reducing the environmental impacts associated with supply chains. Farmers' markets, as examples of direct-to-consumer channels and shorter supply chains, are then studied in the framework of game theory in both uncapacitated and capacitated versions. Illustrative examples in the case of peaches, and various scenarios, including production disruptions, in a case study of apples in Massachusetts, provide quantitative evidence of the applicability of our supply chain network approach.

Non-Aggressive Adaptive Routing in Traffic

ABSTRACT. Routing a person through a traffic network presents a dilemma: choose either a fixed route, which is easier to navigate, or an (aggressive) adaptive route, which minimizes travel time by adjusting to traffic conditions. We investigate methods for non-aggressive adaptive routing, a middle ground that seeks the best of both extremes, i.e., adaptive routes restricted in the number of route shifts allowed at critical junctures, and propose tractable algorithms for solving different routing models.

Simultaneous Constraint Relaxation in Genetic Algorithms for Capacitated Facility Location Problems with Delivery Time Constraint (WITHDRAWN)
SPEAKER: Ozgur Araz

ABSTRACT. This study presents algorithms for optimizing facility location and resource allocation decisions with delivery time constraints. Decision makers often face infeasibility due to a lack of time or resources for designing robust service systems, e.g., in public health service systems. It is therefore challenging to obtain optimal solutions under multiple system constraints, especially when the problem is already NP-hard. We present a genetic algorithm (GA)-based heuristic and a series of modifications to the algorithm that give decision makers flexibility to relax constraints individually and simultaneously, depending on the objectives of the designed system. Computational experiments are presented for a case study of public health emergency response with a p-median formulation of the location-allocation problem and queuing approximations.

13:00-14:30 Session 16G: Applications of Optimization: Health, Energy, and Economics
Location: Verbana
Global Solution Strategy for AC Network-Constrained Unit Commitment (NCUC) Problems
SPEAKER: Jianfeng Liu

ABSTRACT. The unit commitment (UC) problem plays an important role in power systems operations. As an extension of the traditional UC model, a network-constrained unit commitment (NCUC) problem considers constraints arising from the underlying transmission network. For real-world applications, unfortunately, the resulting NCUC problem can become extremely large and challenging to solve efficiently. Consequently, linear approximations of the NCUC problem are typically solved in practice. However, these approximations may yield unreliable and even infeasible generating schedules, since they fail to explicitly consider the non-linear features of the actual alternating current (AC) transmission system.

In this paper, we propose a global solution framework for the NCUC problem incorporating a nonlinear AC transmission model, which is formulated as a mixed-integer quadratically constrained quadratic programming (MIQCQP) problem. To solve this non-convex NCUC problem to global optimality, we present a multi-tree optimization algorithm that iterates between a mixed-integer lower-bounding master problem and a nonlinear programming (NLP) upper-bounding subproblem. In particular, the master problem is a convex relaxation of the original NCUC problem. Inspired by the second-order cone (SOC) relaxation, three candidate convex relaxations are presented, which are formulated as mixed-integer second-order cone programming (MISOCP) and mixed-integer linear programming (MILP) problems. Additionally, to complete our algorithm, we propose a sub-algorithm to solve the non-convex NLP subproblem, a multi-period AC optimal power flow (OPF) problem, to global optimality. Numerical results on four benchmark problems show the encouraging performance of this global solution framework.

The $k$-Variable Adjudication Method ($k$-VAM) for Reducing Regression-Model Dimensionality and Its Application to State-Level Economic-Growth-Factor Identification
SPEAKER: Richard Barr

ABSTRACT. In applications of least-squares and $L_1$ regression, a common regularization goal is a limit on the number of variables (from a larger set of possible independent factors) to be included in the resulting model. Such a constraint can be employed to overcome issues of over-fitting and underdetermined systems of equations. Principal components analysis, ridge regression, the least absolute shrinkage and selection operator (Lasso), and other variable-selection and regularization methods have attempted to address these issues. The $k$-Variable Adjudication Method, or $k$-VAM, is a non-parametric approach that directly limits the cardinality of a regression model's included independent-variable set through standard optimization techniques. The model formulation is flexible and can be readily extended to introduce other desired regularizations.

The value of $k$-VAM is demonstrated in a study to identify the key drivers of economic growth at the state level from a new database of over 50 years of annual state-level observations of 486 growth-related variables. The results provide policymakers new insights into the underpinnings of Gross State Product (both short-term and long-term) in the two largest states, California and Texas, and at the U.S. national level.

Using Density to Identify Fixations in Gaze Data: Optimization-Based Formulations and Algorithms

ABSTRACT. Eye tracking is an increasingly common technology with a variety of practical uses. Eye-tracking gaze data can be categorized into two main events: fixations, which represent attention, and saccades, which occur between fixation events. We propose a novel way to identify fixations based on their density, which accounts for both the fixation duration and its inter-point proximity. We develop two mixed-integer nonlinear programming formulations and corresponding algorithms to recover the densest fixations in a data set. Our approach is parameterized by a single value that controls the desired degree of density. We conclude by discussing computational results and insights on real data sets.
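The density idea, duration combined with inter-point proximity, can be illustrated with a toy heuristic (our simple score and search, not the paper's MINLP formulations): score each contiguous gaze window by duration divided by its mean pairwise spread and return the densest one.

```python
import numpy as np

def densest_window(points, times, min_dur=0.1):
    """Return the (i, j) index window maximizing duration / (1 + spread)."""
    best, best_score = None, -np.inf
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dur = times[j] - times[i]
            if dur < min_dur:               # too short to be a fixation
                continue
            seg = points[i:j + 1]
            diffs = seg[:, None, :] - seg[None, :, :]
            spread = np.sqrt((diffs ** 2).sum(-1)).mean()
            score = dur / (1.0 + spread)
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score

# A tight 0.3 s cluster (a fixation) followed by a fast saccade away
pts = np.array([[0.0, 0.0], [0.01, 0.0], [0.0, 0.01], [0.01, 0.01],
                [5.0, 5.0], [10.0, 10.0]])
t = np.array([0.0, 0.1, 0.2, 0.3, 0.35, 0.4])
print(densest_window(pts, t)[0])  # (0, 3): the cluster is identified
```

The paper's optimization-based formulations recover such dense windows exactly and allow the density threshold to be tuned through a single parameter.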