ICS2017: 2017 INFORMS COMPUTING SOCIETY CONFERENCE
PROGRAM FOR MONDAY, JANUARY 16TH

08:30-10:00 Session 9A: Models/Analytics for Decision Making in Healthcare
Location: Lantana A
08:30
Coordinated Appointment Scheduling for an Integrated Practice Unit

ABSTRACT. In this research, we develop a coordinated approach to patient appointment scheduling that enables a patient to receive multiple services on a single visit. The approach is compared to heuristics used in practice. A case study in pre-operative care involving the integration of Anesthesiology and Internal Medicine is used to motivate and illustrate the results.

09:00
Jumping the Line, Charitably: Analysis and Remedy of the Donor-Priority Rule
SPEAKER: Ronghuo Zheng

ABSTRACT. In the United States, the growth of organ transplant waiting lists significantly outpaces the supply of donated cadaveric organs, stimulating discussions of a myriad of initiatives aiming to boost the rate of registered organ donors. Among these initiatives, the donor-priority rule grants a registered organ donor priority in receiving a transplant should he or she need one. In this paper, we adopt a strategic queueing theoretic framework and quality-adjusted life expectancy (QALE) criteria to model individuals' decisions to join the donor registry, which entails a tradeoff between the benefit from potential organ transplants and the cost of donating. Characterizing the equilibrium before and after introducing the donor priority rule, we show the following: (1) assuming homogeneity in population health, introducing the donor-priority rule always leads to improved social welfare, as is consistent with the literature; (2) after incorporating heterogeneity in health status, we show that in contrast to the literature, social welfare can decline due to the donor-priority rule, because of unbalanced incentives associated with different types of individuals. We also propose a simple freeze-period mechanism, and prove that in conjunction with the donor-priority rule, it can increase the supply of donated organs without compromising the average quality of the donor pool.

09:30
Predictive Models For Making Patient Screening Decisions

ABSTRACT. A critical dilemma that clinicians face is when and how often to screen patients who may suffer from a disease. The stakes are heightened in the case of chronic diseases that impose a substantial cost burden. We investigate the use of predictive modeling to develop optimal screening conditions and assist with clinical decision-making. We use electronic health data from a major U.S. hospital and apply our models in the context of screening patients for type-2 diabetes, one of the most prevalent diseases in the U.S. and worldwide.

08:30-10:00 Session 9B: Integer Programming

Integer Programming - Healthcare Applications

Location: Lantana B
08:30
Using Optimization to Define Unbiased Treatment Effect Estimators in Observational Studies

ABSTRACT. Matching is used to estimate treatment effects in observational studies in the social sciences, public health, and medicine. One mechanism for overcoming the restrictiveness of exact matching is to relax it to a requirement of balance on the covariate distributions for the treatment and control groups. The Balance Optimization Subset Selection (BOSS) model is introduced to identify a control group featuring optimal covariate balance. This presentation discusses conditions under which the BOSS model yields unbiased estimators for the treatment effect. These conditions are a function of the covariate-treatment response relationship, which in turn impacts the type of objective function that can be used for the BOSS model.
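
As a hedged illustration (not the authors' exact formulation), a subset-selection model in the spirit of BOSS picks a control group S from the control pool C so that its covariates balance those of the treatment group T; the toy objective below balances first moments only, whereas the actual BOSS model balances covariate distributions (e.g., counts in covariate bins), which is where the choice of objective function discussed in the talk enters:

```latex
\[
\min_{S \subseteq C,\; |S| = |T|} \;\; \sum_{k=1}^{K}
\left|\; \frac{1}{|S|}\sum_{i \in S} x_{ik} \;-\; \frac{1}{|T|}\sum_{j \in T} x_{jk} \;\right|
\]
```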

09:00
A swapping heuristic for daily nurse scheduling in operating suites
SPEAKER: Gino Lim

ABSTRACT. This paper presents an efficient solution framework for daily scheduling of nurses in operating suites. This framework consists of two core optimization models that are necessary for scheduling OR nurses in the clinic. The first model addresses the multi-objective optimization problem of assigning nurses to upcoming surgery cases based on their specialties and competency levels. The second model is designed to generate lunch break assignments for the nurses once their caseloads are determined. Because the multi-objective model is too large to solve using commercial software, we developed both a column generation algorithm and a two-phase swapping heuristic to find feasible assignments quickly. For both approaches, initial solutions are obtained with a restricted model. Experiments were conducted to determine the value of the models and the performance of the algorithms using real data provided by MD Anderson Cancer Center in Houston, Texas.

09:30
4+1 Templates for Annual Block Scheduling
SPEAKER: Jonathan Bard

ABSTRACT. Internal Medicine residency programs have traditionally been structured around monthly blocks where a different rotation is assigned to each block over the year. A subset of those rotations carries the requirement of one or two half-day sessions of clinic duty per week. In the last several years, a growing number of Internal Medicine residency programs have moved away from this traditional structure and have adopted a “4+1” template, where the resident spends 4 weeks on a rotation without clinic duty and then 1 week mostly in clinic. This talk addresses the 4+1 annual block scheduling problem as adopted by the Department of Medicine at the University of Texas Health Science Center in San Antonio (UTHSC-SA). We develop a series of optimization models that can be used to construct individual block schedules for the academic year and to assign clinic sessions to the residents during their ambulatory week. The objective is to balance the workload during the half-day clinic sessions and to ensure that each resident receives roughly the same training experiences over their program. Once the blocks are known, a second optimization problem is solved to determine individual clinic session assignments.

08:30-10:00 Session 9C: Power System Planning and Operations

Energy Session

Location: Plumeria A
08:30
Computing Multiple Local Optima for OPF Problems using an Elliptical Tracing Algorithm

ABSTRACT. The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. OPF problems are generally non-convex and may have multiple local optima. This presentation describes a continuation tracing method that computes multiple local optima for OPF problems. The Fritz John necessary conditions for local optimality are written in an elliptical form that ensures boundedness of all continuation traces. Starting from one solution to the Fritz John conditions, a branch tracing method finds multiple local optima by moving along one-dimensional manifolds formed by individually varying the right hand sides of each high-dimensional ellipse. The effectiveness of the proposed tracing method is demonstrated by finding all local optima (and thus the global solution) for two small test cases with known feasible spaces.

08:52
Applying a New Scenario Decomposition Algorithm to Solve Stochastic Unit Commitment Problems
SPEAKER: Kevin Ryan

ABSTRACT. A recently proposed scenario decomposition algorithm (Ahmed 13) for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions discovered by solving scenario subproblems. This technique has been shown to be very promising for smaller stochastic 0-1 problems from the SIPLIB testbed. We develop techniques for applying an asynchronous parallel implementation of this algorithm to difficult stochastic unit commitment problems with many first stage variables and a moderate number of scenarios. Computational results from large scale problems are presented.

09:14
Improving Power Systems Resilience with Stochastic Optimization
SPEAKER: Michael Bynum

ABSTRACT. Electricity is one of the three components of the energy sector, one of sixteen critical infrastructure sectors defined by U.S. Presidential Policy Directive 21. Resilience of the electricity grid is vital for both the economy and public safety and health. Improving the resilience of the system to a wide range of scenarios (i.e., potential future events) is a challenging problem due to the inherent uncertainty involved. Additionally, nominal operation of the system, where the primary manipulated variables are generator setpoints, is generally not enough to make the system resilient to extreme events that disable many system components. Other operational and planning decisions, such as transmission switching or physical protection of critical components, must be considered, adding even more complexity to the problem through the introduction of binary variables.

In this work, we present two-stage stochastic programming formulations for minimizing the consequence of extreme weather events to the electricity grid. We utilize generator dispatch, transmission switching, and physical protection of transmission lines as first stage decisions for resilience improvement, and we compare the effectiveness of each, along with combinations of the three. The resulting formulations are large-scale stochastic mixed-integer linear programs. The optimization problem is formulated with Pyomo, a flexible, Python-based optimization modeling language. The analysis is performed on a real system with thousands of buses and transmission lines, and the scenarios used for the stochastic programming problem are generated with historical outage data.
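
For readers unfamiliar with the modeling setup, the sketch below shows how a two-stage stochastic program of this general flavor (first-stage hardening decisions, scenario-dependent recourse with load shedding) can be written in Pyomo as a deterministic equivalent. It is a minimal toy, not the authors' formulation: the network, costs, scenarios, and damage sets are hypothetical placeholders, and real models would include power flow physics and transmission switching.

```python
# Toy two-stage stochastic program in Pyomo (deterministic equivalent).
# First stage: which lines to protect. Second stage: per-scenario flows and load shed.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Binary, NonNegativeReals, minimize)

lines = ["l1", "l2", "l3"]                       # hypothetical transmission lines
scens = ["s1", "s2"]                             # hypothetical weather scenarios
prob = {"s1": 0.5, "s2": 0.5}                    # scenario probabilities
protect_cost = {"l1": 10.0, "l2": 8.0, "l3": 6.0}
shed_cost = 100.0                                # penalty per MW of unserved load
damaged = {("s1", "l2"), ("s2", "l3")}           # lines lost in each scenario
demand = 50.0
cap = {"l1": 30.0, "l2": 30.0, "l3": 30.0}

m = ConcreteModel()
m.protect = Var(lines, domain=Binary)                 # first-stage decision
m.flow = Var(scens, lines, domain=NonNegativeReals)   # second-stage delivered power
m.shed = Var(scens, domain=NonNegativeReals)          # second-stage load shed

def flow_limit(m, s, l):
    # an unprotected damaged line carries no flow; protection preserves its capacity
    if (s, l) in damaged:
        return m.flow[s, l] <= cap[l] * m.protect[l]
    return m.flow[s, l] <= cap[l]
m.flow_limit = Constraint(scens, lines, rule=flow_limit)

def meet_demand(m, s):
    return sum(m.flow[s, l] for l in lines) + m.shed[s] >= demand
m.meet_demand = Constraint(scens, rule=meet_demand)

m.cost = Objective(
    expr=sum(protect_cost[l] * m.protect[l] for l in lines)
    + sum(prob[s] * shed_cost * m.shed[s] for s in scens),
    sense=minimize)

# pyomo.environ.SolverFactory("cbc").solve(m)  # solve with any MILP solver Pyomo supports
```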

09:37
Long-term Microgrid Design in the Context of Changing Policy Objectives in Power System Planning

ABSTRACT. Integrated resource planning faces enormous challenges, including future load growth under uncertainty, constraints on investment, rural electrification, the type and availability of generating units and transmission lines, the need to consolidate microgrids in remote areas as a prerequisite for later interconnecting these regions with the main power grid, and environmental emissions. A further question is how to achieve reliability and emission levels that guarantee continuous electrification at optimal cost. These obstacles make it difficult for power system planners and concerned agencies to plan electric power facilities and to set sound, appropriate policies for constructing new advanced power technologies, such as microgrids, in coordination with adding new generating units or reinforcing transmission networks. The proposed policies can be established by evaluating microgrid performance. This framework highlights the most prominent problems and challenges facing electric power systems in the smart grid and influencing the decision-making process, which must rest on four major factors: rural electrification, environmental emissions, reliability, and cost. Case studies illustrate the application of the proposed model and policy to coordinated power planning and microgrid design.

08:30-10:00 Session 9D: Game Theory in Security and Privacy

This session will explore the use of game theoretic modeling in addressing problems in security and privacy, focusing on computational optimization methods for designing security and privacy policies.

Location: Plumeria B
08:30
Game-Theoretic Models for Allocating Honeypots in Cybersecurity

ABSTRACT. Many decision-making problems in cybersecurity involve allocating limited resources to defend against skilled, motivated attackers. Recent research on security games has used computational game theory to address similar resource allocation problems in physical domains, such as infrastructure protection and environmental crime. I will describe models and algorithms we have developed that extend the ideas of security games into the domain of cybersecurity and highlight some of the new challenges we have encountered. I will focus particularly on game-theoretic models for optimizing the use of honeypots in network security. I will describe basic game-theoretic models for allocating limited honeypot resources to detect attackers, and then describe extensions of these models that consider more detailed models of attacker planning based on attack graphs. We show that these models can be solved for realistic computer networks using new algorithms, and that the game-theoretic models produce improved strategies for allocating honeypot resources against sophisticated attackers.
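
As a toy illustration of the game-theoretic flavor of such models (not the speaker's formulation), the script below computes a defender's optimal mixed strategy for placing a single honeypot in a small zero-sum allocation game by solving the standard maximin linear program with SciPy; the payoff matrix is hypothetical.

```python
# Maximin LP for a zero-sum honeypot-placement game: maximize the game value v
# subject to x^T A_j >= v for every attacker pure strategy j, with x a probability vector.
import numpy as np
from scipy.optimize import linprog

# payoff[i, j] = defender utility if the honeypot is on host i and the attacker hits host j
payoff = np.array([[ 1.0, -0.5, -0.5],
                   [-0.5,  1.0, -0.5],
                   [-0.5, -0.5,  2.0]])
n_def, n_att = payoff.shape

c = np.zeros(n_def + 1)
c[-1] = -1.0                                               # linprog minimizes, so minimize -v
A_ub = np.hstack([-payoff.T, np.ones((n_att, 1))])         # v - x^T A_j <= 0 for each j
b_ub = np.zeros(n_att)
A_eq = np.hstack([np.ones((1, n_def)), np.zeros((1, 1))])  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * n_def + [(None, None)]              # x >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("defender mixed strategy:", np.round(res.x[:n_def], 3))
print("game value:", round(res.x[-1], 3))
```

Realistic honeypot games are general-sum and far larger, but linear programs of this kind are a basic building block of the solution algorithms.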

08:52
Games of Timing for Security
SPEAKER: Aron Laszka

ABSTRACT. Attackers of computing resources increasingly aim to keep security compromises hidden from defenders in order to extract more value over a longer period of time. As a result, defenders may have to mitigate potential targeted and non-targeted compromises, such as advanced persistent threats and recruitment into botnets, under a regime of incomplete information. Due to the defenders’ lack of information and the strategic nature of targeted attacks, the optimal timing of actions to audit, clean, or otherwise mitigate stealthy attacks poses a challenging problem. This timing problem is modeled most naturally as a security game between a defender and a strategic attacker, who have to choose when to mitigate and when to attack, respectively. We show how to find best-response strategies for both players, and characterize the equilibria of the game. Our work has practical implications for the definition of security policies, in particular, for password and key renewal schedules.

09:14
Adversarial Data Mining: A Game Theoretic Approach

ABSTRACT. Data mining techniques such as classification have wide applications in real-world problems. However, traditional data mining techniques assume that the current data and the future data share the same properties. Unfortunately, this assumption no longer holds in an adversarial environment where the data miner faces malicious, active adversaries (e.g., spam filtering, intrusion detection, advanced persistent threat detection, etc.). Unlike in a static environment, the attack instances are controlled by malicious adversaries who actively transform the objects under their control to avoid detection. For example, in spam filtering, spammers insert “good” words to make the spam emails resemble legitimate emails and pass spam filters. The performance of traditional data mining techniques deteriorates quickly when facing such active adversaries. To address some of these challenges, we developed resilient and robust data mining techniques to distinguish the legitimate objects from the attack objects in an adversarial environment.

This presentation summarizes our previous work in adversarial data mining. Basically, we model the adversarial data mining applications as a two player Stackelberg game, where the data miner and the adversary make sequential moves, and each player aims to maximize his/her own utility. Based on a data mining algorithm’s equilibrium performance, we discuss how cost sensitive attribute selection could be done to create robust models. Furthermore, we discuss how game theory inspired ideas could be used to modify existing support vector machine learning techniques to create adversarial attack resistant classification models. Finally, we discuss the extension of our framework for dealing with multiple adversaries.

09:37
Optimizing Operational Decisions in Adversarial Classification

ABSTRACT. When learning, such as classification, is used in adversarial settings, such as intrusion detection, intelligent adversaries will attempt to evade the resulting policies. The literature on adversarial machine learning aims to develop learning algorithms which are robust to such adversarial evasion, but exhibits two significant limitations: a) failure to account for operational constraints and b) a restriction that decisions are deterministic. To overcome these limitations, we introduce a conceptual separation between learning, used to infer attacker preferences, and operational decisions, which account for adversarial evasion, enforce operational constraints, and naturally admit randomization. Our approach gives rise to an intractably large linear program. To overcome scalability limitations, we introduce a novel method for estimating a compact parity basis representation for the operational decision function. Additionally, we develop an iterative constraint generation approach which embeds the adversary’s best-response calculation, to arrive at a scalable algorithm for computing near-optimal randomized operational decisions. Extensive experiments demonstrate the efficacy of our approach.

08:30-10:00 Session 9E: Models for disease screening and treatment

Models for studying screening policies for breast cancer, screening behavior in colorectal cancer, and treatment prioritization for Hepatitis C

Location: Camellia A
08:30
Characterization of Colorectal Cancer Screening Choice and Behavior Trajectories

ABSTRACT. Simulation models are popular tools for predicting health outcomes and evaluating health policy interventions because they offer flexibility to capture complex population and disease dynamics. However, even perfect models of disease progression and screening accuracy are limited when simplifying assumptions are made about patient screening behavior and compliance. In cancer screening, these patient behavior patterns heavily impact outcomes and policy efficacy. We investigate this in the context of colorectal cancer screening which can be accomplished through multiple modalities, each with different recommended screening intervals. We use censored insurance claims data from Oregon to explore methods of modeling screening behavior trajectories and evaluate their impact on model outcomes.

09:00
Prioritizing Hepatitis C Treatment in U.S. Prisons
SPEAKER: Turgay Ayer

ABSTRACT. About one out of six inmates in the U.S. is infected with hepatitis C virus (HCV). The high prevalence of HCV in prisons offers a unique opportunity to control the HCV epidemic. The newest HCV treatment drugs are very effective but also outrageously expensive, which precludes universal treatment in prisons. In this study, we propose a restless bandit modeling framework to support hepatitis C treatment prioritization decisions in U.S. prisons. We first prove indexability for our problem and derive several structural properties of the well-known Whittle's index. Then, we derive a closed-form expression of the Whittle's index for patients with advanced liver disease. From the interpretation of the closed-form expression, we anticipate that the performance of the Whittle's index would degrade as the treatment capacity increases; to address this limitation, we propose a capacity-adjusted closed-form index policy. Using a detailed, realistic agent-based simulation model, we show that our proposed policy can significantly improve overall health outcomes compared with current practice and other benchmark policies. Our results also shed light on controversial issues in hepatitis C prioritization and have important policy implications: 1) considering remaining sentence length and injection drug use (IDU) status besides liver health state in prioritization decisions can lead to a significant performance improvement; 2) when the linkage-to-care rate outside prison is small while treatment capacity in the prison system is relatively large, patients with shorter remaining sentence lengths should be prioritized; and 3) for patients with advanced liver disease, IDUs should not be prioritized unless their reinfection is very well controlled.
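
For context, the Whittle index of a restless-bandit arm (here, a patient) in a given health state is the smallest passivity subsidy that makes not treating the patient as attractive as treating them; patients are then prioritized by index value up to the treatment capacity:

```latex
\[
W(s) \;=\; \inf\Bigl\{\, \lambda \in \mathbb{R} \;:\;
\text{the passive action is optimal in state } s
\text{ for the single-arm problem with subsidy } \lambda \text{ for passivity} \,\Bigr\}.
\]
```

Indexability, which the authors establish first, means the set of states in which the passive action is optimal grows monotonically as the subsidy increases, so this index is well defined.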

09:30
The role of supplemental screening and breast density information in breast cancer detection

ABSTRACT. Mammography screening is the gold standard for breast cancer screening in the US, but it is known to be less accurate for women with dense breasts. Supplemental screening methods such as Magnetic Resonance Imaging (MRI) and ultrasound have been recently introduced to improve detection accuracy. We incorporate the breast density information using a partially observable Markov decision process model, and study the impact of supplemental tests in detecting breast cancer.
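
For reference, a model of this kind maintains a belief (a probability distribution over the unobserved health states), which is updated by Bayes' rule after each screening action a and observation o (e.g., a mammogram or supplemental test result):

```latex
\[
b'(s') \;=\;
\frac{O(o \mid s', a)\, \sum_{s} P(s' \mid s, a)\, b(s)}
     {\sum_{\tilde{s}} O(o \mid \tilde{s}, a)\, \sum_{s} P(\tilde{s} \mid s, a)\, b(s)} .
\]
```

Here the transition and observation probabilities would depend on breast density, which is one natural place for the density information to enter the model.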

08:30-10:00 Session 9F: Stochastic Optimization for Healthcare
Location: Camellia B
08:30
Adaptive Risk-based Array Pooling in Public Health Screening

ABSTRACT. Pooled (group) testing has seen many applications, especially in the context of public health screening, donated blood screening, and genetics. We consider a special form of pooled testing called “array-based testing,” which takes advantage of overlapping pools. We propose an adaptive and risk-based array pooling scheme that considers important test and population characteristics, including imperfect tests, dilution effect, and heterogeneity (in terms of different risk factors) in the population, and determine the structure of the optimal testing design and optimal assignment of the heterogeneous subjects to the pools. Our case study indicates that the proposed optimization-based model leads to a substantial improvement over current practices.
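
To fix ideas, ignoring the dilution effect that the authors explicitly model, a single pool of n independent subjects, each positive with probability p, tests positive with probability

```latex
\[
\Pr(\text{pool tests positive}) \;=\; S_{e}\,\bigl(1 - (1-p)^{n}\bigr) \;+\; (1 - S_{p})\,(1-p)^{n},
\]
```

where S_e and S_p denote the test's sensitivity and specificity. Array-based testing arranges subjects in a grid and pools each row and each column, so every subject is covered by two overlapping pools; the optimization in the talk decides how to structure the design and assign heterogeneous-risk subjects to its cells.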

09:00
Transparent Markov Modeling for Medical Decision Making
SPEAKER: Gordon Hazen

ABSTRACT. Markov cohort models have become widespread as a tool for comparative effectiveness and cost-effectiveness in medicine. Although model transparency is a recognized goal in publication guidelines, complete model specification is uncommon in published applications. StoTree is a newly available Excel add-in for finite-state continuous-time Markov modeling that allows the user to specify a model in a transparent decomposed form, under which model components can be separately formulated and linked. Decomposition of this type allows the potential re-use of model components in multiple applications. We review properties of StoTree, and illustrate its use in constructing a complex model of heart monitoring following cardiac transplant.

09:30
A step-wise increase information approach for scheduling appointments (WITHDRAWN)

ABSTRACT. Scheduling appointments is a complicated process with many aspects to consider. The arrival of patients changes dynamically over time, and their treatment plans (service times) are affected by various types of uncertainty, for example, medical need and patient preferences. The inherent uncertainty of patient treatment needs affects the utilization of clinic resources, which can cause long working shifts for providers or undesired under-utilization of resources such as operating rooms. In this work, an optimal static policy aiming to reduce the day-to-day long-run scheduling load is constructed, and a maximum likelihood principle is applied to define a dynamic scheduling policy. The performance of this approach is analyzed in a scheduling process for an elective surgical practice at Mayo Clinic.

08:30-10:00 Session 9G: Warm Starts and Tractable Reformulations
Location: Verbana
08:30
Multi Information Source Optimization and Warm Starts

ABSTRACT. In the multi-information source optimization problem we study complex optimization tasks arising for instance in engineering or the natural sciences, where our goal is to optimize a design under an expensive-to-evaluate black-box function. In order to assess the objective value of some design, we also have access to a variety of information sources that provide cheaper approximations of varying accuracy, e.g., numerical simulations. These information sources are subject to model discrepancy, i.e. their internal model deviates inherently from reality. Note that our notion of model discrepancy goes considerably beyond typical noise that is common in multi-fidelity optimization: in our scenario, information sources can be biased and are not required to form a hierarchy. We present a novel algorithm that is based on a rigorous mathematical treatment of the uncertainties arising from the model discrepancies. Its optimization decisions rely on a stringent Bayesian value of information analysis that trades off the predicted benefit and its cost, and in particular efficiently exploits correlations among information sources for a higher information gain per sample. We provide experimental results demonstrating that our method consistently outperforms other state-of-the-art techniques: it finds designs of considerably higher objective value and additionally inflicts less cost in the exploration process. We further show how these ideas can be leveraged to solve sequences of related Bayesian optimization problems considerably faster.

08:52
An exact optimization framework based on an improved problem reformulation for designing optimal districts in the recycling of electronic goods

ABSTRACT. The districting decision-making problem addressed in this work is motivated by a real-world case arising in the recollection and recycling of waste electric and electronic equipment (WEEE) in Europe. The WEEE Directive establishes that each company that sells electronic products in a European country has the obligation to recollect and recycle an amount of returned items proportional to the market share of the company. The decision-making process involves assigning recollection units to companies subject to some planning requirements. In addition, the core of the law dictates that a regional monopoly by any company has to be avoided, that is, all basic areas allocated to the same corporation should be as geographically dispersed as possible. An integer programming modeling framework for this hard combinatorial optimization problem is presented. Several algorithmic strategies and model enhancements that effectively exploit the underlying mathematical structure of the problem are developed. In particular, a dual bounding scheme and an improved problem reformulation based on a coverage location problem using significantly fewer binary variables are proposed. An exact optimization algorithm based on the problem reformulation is developed. Empirical evidence over a wide set of problem instances illustrates the usefulness and positive impact of the proposed strategies, resulting in dramatic speed-ups in solution times when compared to existing approaches and commercial methods.

09:15
The (not so) Trivial Lifting in Two Dimensions

ABSTRACT. When generating cutting-planes for mixed-integer programs from multiple rows of the simplex tableau, the usual approach has been to relax the integrality of the non-basic variables, compute an intersection cut, then strengthen the cut coefficients corresponding to integral non-basic variables using the so-called trivial lifting procedure. Although of polynomial-time complexity in theory, this lifting procedure can be very expensive in practice. For the case of two-row relaxations, we present a practical algorithm that computes trivial lifting coefficients in constant time, for arbitrary maximal lattice-free sets. Computational experiments confirm that the algorithm works very well in practice.
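
For reference, if B is a maximal lattice-free set in the plane containing the fractional point f in its interior and ψ is the gauge of B − f, the intersection cut assigns coefficient ψ(r) to a continuous nonbasic column r, and the trivial lifting of an integer nonbasic column is

```latex
\[
\pi(r) \;=\; \min_{w \in \mathbb{Z}^{2}} \psi(r + w).
\]
```

The contribution described in the talk is evaluating this minimization in constant time for arbitrary maximal lattice-free sets, rather than enumerating a potentially large set of candidate translates w.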

09:38
Warmstarting of Primal-Dual Interior Point Methods (IPMs) for Mixed Integer Second Order Cone Optimization (MISOCO)

ABSTRACT. Warmstarting of IPMs has been an active research topic for at least three decades. The need for efficient warmstart techniques has increased significantly with the recent development of conic and cylindrical cuts for MISOCO. In this talk, we review recent work and introduce new concepts to warmstart IPMs for MISOCO in the novel branch and conic-cut framework.

10:00-10:30 Coffee Break
10:30-11:30 Session 10: Proximal and Temporal Difference Methods: A Bridge Between Numerical Convex Analysis and Approximate Dynamic Programming

Plenary Talk by Prof. Dimitri Bertsekas

Abstract: We will review connections between two currently very active research fields, focusing primarily on large linear fixed point problems. We show that, under certain assumptions, there is a close relation between two types of methods: 1) Proximal algorithms, which are prominent in numerical convex analysis and optimization, and 2) Multistep methods of the temporal difference type such as TD(lambda), LSTD(lambda), and LSPE(lambda), which are central in simulation-based approximate dynamic programming, notable for its recent successes with programs that play games, such as Go, Backgammon, and others, at impressive and sometimes above human level. As an application of this connection, we provide a new and simple way to accelerate the standard proximal algorithm by extrapolation towards the multistep iteration, which generically has a faster convergence rate. Moreover, we use the connection with multistep methods to integrate into the proximal algorithmic context several new ideas that have emerged in the approximate dynamic programming context. In particular, we consider algorithms that project each proximal iterate onto the subspace spanned by a small number of basis functions, using low-dimensional calculations and simulation, and we discuss various algorithmic options from approximate dynamic programming.
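
For readers approaching the bridge from either side, the two objects being connected are, schematically, the multistep mapping underlying TD(lambda) for a linear fixed point problem x = Tx with Tx = Ax + b, and the proximal operator that regularizes each iterate toward the previous one:

```latex
\[
T^{(\lambda)} x \;=\; (1-\lambda) \sum_{\ell = 0}^{\infty} \lambda^{\ell}\, T^{\ell + 1} x,
\quad \lambda \in [0,1),
\qquad\qquad
\operatorname{prox}_{c f}(x) \;=\; \arg\min_{z} \Bigl\{ f(z) + \tfrac{1}{2c}\,\lVert z - x \rVert^{2} \Bigr\}.
\]
```

The acceleration result in the talk exploits the close relation between the two: under suitable assumptions the proximal iteration can be extrapolated toward the multistep iteration, with lambda tied to the proximal parameter c, and the multistep iteration generically converges faster.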

Location: Primrose C
13:00-14:00 Session 11: Next-generation Network Interdiction Algorithms

Tutorial by Prof. Cole Smith

Research interest and publications regarding network interdiction problems have expanded at a dizzying pace in recent years. This tutorial is designed to briefly introduce the field of interdiction problems and to illustrate some areas of future research based on recent developments appearing in the literature and at this conference. We first cover classical algorithms, particularly those that rely on ad-hoc formulations, duality-based reformulations, and cutting-plane algorithms. Next, we focus on emerging algorithms, variations on interdiction problems, and new applications for interdiction. The algorithms we will discuss include new sampling techniques for interdiction and fortification, along with approaches that exploit special structures in interdiction problems. Some recent variations on interdiction problems include diversion problems where routes are forced onto a particular arc, and dynamic interdiction problems in which the attacker and defender repeat their moves sequentially. Finally, the tutorial will investigate new areas in which interdiction applications are being used, including logistics, cybersecurity, and critical infrastructure protection.

Location: Primrose C
14:00-15:30 Session 12A: Decision Analysis Applications in Healthcare
Location: Lantana A
14:00
Decision Analysis Applications in Orthopaedics
SPEAKER: Karim Meijer

ABSTRACT. With the continued advances in the field of Orthopaedics, surgeons now have multiple options for treating many musculoskeletal system conditions. At the same time, there has been an increased focus on costs and risks in the health care system, and there is now more data available than ever on quantifiable financial cost for procedures and treatment outcome probabilities. This provides an ideal setting for the increased use of Decision Analysis tools to guide health care providers and patients in making decisions about their treatment plans. In this paper, we provide a compilation of work to develop decision making models for three conditions: adult mid-shaft clavicle (collarbone) fracture, osteoarthritis or other conditions necessitating knee replacement surgery, and Lisfranc injury (foot injury involving displacement of metatarsal bones). In each application, the objective is to minimize overall expected cost, including the possibility of secondary procedures, while also providing measures of the variance in expected costs and reporting the sensitivity to assumptions for both input costs and probabilities. The results of our analyses provide key insights into the treatment of these three conditions based upon available data, and also demonstrate the application of decision tree models that can be customized (e.g., to a particular health care facility and location) to facilitate decision making on a case-by-case basis.

14:30
An Inverse Optimization Approach to Estimating Lipid Management Guidelines’ Risk Value of a Life Year on Treatment

ABSTRACT. Type 2 diabetes is the leading cause of cardiovascular (CV) complications, and the main objective of CV risk management has recently shifted to reducing lipid abnormalities. Statins, widely regarded as the primary agents for lipid management, help reduce cholesterol but may have adverse side effects, including damage to the liver and muscles. National lipid management guidelines aim to guide the complex statin initiation decisions but make no specific mention of the side effects of statins. We formulate the one-time statin initiation decision problem as a dynamic decision model to minimize the longitudinal risk of a first major CV event, based on which an inverse dynamic optimization model is developed to estimate the risk value of a life-year on treatment, i.e., the perceived side effect of statins in terms of the increase in the annual risk of a major CV event. The state space of our model includes the discretized total cholesterol and HDL ranges and an absorbing state representing the first major CV event or a non-CV-related death. We utilize data from Mayo Clinic, the UKPDS risk model, and CDC mortality rates to estimate the transition probabilities among states. Our analyses of guidelines from developed countries and regions reveal substantial variation in the elicited risk value of a life-year on treatment, which ranges within [0.07%, 0.23%] for males and [0.04%, 0.29%] for females. Based on the evidence that none of the guidelines are performing optimally, our computations indicate that the duration of treatment can be shortened by up to 4.43 years without compromising overall CV risk.

15:00
Challenges In Markov Modeling Of Cancer Treatment
SPEAKER: Jiaru Bai

ABSTRACT. We present a way to build a Markov decision tree to model cancer progression and conduct cost-effectiveness analysis for two or more cancer treatments. We identify several problems researchers can encounter in this kind of research and provide possible solutions.

14:00-15:30 Session 12B: Power Flow Planning and Optimizations

Energy Session

Location: Lantana B
14:00
Incorporating Operational Flexibility in Generation Expansion Planning Through a Convex Relaxation of Unit Commitment
SPEAKER: Bowen Hua

ABSTRACT. Large shares of renewable generation in electric power systems increase the need for operational flexibility. Consideration of operational flexibility in generation expansion planning (GEP) necessitates the modeling of unit commitment (UC) in system operations. However, the UC problem itself is computationally challenging. We present a convex relaxation of the UC problem as a tractable approximation for system operations. In this relaxation, we explicitly describe for each individual generating unit the convex hull of its feasible set and the convex envelope of its cost function. We show the tightness of our relaxation and the impact of operational flexibility on GEP through a large-scale example based on the Texas system.

14:23
Parallelized Interior Point Method for Security Constrained Optimal Power Flow (SCOPF)
SPEAKER: Na Li

ABSTRACT. The security constrained optimal power flow problem (SCOPF) seeks an operating point for the electric power system to minimize costs while ensuring system safety in the event of contingencies. Solving SCOPF for real-time operations is challenging for large power systems due to the large size of the resulting optimization program. We consider the SCOPF problem on tree networks, such as distribution systems. In SCOPF, different contingencies are only coupled via the power injection control variables, yielding a sparse system matrix. We design a domain decomposition technique based on this sparsity to parallelize the problem across different contingencies. For each subproblem associated with a given contingency, we further exploit the network structure through graph coloring techniques to enhance parallelism. In summary, we design our method to utilize two layers of parallelism: 1) across contingencies and 2) across buses in the network. The parallelization is effective due to the sparsity induced by the weak coupling among different contingencies, the tree network connectivity, and the physical laws of power systems.

14:45
Relaxations for Optimal Power Flows: Recent Progress and Computational Results

ABSTRACT. This presentation reviews progress in convex relaxations and bound tightenings for optimal power flows. Extensive computational results will be reported.

15:07
Minor Relaxation and SOCP Based Spatial Branch-and-Cut Method for the Optimal Power Flow Problem
SPEAKER: Burak Kocuk

ABSTRACT. We consider the Optimal Power Flow (OPF) problem modeled as a semidefinite program (SDP) with additional minor restrictions. A thorough analysis of the properties of the minors and submatrices of the matrix variable used in the formulation leads us to a second-order cone programming (SOCP) relaxation of the OPF problem. By making use of global optimization techniques, including cutting planes, convex envelopes, and bound tightening methods specialized for this problem, we aim to solve difficult instances from the NESTA archive. Our approach efficiently proves stronger dual bounds than the ones obtained from SDP relaxation based approaches in the literature. We also propose an SOCP based spatial branch-and-cut method to globally solve the most challenging instances, for which our algorithm is able to prove a 0.79% optimality gap in only 323 seconds on average. Overall, our methodology provides a computationally tractable approach to proving strong dual bounds for the OPF problem.
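
For context, in the SDP formulation the matrix variable W stands in for the outer product of the complex bus voltage vector, W = VV^H, and requiring every 2x2 principal minor of W to be nonnegative yields second-order cone representable constraints of the form

```latex
\[
\lvert W_{ij} \rvert^{2} \;\le\; W_{ii}\, W_{jj},
\qquad W_{ii} \ge 0,
\]
```

which is the kind of SOCP relaxation the authors then tighten with cutting planes, convex envelopes, and bound tightening inside the spatial branch-and-cut scheme.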

14:00-15:30 Session 12C: Health Applications of Pooling, Batching and Resource Decisions
Location: Plumeria A
14:00
Physiology-Based Anticipative ICU Management

ABSTRACT. Around 20% of hospital operating costs are due to ICUs, and this percentage has been increasing. The efficient operation of ICUs is crucial to providing high quality of care while controlling costs. In this paper, we consider the transfer operations of patients to a downstream unit. In current practice, downstream beds are requested once a patient is clinically ready to be transferred. We investigate anticipative bed requests that can be made before a patient is ready for transfer. Patient health is described via a novel transfer readiness score that we incorporate into an infinite horizon Markov decision process model, which we solve through an approximation algorithm. We establish the existence of a threshold-type request policy for a single patient version of the problem. Our numerical results indicate that an anticipative transfer request policy can significantly improve system performance, e.g., a 10-bed unit under our proposed policy performs as well as a 12-bed unit under current practice, and mean transfer delay can be reduced by more than 50%. We further investigate the sensitivity of the policy to cost-parameter estimation errors using robust models, and demonstrate that proactive strategies are more beneficial than the current reactive policy in most scenarios.

14:23
Optimal Pooling Strategies for Nucleic Acid Testing of Donated Blood Considering Viral Load Growth Curves and Donor Characteristics
SPEAKER: Hadi El Amine

ABSTRACT. Blood product safety, in terms of being free of transfusion-transmittable infections (TTIs), is crucial. Nucleic Acid Testing (NAT) technology enables earlier detection of infections but is more expensive, hence, most blood centers administer NAT to pools of blood samples from multiple donors. Since some donor characteristics are uncertain, we develop a chance-constrained model that determines the optimal NAT pool sizes for various TTIs, considering both non-universal (where first-time donors undergo more extensive screening), and universal (i.e., common testing for all donors) strategies, so as to minimize the TTI risk, while remaining within the testing budget with a high probability.

14:45
Optimizing the Timing and Number of Batches for Compounded Sterile Product Production in an In-Hospital Pharmacy
SPEAKER: Vera Tilson

ABSTRACT. Hospital pharmacy departments traditionally batch the production of Compounded Sterile Products (CSP). The batches are intended to satisfy several hours of demand, and the cancellation of orders by physicians can lead to considerable medication waste. In making the decision on the size of batches, pharmacy managers trade off the “holding cost” of carrying inventory, which in this context is largely the cost of wasted doses, and the “setup cost” of the labor for delivering the prepared medication to the various units of a hospital. Although this trade-off superficially resembles that of a classic batching problem, it turns out to be quite different. This is primarily because the sequence of batches must repeat in a 24-hour cycle and the holding cost, related to the waste of CSP medications, varies over the 24-hour cycle according to the pattern of order cancellations, which in turn depends on the schedule of physicians’ reevaluating the treatments of patients under their care. In addition, the setup cost, related to the cost of the delivery, varies slightly with the cost of labor at different times of the day. The contribution of this work is twofold. Firstly, it extends the considerable literature on lot sizing by introducing a formulation for deciding the optimal number and timing of batches as an integer programming problem that caters for, and minimizes, the novel forms of holding and setup costs described above. Secondly, we propose a computationally efficient dynamic programming methodology for the batch scheduling optimization problem.

15:07
Optimal Decision Making in the Processing of Human Breast Milk
SPEAKER: Ruichen Sun

ABSTRACT. Donated breast milk – collected, processed and dispensed via milk banks – is the standard of care for premature and unhealthy infants whose mothers cannot provide adequate supply. We formulate a multi-criteria mixed-integer program to optimize the daily decisions involved in the pooling of milk from different donors to meet macronutrient requirements across different product types, and the batching of pooled milk for efficient pasteurization. Our numerical results demonstrate significant improvements compared to historical decisions at our partner milk bank.

14:00-15:30 Session 12D: Exploiting structure in integer programs
Location: Plumeria B
14:00
Models and Algorithms for Maximum Proportional Flow Problems with Semicontinuous Restrictions

ABSTRACT. We consider a variation of the multi-source, multi-sink maximum flow problem in which flow must emanate from the source nodes according to a prescribed rate, while flow arrives to the sink nodes at another given rate. Additionally, we restrict flow variables to be semicontinuous, in which the flow must either be 0 or no less than some lower bound. We call this problem the semicontinuous maximum proportional flow problem (SC-MPFP) since the amount of outgoing flow must leave the source nodes and arrive at the sink nodes according to a given proportional pattern. We consider simultaneous and non-simultaneous flow settings for the SC-MPFP and examine semicontinuous restrictions on network flow paths and patterns. We first introduce a compact formulation for the SC-MPFP having simultaneous flows. To solve the SC-MPFP, we decompose the formulation and employ a Branch-and-Price algorithm. For semicontinuous restrictions on paths, the pricing subproblem can be solved as a shortest path instance, while semicontinuous restrictions on patterns yield a minimum-cost flow subproblem. The associated branching mechanism maintains the special structure of each subproblem to ensure an efficient solution technique. We lastly provide a compact formulation for the SC-MPFP with restrictions on flow paths and patterns in non-simultaneous flow settings.

14:30
Solving The Undirected Network-Diversion Problem in Polynomial Time
SPEAKER: Kevin Wood

ABSTRACT. A network-diversion problem (ND) on graph G seeks a minimum-weight s-t cut that contains a prespecified “diversion edge.” ND has military and intelligence-gathering applications. ND on a directed graph (DND) is known to be NP-complete; we show that the undirected version of the problem (UND) can be solved in polynomial time. To solve UND, we tighten a standard integer-programming formulation for DND that seeks an s-t path that intersects an s-t cut only at the diversion edge: the new formulation replaces the path with a special spanning tree that is identified by solving a matroid-intersection problem.

15:00
Decomposition of loosely coupled integer programs: a multiobjective perspective (WITHDRAWN)

ABSTRACT. We consider integer programming (IP) problems consisting of (possibly a large number of) interrelated subsystems and a small number of coupling constraints that link variables from different subsystems. We call such problems "loosely coupled". Motivated by recent developments in multiobjective programming (MOP), we develop a MOP-based decomposition algorithm to solve loosely coupled IPs. More specifically, we reformulate the problem in such a way that it can be decomposed into a (resource-directive) master problem and a set of MOP subproblems. The proposed algorithm iteratively generates columns for the master problem. However, unlike traditional column generation methods, the master problem is an IP and considers a differently structured (and usually smaller) set of columns. Columns are added to the master problem IP until its solution provides an optimal solution to the original problem. One advantage of the approach is that solution of the master problem and subproblems can be done with standard IP solvers, exploiting the sophisticated techniques they embed; there is no need for a tailored branch-and-bound method to be implemented. We provide preliminary computational results, demonstrating the potential benefits of our approach.

14:00-15:30 Session 12E: Network Architecture Design
Location: Camellia A
14:00
Cyber Defense Based on Network Structure
SPEAKER: Murat Karatas

ABSTRACT. Infrastructures, such as university nuclear reactors, are controlled through cyber-physical systems. Recent attacks on government offices and foreign consulates showed that assessing the vulnerability of these systems is key in directing defensive investment. We present a Markov decision process (MDP) to compute an optimal attack policy. The MDP has an exponential number of states and is based on tracking the set of available attacks for each link in the network. We show that it is possible to compute values for each MDP state, and optimal attack policies, using s-t reliability. We also show that finding an optimal defensive investment strategy is a network interdiction model. Such a policy can be calculated using both a direct approach and a simulation approach. The example we use is a company network with 20 nodes and 25 edges.

14:30
Fast Design of Wireless Mesh Networks to Defend Against Worst-Case Jamming
SPEAKER: Paul Nicholas

ABSTRACT. Wireless mesh networks (WMNs) are interconnected systems of wireless access points (APs) that provide untethered network connectivity for a group of users who require data, voice, and/or video communication. The wireless access medium of a WMN makes it vulnerable to electromagnetic attack and interference. We apply the game theoretic defender-attacker-defender (DAD) optimization modeling technique to the design, attack, and operation of a WMN. We present a sampling-based algorithm and associated decision-support tool that can quickly prescribe WMN topologies to minimize the worst possible disruption an adversary can inflict through deliberate interference, i.e., jamming. Our model considers radio operating characteristics, the relative importance of client coverage and network throughput, and the effects of radio propagation over terrain. To our knowledge, we are the first to apply the DAD framework to the problem of WMN topology design.

15:00
Optimal Route Planning for Airborne Sensors (WITHDRAWN)
SPEAKER: Jose Walteros

ABSTRACT. We tackle a variation of the close-enough traveling salesman problem in which the salesman is credited with visiting a node if he traverses a predefined distance through a circular area surrounding the node. This variation arises in the context of unmanned aerial vehicle routing, where a vehicle must cross an area collecting information from a set of targets while minimizing detection risk. We consider two approaches for modeling the tradeoff between the amount of collected information and the observed risk and test them by solving a collection of instances adapted from the literature.

14:00-15:30 Session 12F: Distributionally Robust Stochastic Programming
Location: Camellia B
14:00
Data-Driven Risk-Averse Two-Stage Stochastic Program with Zeta-Structure Probability Metrics
SPEAKER: Yongpei Guan

ABSTRACT. The traditional two-stage stochastic programming approach assumes the distribution of the random parameter in a problem is known. In practice, however, the distribution is usually unknown. Instead, only a series of historical data is available. In this paper, we develop a data-driven stochastic optimization approach to provide risk-averse decision making under uncertainty. In our approach, starting from a given set of historical data, we first construct a confidence set for the unknown probability distribution utilizing a family of Zeta-structure probability metrics. Then, we describe the reference distributions and solution approaches to solving the developed two-stage risk-averse stochastic program, corresponding to the given set of historical data, for the cases in which the true probability distributions are discrete and continuous, respectively. Furthermore, we prove that, for both cases, the risk-averse problem converges to the risk-neutral one as more data samples are observed. Finally, experimental results on newsvendor and facility location problems show numerically how the optimal objective value of the risk-averse stochastic program converges to the risk-neutral one, which indicates the value of data.

14:22
Two-Stage Distributionally Robust Linear Programming over Wasserstein Balls

ABSTRACT. Adaptive robust optimization problems are usually solved approximately by restricting the adaptive decisions to simple parametric decision rules. However, the corresponding approximation error can be substantial. In this paper we show that two-stage robust and distributionally robust linear programs can often be reformulated exactly as conic programs that scale polynomially with the problem dimensions. Specifically, when the ambiguity set constitutes a 2-Wasserstein ball centered at a discrete distribution, then the distributionally robust linear program is equivalent to a copositive program (if the problem has complete recourse) or can be approximated arbitrarily closely by a sequence of copositive programs (if the problem has sufficiently expensive recourse). These results directly extend to the classical robust setting and motivate strong tractable approximations of two-stage problems based on semidefinite approximations of the copositive cone. We also demonstrate that the two-stage distributionally robust optimization problem is equivalent to a tractable linear program when the ambiguity set constitutes a 1-Wasserstein ball centered at a discrete distribution and there are no support constraints.
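
For reference, with the empirical distribution of the N observed samples at the center, the ambiguity set is a Wasserstein ball and the two-stage distributionally robust problem takes the schematic form

```latex
\[
\mathcal{B}_{\varepsilon}(\widehat{P}_{N}) = \bigl\{\, Q \;:\; W_{p}(Q, \widehat{P}_{N}) \le \varepsilon \,\bigr\},
\qquad
\min_{x \in X} \; c^{\top} x \;+\; \sup_{Q \in \mathcal{B}_{\varepsilon}(\widehat{P}_{N})} \mathbb{E}_{Q}\bigl[ Z(x, \xi) \bigr],
\]
```

where Z(x, ξ) is the optimal second-stage cost and W_p is the type-p Wasserstein (optimal transport) distance; the copositive results in the abstract concern the p = 2 case, and the tractable linear-programming result the p = 1 case without support constraints.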

14:44
On deterministic reformulations of distributionally robust joint chance constrained optimization problems
SPEAKER: Shabbir Ahmed

ABSTRACT. A joint chance constrained optimization problem involves multiple uncertain constraints, i.e., constraints with stochastic parameters, that are jointly required to be satisfied with probability exceeding a prespecified threshold. In a distributionally robust joint chance constrained optimization problem (DRCCP), the joint chance constraint is required to hold for all probability distributions of the stochastic parameters from a given ambiguity set. In this work, we consider DRCCP involving convex nonlinear uncertain constraints and an ambiguity set specified by convex moment constraints. We investigate deterministic formulations of such problems and conditions under which such formulations are convex. In particular we show that a DRCCP can be formulated as a deterministic convex program if one of the following conditions holds: (i) there is a single uncertain constraint, (ii) the ambiguity set is defined by a single moment constraint, (iii) the ambiguity set is defined by linear moment constraints, and (iv) the uncertain and moment constraints are positively homogeneous with respect to the uncertain parameters. We further show that if the decision variables are binary and the uncertain constraints are linear then a DRCCP can be formulated as a deterministic mixed integer convex program.
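
Schematically, a distributionally robust joint chance constraint with a moment ambiguity set requires

```latex
\[
\inf_{P \in \mathcal{D}} \; P\bigl\{\, g_{i}(x, \xi) \le 0, \;\; i = 1, \dots, m \,\bigr\} \;\ge\; 1 - \epsilon,
\qquad
\mathcal{D} = \bigl\{\, P \;:\; \mathbb{E}_{P}\bigl[ \psi_{j}(\xi) \bigr] \le 0, \;\; j = 1, \dots, k \,\bigr\},
\]
```

and conditions (i)-(iv) above identify when this semi-infinite requirement collapses to a deterministic convex constraint in x.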

15:07
Distributionally Robust Newsvendor Problem with Variation Distance

ABSTRACT. We study a general class of newsvendor models where the underlying demand distribution is unknown but continuous. We assume the underlying distribution lies in an ambiguity set of distributions and minimize the worst-case expected cost with respect to this set. We form the ambiguity set by considering distributions whose variation distance to a nominal distribution is within a prescribed value. We characterize the optimal solution to this problem at different levels of robustness, relating it to the risk-neutral solution. We show the optimal solution is monotone in the level of robustness, and it stabilizes at a critical order quantity when the decision maker’s level of risk is higher than a critical level of robustness. Consequently, some costs also stabilize after a critical level of robustness. We investigate which demand levels are more important than others. We show that the set of critical demands is monotonically shrinking and converges to a set of measure zero as the robustness level increases. Numerical results show that the optimal decision of the risk-neutral model plays an important role to characterize the most important demand regions. Finally, we end with insights gained from our analysis.

14:00-15:30 Session 12G: Data-driven Medical Optimization
Location: Verbana
14:00
Unsupervised Bayesian Hierarchical Methods For Medical Fraud Assessment
SPEAKER: Tahir Ekin

ABSTRACT. U.S. governmental agencies report that three to ten percent of the annual health care spending is lost to fraud, waste and abuse. These fraudulent transactions have direct cost implications to the tax-payers and diminish the quality of the medical services. This talk discusses the use of unsupervised data mining approaches as pre-screening tools to aid in medical fraud assessment. They can help identify the hidden patterns among providers and medical procedures via outlier detection and similarity assessment. The focus will be on Bayesian hierarchical methods, and their use and potential insights will be presented using U.S. Medicare Part B data.

14:22
OSPEDALE, An Agent-based Simulation System for Child Welfare Analytics
SPEAKER: Fred Wulcyzn

ABSTRACT. There are roughly 3 million reports of child abuse and neglect each year. The cost of caring for children who are victims approaches $28 billion annually, a figure that does not include the cost of health care, education, or behavioral health care designed to address the sequelae of maltreatment. Foster care is one service offered to children when parents are not able to raise children safely. As a form of care, placement away from parents is intrusive, expensive, and of questionable clinical utility. As a consequence, states are always trying to manage how much foster care they use. Not surprisingly, the system used to manage foster care is a complicated one. In particular, it is hard to say how efforts to regulate pay off in terms of better outcomes for children and lower costs for taxpayers - a perennial question when government tries to solve social problems. OSPEDALE is an agent-based simulation model designed to help policy-makers make better decisions. In this presentation, we describe OSPEDALE, its origins, and its intended use. As a simulation model, OSPEDALE is unique in that we have extensive real-world data with which to parameterize the underlying dynamics. To highlight the utility of the model, we describe how the model is used to depict the interplay between micro and macro forces within the system and the resulting system behavior. We then illustrate how policy solutions are vetted using the simulation engine to test whether policies will have their intended impact.

14:44
Robust Maximum Likelihood Estimation with Application to Radiation Therapy
SPEAKER: Omid Nohadani

ABSTRACT. In many applications, statistical estimators serve to derive conclusions from data, most prominently in finance, medical decision-making and clinical trials. However, the conclusions are typically dependent on uncertainties in the data. We use robust optimization principles to construct robust maximum likelihood estimators that are immune against data errors. Both error types are considered: a) adversarial type, modeled using the notion of uncertainty sets, and b) probabilistic type, modeled by distributions. We provide efficient local and global search algorithms to compute the robust estimators and discuss them in details for the case of multivariate normally distributed data. The estimator performance is demonstrated on two datasets. First, using computer simulations, we demonstrate that the proposed estimators are robust against both types of data uncertainty and provide significantly more accurate estimates, compared to classical estimators which degrade significantly, when errors are encountered. We establish a range of uncertainty sizes, for which robust estimators are superior. Second, we analyze deviations in cancer radiation therapy planning. Uncertainties amongst plans are caused by patients’ individual anatomies and the trial-and-error nature of the process. Robust estimators prove to result in more reliable decisions when applied to a large set of past treatments. (joint work with D. Bertsimas)
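
The min-max idea behind these robust estimators can be pictured with a deliberately simplified univariate sketch; the Gaussian model, the interval uncertainty set of radius r, and all data below are assumptions for illustration and do not reproduce the multivariate algorithms discussed in the talk.

# A simplified, univariate illustration of a robust (min-max) maximum
# likelihood estimate under adversarial data errors. Assumptions: Gaussian
# data with known variance and an interval uncertainty set |delta_i| <= r
# for each observation.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=1.0, size=200)     # clean data (illustrative)
x[:10] += 3.0                                    # a few corrupted observations
r = 0.5                                          # assumed error radius

def worst_case_nll(mu, sigma=1.0):
    # For fixed mu, the adversary shifts each point away from mu by r,
    # which maximizes the squared residual, hence the negative log-likelihood.
    worst = np.abs(x - mu) + r
    return np.sum(worst**2) / (2 * sigma**2)

robust_mu = minimize_scalar(worst_case_nll, bounds=(x.min(), x.max()),
                            method="bounded").x
print("classical MLE:", x.mean(), " robust estimate:", robust_mu)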

15:07
High accuracy body part learning from 3D depth images

ABSTRACT. Learning body parts, body shape, joint locations, and more generally anthropometric measures from visual data is an important task in computer vision and machine learning, as it forms the basis for many human-computer interfaces. Usually these learning tasks are performed on 2D RGB images and/or 3D depth images, and there is a plethora of algorithms capable of performing this task. These algorithms are optimized for real-time performance, motion tracking, and stability across a huge variety of body poses and clothing, and in consequence often trade off estimation accuracy. In healthcare and medical diagnostics applications, however, the requirements are very different. Measurement and estimation accuracy trumps real-time speed and motion-tracking requirements. Moreover, the patient is often directed into a specific pose, so that pose invariance is secondary as well. As such, the performance trade-off of many of the aforementioned algorithms is suboptimal for medical diagnostics. We present a two-pass machine learning approach to identify body parts and estimate anthropometric measures with very high accuracy. We present two variants, one based on random forests and another based on deep learning. Training is performed with diagnostics applications in mind via a customized CGI pipeline, where a large variety of synthetic human body models is generated simulating anthropometric feature progression (e.g., various stages of a medical condition), and final validation is performed on real-world data. We will also discuss applications of the obtained algorithm in ongoing real-world projects.

15:30-16:00 Coffee Break
16:00-17:30 Session 13A: Control, Optimization, and Computations for Energy Systems

Energy Session

Location: Lantana A
16:00
Construction of inner feasible approximations for AC power flow

ABSTRACT. The AC power flow equations are central to most models of the power system and constitute a fundamental constraint that needs to be satisfied in power systems operations. The nonlinear and nonconvex nature of these equations creates significant difficulties for optimization modeling, particularly in the presence of uncertainty (for example, in the face of increasing penetration of renewable energy resources in the grid). There has been much interest in convex relaxations to compute lower bounds on optimization problems. However, the limitation of these techniques is that they may fail to find a feasible solution. We present systematic techniques to compute inner approximations of the feasible set of the AC power flow problem, i.e., regions in injection space that are guaranteed to have a feasible power flow solution satisfying the operational constraints of the power system. The resulting regions can be used fruitfully for various applications, such as quickly computing corrective actions in the face of contingencies and computing robust-feasible setpoints for generators.

16:30
Two-Stage Stochastic Programs for Integration of Server Provisioning and Power Procurement for Energy-Efficient Data Centers
SPEAKER: Soongeol Kwon

ABSTRACT. This study is motivated by pressing issues around the tremendous increase of energy consumption and carbon pollution in energy-intensive industries, typically data centers. In general, data centers suffer from low utilization of servers against time-varying and heterogeneous workloads, which causes unnecessary energy cost and pollution. Therefore, energy consumption can be significantly reduced by implementing server provisioning techniques that adapt the number of active servers proportionally to workloads while providing quality-of-service guarantees. In addition, there exists an opportunity to reduce energy cost by utilizing renewable energy to meet power demand and by adopting demand response that adjusts energy consumption to avoid peak electricity prices and use more renewable energy. Given these opportunities, the main objective of this study is to suggest a new decision-making methodology tailored to determine server provisioning and power procurement in an integrated fashion for energy-efficient operations of data centers, incorporating renewable energy and demand response. Specifically, we propose a two-stage stochastic programming problem and solution approaches to obtain reliable and efficient solutions in the presence of uncertainty and stochasticity represented using scenarios. To the best of our knowledge, a methodology that integrates server provisioning and power procurement with renewable energy and demand response based on two-stage stochastic programs has not been introduced in the literature.
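
For readers unfamiliar with the extensive form of a two-stage stochastic program, the following Pyomo sketch couples a single here-and-now purchase decision with per-scenario recourse purchases; the scenario data, costs, and variable names are illustrative assumptions, not the authors' model.

# A minimal extensive-form sketch of a two-stage model in the spirit described
# above, written in Pyomo. All data (costs, workload, renewable scenarios) are
# illustrative assumptions.
import pyomo.environ as pyo

scenarios = {"s1": 30.0, "s2": 55.0, "s3": 80.0}   # renewable output per scenario (kW)
prob = {"s1": 0.3, "s2": 0.4, "s3": 0.3}
workload_kw = 100.0                                # power needed for the forecast workload
c_dayahead, c_realtime = 0.05, 0.12                # $/kWh

m = pyo.ConcreteModel()
m.S = pyo.Set(initialize=list(scenarios))
m.buy_da = pyo.Var(domain=pyo.NonNegativeReals)            # first-stage decision
m.buy_rt = pyo.Var(m.S, domain=pyo.NonNegativeReals)       # recourse per scenario

def balance_rule(m, s):
    # Day-ahead purchase + real-time purchase + renewables must cover the load.
    return m.buy_da + m.buy_rt[s] + scenarios[s] >= workload_kw
m.balance = pyo.Constraint(m.S, rule=balance_rule)

m.cost = pyo.Objective(
    expr=c_dayahead * m.buy_da
         + sum(prob[s] * c_realtime * m.buy_rt[s] for s in m.S),
    sense=pyo.minimize)

# pyo.SolverFactory("glpk").solve(m)   # requires an LP solver to be installed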

17:00
Data Centers as Dispatchable Loads to Harness Stranded Power
SPEAKER: Kibaek Kim

ABSTRACT. We analyze how traditional data center placement and optimal placement of dispatchable data centers affect power grid efficiency. We use detailed network models, stochastic optimization formulations, and diverse renewable generation scenarios to perform our analysis. Our results reveal that significant spillage and stranded power will persist in power grids as wind power levels are increased. A counter-intuitive finding is that collocating data centers with inflexible loads next to wind farms has limited impacts on renewable portfolio standard (RPS) goals because it provides limited system-level flexibility. Such an approach can, in fact, increase stranded power and fossil-fueled generation. In contrast, optimally placing data centers that are dispatchable provides system-wide flexibility, reduces stranded power, and improves efficiency. In short, optimally placed dispatchable computing loads can enable better scaling to high RPS. In our case study we find that these dispatchable computing loads are powered to 60-80% of their requested capacity, indicating that there are significant economic incentives provided by stranded power.

16:00-17:30 Session 13B: Advances in Integer Programming

In this session we explore theoretical and practical advances involving Integer Programming. These advances include a novel algorithm for Distributed Integer Programming, Approximation guarantees for Gomory's corner polyhedron, and an IP-based cutting-plane approach for approximating sparse polynomial optimization problems.

Location: Lantana B
16:00
Auction Algorithms for Distributed Integer Programming Problems with a Coupling Knapsack Constraint

ABSTRACT. Distributed optimization is an active research area involving multiple processors that cooperate to solve optimization problems. The motivation for distributed optimization may either be big data that is difficult to store or process on a single machine, or independent agents that do not want to fully cooperate and share their data. Currently, most discrete optimization algorithms that are parallelizable require a central processor to coordinate the solutions. We propose algorithms that break this master-slave relationship between the processors and assign equal roles to every processor. The auction algorithm that we present addresses integer programming problems coupled by a knapsack constraint. We prove optimality of our auction algorithm for problems with a concavity property, and give examples of problems that have this concavity property. Furthermore, we give error bounds for the auction algorithm when applied to problems that do not have the concavity property but admit a concave approximation function, and we give example concave approximations.

16:30
Approximation guarantees for intersection cut closures
SPEAKER: Joseph Paat

ABSTRACT. Intersection cuts are derived from lattice-free sets and form valid inequalities for Gomory's corner polyhedron. Here we develop conditions for a family of intersection cuts to closely approximate the corner polyhedron. In particular, we characterise "strong" approximations based upon the number of facets of the underlying lattice-free sets. This work was done in collaboration with Gennadiy Averkov from the University of Magdeburg in Germany, and Amitabh Basu from Johns Hopkins University in the USA.

17:00
A cutting-plane approach to approximating polynomial optimization problems with small tree-width

ABSTRACT. In the past, we have presented a class of linear programming approximations for mixed-integer polynomial optimization problems that take advantage of structured sparsity in the constraints. We showed that if the intersection graph of the constraints (obtained by creating a node per variable and an edge whenever two variables appear in a common constraint) has tree-width bounded by a constant, then for any arbitrary tolerance there is a linear programming formulation of polynomial size that approximates the problem. While useful from a theoretical standpoint, this formulation does not provide an effective way to approximate real polynomial optimization instances, as the result relies on building the formulation completely and from scratch. In this talk we will explore an IP-based cutting-plane approach, inspired by a reinterpretation of the same LP approximation, that can construct an equivalent formulation while providing bounds on the fly. This cutting-plane method also adds more flexibility by allowing other approximations/relaxations, as well as heuristics, to be combined.

16:00-17:30 Session 13C: Robust and Resilient Networks
Location: Plumeria A
16:00
Computational Aspects of Resilience-Driven Restoration of Interdependent Networks

ABSTRACT. Critical infrastructure networks such as electric power, water distribution, natural gas, transportation, and telecommunications provide essential services both to society and to other infrastructure that relies on their performance. As such, they are highly vulnerable to disruptive events (e.g., common-cause failures, malevolent events, natural disasters), which makes their restoration a challenging task for decision makers. In this work, we propose a solution approach for a multi-objective mixed-integer restoration model that aims to maximize the resilience of interdependent infrastructure networks while minimizing the total cost associated with the restoration process. The restoration model considers the availability of limited time and resources and provides a prioritized list of components, nodes or links, to be restored, along with assigning and scheduling them to the available work crews.

16:22
Influences of Topological Properties on The Robustness of Power Law Networks Under Spatially Correlated Failures
SPEAKER: Seth Guikema

ABSTRACT. Wide-scale natural disasters, such as hurricanes, earthquakes, and wildfires, can inflict tremendous damage on large, distributed infrastructure networks. Accurately and quickly predicting how these networks may respond to spatially correlated threats is a crucial step in service restoration. Topological properties offer a convenient heuristic for measuring properties of the altered network, but it is not clear if and how these properties translate into network robustness. In order to investigate this influence, we subject 2,000 randomly generated power-law networks to spatially correlated failures of different intensities. Using this as the response, we show how topological properties and properties of the threat can a priori predict the robustness of these networks. This work builds from previous work on network robustness under randomly generated failures by introducing more realistic spatial correlation in the failure events.
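
A minimal version of this kind of experiment might look like the sketch below, where scale-free graphs are embedded at random in the unit square and a disc-shaped failure removes nearby nodes; the graph generator, failure geometry, and robustness metric are assumptions chosen for illustration.

# A small sketch of spatially correlated failures on power-law networks, with
# assumed parameters: nodes of a scale-free graph are placed uniformly in the
# unit square, and a failure removes every node within a random disc of
# radius r. Robustness is summarized by the relative size of the largest
# connected component that survives.
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)

def correlated_failure_robustness(n=500, m=2, r=0.2):
    g = nx.barabasi_albert_graph(n, m, seed=int(rng.integers(1_000_000)))
    pos = rng.random((n, 2))                       # random spatial embedding
    center = rng.random(2)
    failed = [v for v in g.nodes if np.linalg.norm(pos[v] - center) <= r]
    g.remove_nodes_from(failed)
    if g.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(g), key=len)) / n

scores = [correlated_failure_robustness() for _ in range(100)]
print("mean surviving giant-component fraction:", np.mean(scores))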

16:45
Enumeration Algorithms for Infrastructure Resilience Analysis

ABSTRACT. We propose a functional definition of infrastructure resilience based on parametric analysis of two-stage (attacker-defender) and three-stage (defender-attacker-defender) models that require enumeration of a potentially enormous number of optimization problems. We present computational techniques that use bounding arguments to significantly limit the enumeration while still providing useful measures of infrastructure resilience and support the use of faster heuristic algorithms for the most difficult of these problems.

17:08
Optimal Resilient Grid Design: Distribution and Transmission systems

ABSTRACT. "Modern society is critically dependent on the services provided by energy infrastructure networks. When natural disasters (e.g. Hurricane Sandy) occur, depending on the severity of damage, the ability of these networks (both distribution and transmission systems) to provide service is often degraded because of physical damage to the network components. However, well-placed upgrades to these grids can greatly improve post-event network performance. Hence, we pose the optimal electrical grid resilient design problem as a two-stage, stochastic mixed-integer program with damage scenarios and propose decomposition-based algorithms to solve medium-sized networks.

In the case of distribution grids, to preserve computational tractability, we adopted multi-commodity-based flow model to approximate the 3-phase unbalanced AC power flow equations. We further tighten this approximation using the linearized distribution flow based methods developed by Gan & Low (2014). In both these cases, we further tighten the relaxation of the model by incorporating infeasibility cuts using a well-known power flow simulator, GridLAB-D. Our numerical experiments suggest that unless the network impedance profile is drastically modified, our algorithms remain tractable and is a good approximation of the post-disaster voltage profile.

In context of transmission grids, we use the convex quadratic relaxations of AC power flow physics for loopy networks with additional resiliency options like FACTS devices. Under the option of these transmission-level control devices, we show that the stability criteria related to phase angle differences is a driving force in a drastic reduction of upgrade costs."

16:00-17:30 Session 13D: New Developments in Optimization Software
Location: Plumeria B
16:00
OSPEDALE - Agent-based Social Simulation in the Service of Child Welfare Issues
SPEAKER: Fred Wulczyn

ABSTRACT. There are roughly 3 million reports of child abuse and neglect each year. The cost of caring for children who are victims approaches $28 billion annually, a figure that does not include the cost of health care, education, or behavioral health care designed to address the sequelae of maltreatment. Foster care is one service offered to children when parents are not able to raise children safely. As a form of care, placement away from parents is intrusive, expensive, and of questionable clinical utility. As a consequence, states are always trying to manage how much foster care they use. Not surprisingly, the system used to manage foster care is a complicated one. In particular, it is hard to say how efforts to regulate pay off in terms of better outcomes for children and lower costs for taxpayers - a perennial question when government tries to solve social problems. OSPEDALE is an agent-based simulation model designed to help policy-makers make better decisions. In this presentation, we describe OSPEDALE, its origins, and its intended use. As a simulation model, OSPEDALE is unique in that we have extensive real-world data with which to parameterize the underlying dynamics. To highlight the utility of the model, we describe how the model is used to depict the interplay between micro and macro forces within the system and the resulting system behavior. We then illustrate how policy solutions are vetted using the simulation engine to test whether policies will have their intended impact.

16:22
Leveraging Model Composition in Pyomo
SPEAKER: William Hart

ABSTRACT. Complex, real-world optimization problems require the use of modeling languages to simplify the process of describing and analyzing optimization problems. Modeling languages simplify the expression of common structural features in models (e.g. objectives and constraints), and they automate the expression of low-level model representations that are used by solvers. A variety of recent modeling languages have also employed object-oriented design to reflect model structure and to associate solvers with specific types of models.

This talk describes the object-oriented design used in Pyomo, which leverages model composition to facilitate the expression of new classes of problems. Object composition is a design strategy for combining simple objects into more complex ones. In Pyomo, model objects contain component objects, and this relationship implicitly defines the model type. Composition has widely recognized advantages over inheritance, an alternative object-oriented design strategy. For example, it is difficult to employ inheritance to model problems that must be addressed with a combination of model features (e.g., bilevel programs with equilibrium conditions), because inheritance quickly leads to a proliferation of model classes that represent these combinations.

Pyomo’s use of composition is key to its extensibility. New types of models can be defined with modeling components defined by users, thus avoiding modifications to Pyomo’s core library. Models can be easily transformed into other forms with few restrictions, which facilitates the development of custom optimization workflows. Finally, hierarchical model representations can be rapidly constructed in a modular manner using reusable blocks.
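
A small illustration of composition with Pyomo Block components follows; the model itself (a reusable capacity block instantiated three times and tied together by a system constraint) is an illustrative toy, not an example from the talk.

# Composition via Block components: each block carries its own variables and
# constraints, and the enclosing model links the blocks together.
import pyomo.environ as pyo

def unit_block_rule(b, capacity):
    b.x = pyo.Var(domain=pyo.NonNegativeReals)
    b.cap = pyo.Constraint(expr=b.x <= capacity)

m = pyo.ConcreteModel()
m.units = pyo.Block([1, 2, 3], rule=lambda b, i: unit_block_rule(b, capacity=10 * i))
m.demand = pyo.Constraint(expr=sum(m.units[i].x for i in [1, 2, 3]) >= 25)
m.cost = pyo.Objective(expr=sum(i * m.units[i].x for i in [1, 2, 3]))

Because the blocks are self-contained, the same block rule can be reused across models or nested hierarchically, which is the modularity the talk highlights.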

16:44
OAR Lib: An Open Source Arc Routing Library
SPEAKER: Oliver Lum

ABSTRACT. We present an open source, arc routing Java library that has a flexible graph architecture with solvers for several uncapacitated arc routing problems and the ability to dynamically generate and visualize real-world street networks. The library will be hosted at https://github.com/Olibear/ArcRoutingLibrary. We describe the algorithms in the library, report computational performance, and discuss implementation issues.

17:07
Extensions to the Dynamic Optimization Modeling Tool pyomo.dae for Simulation and Model Initialization

ABSTRACT. Dynamic optimization problems include differential equations as constraints. The main challenge with these problems is dealing with the continuous dynamics introduced by the differential equations since most optimization solvers cannot handle them directly. Two strategies for solving dynamic optimization problems are multiple shooting and full discretization. Multiple shooting approaches rely on DAE solvers to repeatedly simulate the continuous dynamics over several time periods. Full discretization approaches approximate the continuous dynamics using algebraic equations and rely on solving a large-scale nonlinear programming problem. Solving this large problem depends on providing a good starting guess to the optimization solver. This can be done by simulating the dynamic model. Therefore, access to a DAE solver is valuable regardless of the dynamic optimization solution approach.

Our previous work introduced pyomo.dae as a tool for modeling and automatically discretizing dynamic optimization problems using the open-source modeling language Pyomo. This talk discusses some recent extensions to pyomo.dae for model simulation and initialization. The simulator feature takes a dynamic model formulated with pyomo.dae and solves the system of ODEs or DAEs. We provide interfaces to a wide variety of ODE and DAE solvers via Python packages such as Scipy and Assimulo. Furthermore, the simulator can be used to initialize discretized dynamic optimization problems in order to provide a good starting guess for optimization solvers. In this talk we will demonstrate these new features on a few sample problems. Finally, we will discuss future pyomo.dae developments enabled by these new extensions.
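
The workflow described above can be sketched as follows for a placeholder exponential-decay ODE; the model is illustrative, while the ContinuousSet/DerivativeVar/Simulator calls follow the standard pyomo.dae interface.

# Simulate a pyomo.dae model with the Simulator, then discretize and use the
# simulated trajectory to initialize the discretized model. The ODE itself is
# a placeholder for illustration.
import pyomo.environ as pyo
from pyomo.dae import ContinuousSet, DerivativeVar, Simulator

m = pyo.ConcreteModel()
m.t = ContinuousSet(bounds=(0, 10))
m.x = pyo.Var(m.t)
m.dxdt = DerivativeVar(m.x, wrt=m.t)
m.x[0].fix(1.0)                                   # initial condition
m.ode = pyo.Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -0.5 * m.x[t])

sim = Simulator(m, package="scipy")               # scipy-based ODE integration
tsim, profiles = sim.simulate(numpoints=100)

# Discretize and warm-start the (here trivial) optimization model.
pyo.TransformationFactory("dae.finite_difference").apply_to(m, nfe=20)
sim.initialize_model()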

16:00-17:30 Session 13E: Simulation and Sampling for Stochastic Programs
Location: Camellia A
16:00
Nested Augmented Simulation For Stochastic Optimization
SPEAKER: Tahir Ekin

ABSTRACT. We propose a novel simulation-based approach to solve stochastic programs with recourse under decision dependent uncertainty. We transform the stochastic optimization problem into an omnibus simulation of an augmented probability model where the decision variable is also treated as stochastic. We then develop a nested sampling algorithm to simulate the decision variable and uncertain parameters and obtain the optimal decision via collapsing of the samples to the mode of the augmented model. We illustrate our methodology on a newsvendor problem with stock-dependent uncertain demand for single and multi-item (news-stand) cases. We provide performance comparisons with MCMC-based augmented probability simulation and the traditional Monte Carlo simulation schemes.
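
One way to picture the augmented-probability idea is the following sketch for a plain newsvendor with Poisson demand (independent of the order quantity, unlike the stock-dependent case in the talk); the bounded utility, the uniform treatment of the decision variable, and the histogram-based mode extraction are illustrative assumptions rather than the authors' nested sampling algorithm.

# A compact sketch of augmented-probability simulation for a newsvendor:
# treat the decision q as random, weight (q, d) samples by a positive,
# bounded utility u = exp(-0.1 * cost), resample, and read off the marginal
# mode of q. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
c_under, c_over = 4.0, 1.0
n = 200_000

q = rng.uniform(0, 40, size=n)                    # decision treated as random
d = rng.poisson(lam=20, size=n)                   # demand samples
cost = c_over * np.maximum(q - d, 0) + c_under * np.maximum(d - q, 0)
w = np.exp(-0.1 * cost)                           # positive, bounded utility

# Resample pairs proportionally to the utility, then "collapse" to the mode
# of the q marginal via a histogram.
idx = rng.choice(n, size=n, p=w / w.sum())
hist, edges = np.histogram(q[idx], bins=80)
q_mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print("approximate optimal order quantity:", q_mode)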

16:30
Exact and Heuristic Approaches to the Vehicle Ferry Revenue Management Problem

ABSTRACT. In this paper we introduce the vehicle ferry pricing problem. Vehicle ferries typically transport a mix of vehicle types and the binding constraint is the capacity of the vehicle deck. The number of vehicles a ferry can transport depends on the quality of the packing solution. Here, we assume that passengers arrive into the booking system during a selling season that begins up to 6 months before departure. Purchase decisions are made based on the offered price and time of purchase. We use dynamic programming to set the prices charged through the selling season for each vehicle type, based on the remaining capacity of the ferry. The packing and pricing problem can be solved to optimality for small instances using an integer programming formulation to solve the packing problem. A simulation-based heuristic has been developed for more complex ferries, which uses a vehicle ferry loading simulator to map each vehicle mix to a remaining-space state. This reduces the state space of the dynamic program, enabling it to be solved rapidly. In this talk, we will describe the exact solution and the heuristic and demonstrate their performance over a range of instances.
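
A stripped-down dynamic-pricing recursion in the same spirit is sketched below, with a single vehicle type and a scalar remaining-capacity state standing in for the packing-based state space; all parameters and the purchase-probability model are assumptions for illustration.

# Finite-horizon dynamic-pricing DP: one potential arrival per period, a
# simple logit-style purchase probability, and a scalar remaining-capacity
# state in place of the vehicle-deck packing state. Illustrative data only.
import numpy as np

T, C = 180, 60                 # selling-season periods, deck capacity (units)
prices = [40.0, 60.0, 80.0]    # allowable price points
arrival_prob = 0.4

def purchase_prob(p, t):
    # Willingness to pay rises as departure (t = T) approaches.
    return 1.0 / (1.0 + np.exp(0.05 * (p - 50 - 20 * t / T)))

V = np.zeros((T + 1, C + 1))   # V[t, c]: optimal expected revenue to go
policy = np.zeros((T, C + 1))
for t in range(T - 1, -1, -1):
    for c in range(C + 1):
        best, best_p = V[t + 1, c], prices[-1]     # c == 0: nothing to sell
        if c > 0:
            for p in prices:
                sell = arrival_prob * purchase_prob(p, t)
                val = sell * (p + V[t + 1, c - 1]) + (1 - sell) * V[t + 1, c]
                if val > best:
                    best, best_p = val, p
        V[t, c], policy[t, c] = best, best_p
print("expected revenue from a full deck:", V[0, C])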

17:00
Variance Reduction for Sequential Sampling in Stochastic Programming
SPEAKER: Jangho Park

ABSTRACT. We investigate the use of the variance reduction techniques Antithetic Variates (AV) and Latin Hypercube Sampling (LHS) for sequential sampling in stochastic programming. We assume a sequence of solutions is given and employ AV or LHS to assess the quality of these solutions by sequentially increasing the sample size. The sequence of solutions can be obtained by any method satisfying certain convergence properties. Therefore, the sequential framework can be used by a variety of methods to find high-quality solutions with a desired probability. We present rules to stop sampling and to increase sample sizes. We update the theory, especially for LHS, and analyze asymptotic variances. We present computational experiments on a number of test problems. Experiments indicate that the use of AV and LHS leads to tighter confidence intervals on the quality of the solutions obtained.
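
The two sampling schemes can be illustrated on a fixed candidate solution as follows; the cost function, distribution, and sample sizes are assumptions, and the sketch only compares estimator variability rather than implementing the sequential stopping rules discussed in the talk.

# Crude Monte Carlo vs. antithetic variates vs. Latin hypercube sampling for
# estimating the expected cost of a fixed newsvendor-style decision.
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(3)
q = 105.0                                    # fixed candidate solution

def cost(q, d):
    return 1.0 * np.maximum(q - d, 0) + 4.0 * np.maximum(d - q, 0)

def crude_mc(n):
    d = norm.ppf(rng.random(n), loc=100, scale=20)
    return cost(q, d).mean()

def antithetic(n):
    u = rng.random(n // 2)
    d = norm.ppf(np.concatenate([u, 1 - u]), loc=100, scale=20)  # antithetic pairs
    return cost(q, d).mean()

def latin_hypercube(n):
    u = qmc.LatinHypercube(d=1, seed=rng).random(n).ravel()
    return cost(q, norm.ppf(u, loc=100, scale=20)).mean()

reps = 200
for name, est in [("MC", crude_mc), ("AV", antithetic), ("LHS", latin_hypercube)]:
    vals = [est(1000) for _ in range(reps)]
    print(f"{name}: mean={np.mean(vals):.3f}, std of estimator={np.std(vals):.4f}")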

17:29
A computational approach for multi-period stochastic programs with stationary processes (WITHDRAWN)

ABSTRACT. We present a rolling-horizon approach to solve multi-period stochastic programs where, at each time period, two levels of decisions are made: (a) planning and (b) recourse. These decisions are made in an environment governed by an underlying stationary stochastic process. The problem at each time period is solved using the two-stage stochastic decomposition (2-SD) algorithm. We will present novel techniques to warm-start this sequential sampling algorithm, which further enhances its computational capability. Results from experiments conducted on power systems application instances will also be discussed.

16:00-17:30 Session 13F: Geometry and Network Structure in Optimization
Location: Camellia B
16:00
Snug circumscribing simplexes for convex hulls

ABSTRACT. We propose procedures for enclosing convex hulls of finite m-dimensional point sets with simplexes. These simplexes are snug in the sense that they intersect the hull. We report on experimental results.

16:30
Changing Graph Structure for Performing Fast, Approximate Inference in Graphical Models
SPEAKER: Areesh Mittal

ABSTRACT. The complexity of exact marginal inference algorithms in probabilistic graphical models is exponential in the treewidth of the underlying graph. We develop a method to perform approximate inference on discrete graphical models by modifying the graph to another graph with a desirable edge structure. If the new graph structure has low treewidth, then performing exact inference on it becomes tractable. Performing exact inference on the new graph gives an approximate solution to inference on the original graph. We derive error bounds on the approximate inference solution as compared to the exact inference solution. We show that the problem of finding parameters of the new graph that give the tightest error bounds can be formulated as a linear program (LP). The number of constraints in the LP grows exponentially with the number of nodes in the graph. To address this issue, we develop a row generation algorithm to solve the LP.

17:00
An extended formulation of the convex recoloring problem on a phylogenetic tree
SPEAKER: Sangho Shim

ABSTRACT. We introduce a strong extended formulation of the convex recoloring problem on a tree, which has an application in analyzing phylogenetic trees. The extended formulation has only a polynomial number of constraints, but dominates the conventional formulation and the exponentially many valid inequalities introduced by Campêlo et al. (Mathematical Programming 156 (2016) 303-330). We show that all valid inequalities introduced by Campêlo et al. can be derived from the extended formulation. We also show that the natural restriction of the extended formulation provides a complete inequality description of the polytope of subtrees of a tree. The solution time using the extended formulation is much smaller than that with the conventional formulation. Moreover, the extended formulation solves all the problem instances attempted by Campêlo et al., as well as larger instances, at the root node of the branch-and-bound tree without branching.

16:00-17:30 Session 13G: Modeling Data Uncertainty
Location: Verbana
16:00
A Moment Matching Approach for Generating Synthetic Data

ABSTRACT. Synthetic data is becoming increasingly important as a collaboration tool. Current methods for the generation of synthetic data have shortcomings that impact the quality of synthetic data when the objective is to maintain statistical properties of the original data. Many of these methods also require assumptions regarding relationships between variables. We propose a new method for fully synthetic data generation using a moment matching approach. Linear and integer mathematical programming models are used to generate fully synthetic data that matches the original data’s moments up to a specified moment order. We demonstrate this methodology using the Framingham Heart Study and fit Cox proportional hazards and logistic regression models; parameters were compared between synthetic and original data. Nonparametric models were trained on synthetic data and tested on the original data. An existing chained-equation synthetic data method was compared to our approach. 100% coverage of confidence intervals was achieved for both the Cox and logistic regression models when up to four moments were matched. Matching up to only the second or third moment order resulted in worse coverage. Coverage for four-moment-matched data was better than for chained-equation synthetic data. Nonparametric models resulted in an accuracy and AUC that did not significantly decrease during testing, regardless of synthetic method. In both training and testing, AUC and accuracy were similar for all synthetic data methods. This method inherently has very low disclosure risk and may be considered a viable alternative to other fully synthetic data generation methods.
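
As a toy illustration of moment matching, the sketch below chooses synthetic values for a single variable so that their first four raw moments match the original sample's; it uses a continuous least-squares surrogate rather than the linear/integer programming formulation of the talk, and all data are simulated.

# Toy univariate moment matching: pick synthetic values whose first four raw
# moments approximate those of the original sample. A continuous surrogate
# for the LP/IP formulation, for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
original = rng.lognormal(mean=4.0, sigma=0.5, size=500)   # stand-in for real data
target = np.array([np.mean(original**r) for r in range(1, 5)])

def moment_gap(syn):
    moments = np.array([np.mean(syn**r) for r in range(1, 5)])
    return np.sum(((moments - target) / target) ** 2)      # relative mismatch

z0 = rng.choice(original, size=200) + rng.normal(0, 1.0, size=200)  # jittered start
res = minimize(moment_gap, z0, method="L-BFGS-B")
synthetic = res.x
print("matched moments:", [round(np.mean(synthetic**r), 1) for r in range(1, 5)])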

16:30
Robust Dynamic Programming Under Contextual Uncertainty About Model Parameters

ABSTRACT. Markov decision processes (MDPs) have been used to design optimal policies that account for uncertainty in many contexts. However, the model parameters in MDPs, including rewards and transition probabilities, are often subject to variation. Any optimal policy obtained using point estimates of the parameters may not be robust to these variations. In this talk, we will present a new Contextual MDP formulation that accounts for uncertainty in model parameters for finite-horizon MDPs. We will discuss the complexity of the problem and properties of the model and resulting optimal policies. We will further describe properties of the MDP that lead to an exact algorithm that exploits convexity of the optimal value function by iteratively generating supporting hyperplanes at each decision epoch using backward induction. We will also discuss a fast approximation algorithm that scales linearly in the number of parameter contexts. We will illustrate the application of these methods to an MDP for primary prevention of myocardial infarction and stroke in patients with type 2 diabetes. The optimal policy defines the optimal time to initiate lipid-lowering agents and antihypertensive medications based on individual risk factors. We will present results for the computation time of the methods we propose, and analyze the performance bounds for the approximation method in this primary prevention setting. We will further compare the optimal policies obtained using these methods to those obtained using nominal MDP solutions based on point estimates of model parameters. Finally, we will conclude with a discussion of opportunities for future research.
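
One simple, illustrative way to compute a policy that hedges against a finite set of parameter contexts is the backward-induction sketch below; the max-min action selection and all problem data are assumptions and do not reproduce the exact or approximation algorithms described in the talk.

# Finite-horizon backward induction over a finite set of parameter contexts:
# at each state and stage, pick the action whose worst-case (over contexts)
# value is largest. Sizes, rewards, and transitions are illustrative.
import numpy as np

rng = np.random.default_rng(5)
S, A, T, K = 4, 2, 10, 3                         # states, actions, horizon, contexts
P = rng.dirichlet(np.ones(S), size=(K, A, S))    # P[k, a, s, :]: transition probs
R = rng.uniform(0, 1, size=(K, A, S))            # R[k, a, s]: immediate rewards

V = np.zeros((K, T + 1, S))                      # context-specific value-to-go
policy = np.zeros((T, S), dtype=int)
for t in range(T - 1, -1, -1):
    # Q[k, a, s]: value of action a in state s at time t under context k.
    Q = R + np.einsum("kasj,kj->kas", P, V[:, t + 1, :])
    policy[t] = Q.min(axis=0).argmax(axis=0)     # max-min action per state
    for k in range(K):                           # evaluate the chosen actions
        V[k, t, :] = Q[k, policy[t], np.arange(S)]
print("robust first-period policy:", policy[0])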

17:00
A Comparison Between the Robust Risk-Aware and Risk-Seeking Managers in R&D Portfolio Management (WITHDRAWN)

ABSTRACT. In this paper, we analyze via computational experiments two mathematical modeling frameworks that reflect different managerial attitudes toward upside risk in the context of R&D portfolio selection. The manager seeks to allocate a development budget between low-risk, low-reward projects, called incremental projects, and high-risk, high-reward projects, called innovational projects. Because of their highly uncertain nature and significant probability of failure, the expected value of the innovational projects is smaller than that of their incremental counterparts, but the long-term financial health of a company necessitates taking risk in order to maintain growth. We study the differences in strategy and portfolio risk profile that arise between a risk-aware manager, who takes upside risk because he has to for the long-term competitive advantage of his company, and a risk-seeking manager, who will take as big a bet as allowed by the model.