ICS2017: 2017 INFORMS COMPUTING SOCIETY CONFERENCE
PROGRAM FOR SUNDAY, JANUARY 15TH

08:30-10:00 Session 3A: Scenario partitioning for solving stochastic programs

Computational methods for solving multistage stochastic programs

Location: Lantana A
08:30
A scalable bounding method for stochastic programming

ABSTRACT. Many dynamic decision problems involving uncertainty can be appropriately modeled as multistage stochastic programs. However, most practical instances are so large and/or complex that it is impossible to solve them on a single computer, especially due to memory limitations. Extending our previous work on two-stage stochastic mixed-integer programs, we consider general multistage stochastic programs and develop a bounding method based on scenario decomposition. This method is broadly applicable, as it does not assume any problem structure, including convexity. Moreover, it naturally fits into a distributed computing environment. Computational experiments with large-scale instances (with up to 100 million scenarios, about 1.5 billion decision variables, 85% of which are binary, and 800 million constraints) demonstrate that the proposed method scales well with problem size and has immense potential to obtain high-quality solutions to practical instances within a reasonable time frame.

09:00
Scenario Set Partition Dual Bounds for Multistage Stochastic Programming: A Hierarchy of Bounds and a Partition Sampling Approach
SPEAKER: Ilke Bakir

ABSTRACT. We propose expected partition (EP) bounds, a hierarchy of bounds based on partitions of the scenario set into subsets of (nearly) equal cardinality. Additionally, using the fact that solving the group subproblems for all subsets in a single partition also yields a dual bound, we establish that the best of even a very small number of such single-partition bounds is likely to be better than the corresponding EP bound. By sampling partitions of the scenario set, we obtain an algorithm that computes confidence intervals for an EP bound while also providing exact dual bounds. The practical value of these ideas is illustrated on benchmark problems, and the approach is compared to Sample Average Approximation.
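
A minimal sketch of the partition-sampling idea in Python, assuming a minimization problem and a user-supplied group_bound(subset) routine that solves the group subproblem for a scenario subset and returns its contribution to the bound (e.g., the probability-weighted optimal value of the group subproblem); all names are illustrative, not taken from the talk:

    import random

    def sampled_partition_bounds(scenarios, group_size, num_partitions, group_bound):
        """Sample partitions of the scenario set into groups of (nearly)
        equal cardinality. For a minimization problem, each full partition
        yields an exact dual (lower) bound: the sum of its group subproblem
        contributions. Averaging the sampled bounds estimates the EP bound;
        the best exact bound is the largest one observed."""
        bounds = []
        for _ in range(num_partitions):
            shuffled = random.sample(scenarios, len(scenarios))
            groups = [shuffled[i:i + group_size]
                      for i in range(0, len(shuffled), group_size)]
            bounds.append(sum(group_bound(g) for g in groups))
        ep_estimate = sum(bounds) / num_partitions   # center of a confidence interval
        best_exact = max(bounds)                     # a valid dual bound in its own right
        return ep_estimate, best_exact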

09:30
Adaptive Scenario Partitions for Stochastic Programs
SPEAKER: Yongjia Song

ABSTRACT. We present a computational study of several strategies to solve stochastic programs with continuous recourse, motivated by the idea of scenario partitions. A partition-based formulation is a relaxation of the original stochastic program, obtained by aggregating variables and constraints according to a scenario partition. Partitions are refined dynamically until an optimal solution is found, with refinements guided by the optimal dual vectors of the problems solved at each stage. The partition idea can also be integrated with regularization techniques such as level decomposition with on-demand accuracy. Numerical experiments on a large set of test problems, including instances with up to one hundred thousand scenarios, show the effectiveness of the proposed approaches.

08:30-10:00 Session 3B: Applications of Learning Methods for Decision Making in Healthcare

Learning Methods in Healthcare

Location: Lantana B
08:30
Length of Stay Outlier Management via Mathematical Modelling and Cluster Analysis
SPEAKER: Rema Padman

ABSTRACT. Length of Stay (LOS) is an important metric of care quality and efficiency in hospitals that has been studied for decades. Longer stays lead to increased costs and higher burdens on patients, caregivers, clinicians, and facilities. Understanding the characteristics of LOS outliers is important for developing actionable steps toward LOS reduction by eliminating the unnecessary variations in treatment that result in higher LOS and costs. In this context, the increasing availability of detailed inpatient data has the potential to enable the development of data-driven approaches that provide novel insights for the management of LOS. This study examines clustering of inpatients using key clinical and demographic attributes to identify LOS outliers and investigates the opportunity to reduce their LOS by comparing their order sequences with those of similar non-outliers in the same cluster. Learning from retrospective data on 353 pediatric inpatients admitted for appendectomy, we develop a mathematical programming model and a two-stage procedure that first identifies a typical cluster containing LOS outliers. Our second-stage analysis compares orders pairwise to determine candidate switches that would make LOS outliers similar to non-outliers. Results indicate that switching orders in homogeneous inpatient sub-populations, within the limits of clinical guidelines, may be a promising decision support strategy for LOS management. These novel data-driven insights can be offered as suggestions for clinicians to apply new evidence-based, clinical guideline-compliant opportunities for LOS reduction through healthcare analytics.

09:00
Physical Activity Pattern Clustering of Physically Inactive Women From The mPED Trial
SPEAKER: Mo Zhou

ABSTRACT. The National Physical Activity Guidelines recommend at least 150 minutes a week of moderate-intensity physical activity. However, approximately half of American adults, particularly women and minorities, do not meet this recommendation. In the mPED trial, we collected baseline physical activity data (e.g., steps, metabolic equivalent of task) from 215 physically inactive women using a triaxial accelerometer. Clustering methods were applied to the baseline physical activity data in order to explore intensity and time-of-day effects in physical activity. Furthermore, we applied clustering methods to clinical and sociodemographic data to identify relationships to physical activity patterns. Results suggest that subjects had a consistent preferred time for physical activity; however, differences in the time-of-day of physical activity were not statistically significant. In addition, increasing age was associated with less physical activity and worse clinical outcomes. We briefly describe how these findings can suggest healthy behaviors and be used to develop a personalized physical activity intervention.

09:30
Classification in the Presence of Anchoring Bias: A Model and an Application to Breast Cancer Diagnosis
SPEAKER: Mehmet Ayvaci

ABSTRACT. We examine classification systems that suffer from input noise resulting from anchoring bias. In classification systems that exhibit such a bias, the value of one input, the anchor, influences the value of another input that is also used for classification. We derive the optimal aggregation of attributes and classification under anchoring bias for a linear classifier. We theoretically analyze the impact of the bias on classifier design and performance, and derive the conditions under which using the anchor attribute for classification is not beneficial. We illustrate the theoretical analysis using real-life data from the breast cancer diagnostic decision context, where the radiologist's interpretation of a mammogram is biased by the patient's risk profile information.

08:30-10:00 Session 3C: Clustering and Social Network Analysis


Location: Plumeria A
08:30
Clustering via independent union of cliques

ABSTRACT. This talk focuses on the maximum independent union of cliques (IUC) problem in a simple graph, which is to find a maximum-size subset of vertices inducing a subgraph where every connected component is a complete graph. The number of cliques in a maximum IUC can be thought of as an estimate of the "expected" number of clusters resulting from an unsupervised clustering, and the cliques themselves as "seed" clusters that can be further expanded to obtain a clustering. We present some properties of IUCs and use clique relaxations in hypergraphs to solve the problem computationally.

08:52
Polynomial Time Instance of the Maximum Weighted Co-2-Plex Problem
SPEAKER: Cynthia Wood

ABSTRACT. The maximum weighted co-2-plex problem (MWC2P) determines a subset of vertices of maximum total weight of a given graph in which each vertex has degree at most one. We present a study of MWC2P for {claw, bull}-free graphs along with two polynomial time algorithms to solve it. The first algorithm transforms the given graph so that the problem can be solved as an instance of the maximum weighted stable set problem via Minty's algorithm. The second algorithm is an extension of Minty's algorithm and solves the problem in the original graph. Numerical results are provided to compare the two algorithms.

09:14
Least Cost Influence Maximization on Trees and Cycles
SPEAKER: Zhang Rui

ABSTRACT. The Least Cost Influence Problem (LCIP) is a fundamental problem concerning influence maximization on social networks. Starting with Kempe, Kleinberg, and Tardos (2003), influence maximization has been widely studied in the literature. It has applications in many areas of economics, business, sociology, and medicine, including viral marketing and epidemiology. In contrast to earlier models, in the LCIP an individual can be partially influenced by the use of monetary inducements (i.e., coupons that reduce the price of a product, instead of receiving the product for free). The use of tailored (i.e., partial) incentives, which allows for differentiated targeting, is more natural in a marketing setting. The LCIP is known to be NP-hard. In this work, motivated by the desire to develop a better understanding of the underlying polyhedral structure of the LCIP, we focus on the LCIP with equal influence among a person's friends and a 100% adoption rate on trees and cycles. We present a tight and compact extended formulation for the LCIP on trees. In other words, we provide an extended linear programming formulation whose projection onto the space of the natural node selection variables is the convex hull of feasible target set vectors. We also project the extended formulation onto the space of the natural (payment) variables, which gives the polytope on trees. Next, building upon the result for trees, we discuss the LCIP on cycles.

09:37
Hierarchical Cluster Analysis Via Network Flow Duality
SPEAKER: Eli Olinick

ABSTRACT. Many popular data clustering techniques lack a rigorous foundation in graph theory and optimization even though they are often based on graph and network models of interaction and/or proximity. We show that a clustering method based on the fundamental graph-theoretic concept of density (i.e., sparse cuts separating dense clusters), implemented via a duality to network flows, can produce more comprehensive and meaningful results in appropriate problem domains. The maximum concurrent flow problem (MCFP) is a network flow problem whose objective is to maximize the minimum throughput, i.e., the smallest ratio of delivered flow to demand over all node pairs. When the optimal throughput saturates only the set of critical edges and all others have slack, the hierarchical MCFP (HMCFP) further maximizes throughput on the slack edges, determining a second throughput level and set of critical edges; iterating further, a series of throughput levels is determined until all edges are critical, yielding a stratification portrayed in classification theory as a dendrogram. The duality between sparsest cuts and the MCFP provides a natural characterization and foundation for the often stated, somewhat vague notion that objects in the same cluster have more affinity (connectivity) to each other while objects in different clusters are less similar (sparsely connected). We propose a new clustering algorithm based on the HMCFP and its duality relation to the sequence of sparsest cuts, and discuss theoretical properties that make it more accurate and often more robust than many popular algorithms in the literature.
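
For reference, a path-based LP statement of the MCFP consistent with the description above (our notation: demands d_k for node pairs k, capacities u_e, and paths P_k connecting pair k):

    \begin{align*}
    \max_{t,\,f}\ \ & t \\
    \text{s.t.}\ \ & \sum_{p \in P_k} f_p \;\ge\; t\, d_k && \forall k, \\
    & \sum_{k} \sum_{p \in P_k :\, e \in p} f_p \;\le\; u_e && \forall e, \\
    & f \ge 0 .
    \end{align*}

Edges whose capacity constraints are tight at optimality are the critical edges, and LP duality ties the optimal throughput to a sparsest cut; the HMCFP re-solves this program restricted to the slack edges at each level.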

08:30-10:00 Session 3D: Computational methods for healthcare decision-making
Location: Plumeria B
08:30
Deterministic Approximation Algorithm For Population-Scale Personal Dietary Management

ABSTRACT. Diet is an important component of wellness and health; however, various fiscal, geographic, and time constraints can make it difficult for families to healthfully manage dietary decisions and purchases. Our work presents a novel mixed-integer formulation for personal management of dietary decisions, and this formulation can be generalized to a new class of Integer Programs with knapsack-like constraints. We apply the idea of pessimistic estimators and randomized rounding to design a deterministic approximation algorithm for these problems, which enables scaling the use of our personal dietary management formulation to the broader population.
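
A toy illustration of the randomized rounding step in Python (the pessimistic-estimator derandomization developed in this work replaces the random draw with a deterministic, coordinate-by-coordinate choice; all names are ours):

    import numpy as np

    def randomized_rounding(x_frac, rng=None):
        """Round a fractional solution x_frac in [0,1]^n to 0/1 by
        independent coin flips. Pessimistic estimators derandomize this
        step by fixing one coordinate at a time while keeping an upper
        bound on the failure probability non-increasing."""
        rng = rng or np.random.default_rng(0)
        return (rng.random(len(x_frac)) < np.asarray(x_frac)).astype(int)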

09:00
Coordinated Patient Appointment Scheduling for a Multi-station Healthcare Network

ABSTRACT. Current healthcare reforms advocate significant improvements in the coordination of services around a patient-centric model. With most patient care delivered through outpatient services, the need to coordinate scheduling between different services in a hospital or co-located clinics becomes central to successful reform. Currently, outpatient services require independent appointment decisions, and the coordination is left to the patient. This approach causes several inefficiencies, including an increase in missed appointments and unacceptable access delays. Our paper proposes a multi-station network model that carefully strikes a balance between assumptions that allow tractability and assumptions that disallow real-world adoption. To allow for real-world applicability, we consider sequential appointment scheduling in a network of stations with stochastic service times, no-show possibilities, and overbooking. We argue and present evidence that a myopic scheduling policy is close to optimal and is computationally feasible. However, since it is not simple enough for practical implementation, we explore a sequence of approximations and find one that offers a tremendous computational advantage. We also provide several managerial insights and discuss how network structures affect complexity.

09:30
Online Decision-Making with High-Dimensional Covariates
SPEAKER: Hamsa Bastani

ABSTRACT. Big data has enabled decision-makers to tailor decisions at the individual level in a variety of domains such as personalized medicine and online advertising. This involves learning a model of decision rewards conditional on individual-specific covariates. In many practical settings, these covariates are high-dimensional; however, typically only a small subset of the observed features are predictive of a decision's success. We formulate this problem as a multi-armed bandit with high-dimensional covariates, and present a new efficient bandit algorithm based on the LASSO estimator. Our regret analysis establishes that our algorithm achieves near-optimal performance in comparison to an oracle that knows all the problem parameters. The key step in our analysis is proving a new oracle inequality that guarantees the convergence of the LASSO estimator despite the non-i.i.d. data induced by the bandit policy. Furthermore, we illustrate the practical relevance of our algorithm by evaluating it on a real-world clinical problem of warfarin dosing. A patient's optimal warfarin dosage depends on the patient's genetic profile and medical records; an incorrect initial dosage may result in adverse consequences such as stroke or bleeding. We show that our algorithm outperforms existing bandit methods, as well as physicians, in correctly dosing a majority of patients.
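
A schematic of the LASSO bandit structure in Python with scikit-learn; the forced-sampling schedule and the tail bounds that make the actual algorithm near-optimal are omitted, so treat this as an illustration of the general structure rather than the authors' method:

    import numpy as np
    from sklearn.linear_model import Lasso

    class LassoBanditSketch:
        """One sparse linear reward model per arm, fit by the LASSO."""

        def __init__(self, num_arms, alpha=0.1):
            self.models = [Lasso(alpha=alpha) for _ in range(num_arms)]
            self.data = [([], []) for _ in range(num_arms)]  # (X, y) per arm

        def choose(self, x):
            # Try any arm with too little data first (crude exploration);
            # otherwise pick the arm with the highest estimated reward.
            for arm, (_, y) in enumerate(self.data):
                if len(y) < 2:
                    return arm
            estimates = [m.predict(x.reshape(1, -1))[0] for m in self.models]
            return int(np.argmax(estimates))

        def update(self, arm, x, reward):
            X, y = self.data[arm]
            X.append(x)
            y.append(reward)
            self.models[arm].fit(np.array(X), np.array(y))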

08:30-10:00 Session 3E: Networks and Healthcare
Location: Camellia A
08:30
Dynamic Scheduling of Home Health Care Patients

ABSTRACT. Home health care aims at providing personalized medical care and social support to patients within their own home. It is a fast-growing industry in North America and serves over 12 million patients per year. Patients in need of care are assigned to a home care professional (HP), who travels to their home in order to serve them. An HP must have the requisite skills to provide aid, and attends to an individual for the entirety of the service (i.e., a continuity-of-care requirement). We propose a dynamic scheduling framework to assist in the assignment of patients to HPs. The problem is formulated as a Markov Decision Process that maximizes the health care agency's discounted profit over an infinite horizon. To overcome the "curse of dimensionality", we define a probabilistic static-assignment policy that is used as the basis for a state-dependent policy improvement heuristic. Under certain regularity conditions, we show that a polynomial-time algorithm provides a bound on the optimal profit. This bound can be iteratively tightened by a logic-based Benders technique that converges to the optimal solution. Several extensions generalize our results to the cases where patients return for service, need periodic service (e.g., every other day), or have multiple HPs assigned to them. Using a dataset from a home care agency operating in southern Ontario, Canada, we show how the model parameters can be estimated using several machine learning techniques and demonstrate how our proposed assignment policies can improve current practice.

09:00
Identification of Essential Proteins Using Induced Stars in Protein-Protein Interaction Networks

ABSTRACT. In this talk, we discuss a novel centrality metric (referred to as star centrality) that incorporates information from the closed neighborhood of a node, rather than strictly the node itself, when calculating its topological importance. More specifically, we focus on degree centrality and show that in complex protein-protein interaction networks it is a naive metric that can lead to misclassifying protein importance. For the extension of degree centrality to stars, we derive its computational complexity, provide a mathematical formulation, and propose two approximation algorithms. We demonstrate the success of the new metric in protein-protein interaction networks when predicting protein essentiality in several organisms, including the well-studied Saccharomyces cerevisiae, Helicobacter pylori, and Homo sapiens, where star centrality is significantly better at detecting essential proteins than nodal centrality metrics.

09:30
Optimization via Topological Order in the Presence of Acyclic Constraints

ABSTRACT. We propose mixed integer programming (MIP) models and iterative algorithms based on topological orders to solve optimization problems with acyclic constraints on a directed graph. The proposed MIP models have significantly fewer variables than popular MIP models based on cycle elimination constraints. The proposed iterative algorithms use gradient descent and iterative reordering approaches, respectively, to search over topological orders. A computational experiment is presented for an optimization problem that minimizes the sum of squared errors of regression models over a feature network, with an application to gene network inference in bioinformatics.
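
As a hedged illustration of the topological-order idea (not necessarily the authors' exact model), acyclicity of selected arcs x_{ij} on n nodes can be enforced with order variables o_i and big-M constraints:

    \begin{align*}
    & o_j \;\ge\; o_i + 1 - n\,(1 - x_{ij}) && \forall (i,j) \in A, \\
    & 0 \le o_i \le n - 1, \qquad x_{ij} \in \{0,1\},
    \end{align*}

so every feasible selection admits a topological order, using O(n) order variables in place of exponentially many cycle elimination constraints.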

08:30-10:00 Session 3F: Integer Programming and Branch and Bound
Location: Camellia B
08:30
Operating Batteries in the Power Grid via Optimization

ABSTRACT. This talk will describe ongoing work on solving optimization problems designed to address key questions centered on "batteries" on the power grid, such as location, sizing, and the policies used to operate batteries in response to errors in renewable generation forecasts. A key concern is how to model "attribution" of battery response to errors in the forecasted output of specific renewable sites. We will show our proposed optimization models designed to address this concern, and the techniques that, so far, have allowed us to rapidly solve problems on systems with a few thousand buses. However, much remains to be done, and we will explore a number of open technical issues as well.

08:52
Integer programming for the optimal design of virtual backbones

ABSTRACT. Two nodes of a wireless network may not be able to communicate with each other directly, perhaps due to obstacles or insufficient signal strength. However, communications could possibly be passed through relay nodes until the message reaches its destination. Often, one designates a (preferably small) subset of the nodes to pass along these transmissions, creating what is called a virtual backbone for the wireless network, which can be represented by a connected dominating set (CDS) of an associated graph. In this talk, we discuss integer programming techniques for constructing minimum CDSs under particular survivability or latency constraints.
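
For orientation, a basic integer program for the minimum CDS, before survivability or latency side constraints (connectivity stated here via vertex separators, one common choice; N[v] denotes the closed neighborhood of v):

    \begin{align*}
    \min\ \ & \sum_{v \in V} x_v \\
    \text{s.t.}\ \ & \sum_{u \in N[v]} x_u \;\ge\; 1 && \forall v \in V \quad \text{(domination)} \\
    & \sum_{v \in C} x_v \;\ge\; x_a + x_b - 1 && \forall \text{ nonadjacent } a, b,\ \forall a,b\text{-separators } C \quad \text{(connectivity)} \\
    & x_v \in \{0,1\} .
    \end{align*}

The exponentially many separator inequalities are typically added on the fly within branch-and-cut.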

09:14
Fine-grained distributed-memory branch-and-bound tree search
SPEAKER: Lluis Munguia

ABSTRACT. In this talk, we present and compare two frameworks for distributed-memory branch-and-bound (B&B) tree search. Both are implemented as extensions of PIPS-SBB, itself a parallel distributed-memory stochastic MIP solver built on the distributed-memory stochastic LP solver PIPS-S. As a result, both frameworks include two levels of distributed-memory parallelism: one for solving LP relaxations and one for B&B tree search. The first framework relies on UG, part of the SCIP project, and implements an external coarse-grained parallelization of PIPS-SBB. The second framework, Parallel PIPS-SBB, implements a novel internal fine-grained parallelization of PIPS-SBB. We present computational results that evaluate the effectiveness of both frameworks.

09:37
The Slim Branch and Price method with an application to the Hamiltonian p-median Problem
SPEAKER: Ahmed Marzouk

ABSTRACT. We present the Slim Branch and Price (SBP) method, an improvement over traditional Branch and Price (B&P) in the case of binary master problems. The proposed SBP method is an exact optimization method applicable to a large class of combinatorial optimization problems that can be solved by B&P-type algorithms and that have binary master problems with the property that the sum of the variables in any feasible solution is fixed. This is an important class, as it includes several classical and fundamental problems. The main advantage of SBP is that the branching tree has only one main branch with several leaves. We illustrate the computational advantage of SBP over B&P on the Hamiltonian p-median problem. In particular, within a one-hour time limit, SBP can solve to optimality instances with up to 200 nodes, whereas B&P can solve to optimality instances with up to 127 nodes.

10:00-10:30 Coffee Break
10:30-11:30 Session 4: Statistics Meets Optimization: Some New Phenomena at the Interface

Plenary Talk by Prof. Martin Wainwright.

Large-scale data sets are now ubiquitous throughout engineering and science, and they present a number of interesting challenges at the interface between statistics and optimization. This talk provides two vignettes in this area. The first concerns the use of randomized dimensionality reduction techniques, also known as sketching, for obtaining fast but approximate solutions to large-scale convex programs. We describe a version of sketching that adapts to the intrinsic dimension of the solution space, and show how it leads to a randomized version of the Newton algorithm with strong guarantees. The second vignette focuses on low-rank matrix estimation using nonconvex programs, for which it is nonetheless possible to obtain rigorous guarantees on solution quality.

Location: Primrose C
13:00-14:00 Session 5: Citibike: Continuous, Discrete, and Simulation Optimization

Tutorial by Prof. Shane Henderson

Cornell’s School of Operations Research and Information Engineering has been working with the bike-sharing company Citibike since Citibike began operations in New York City in 2013. We provide data analysis and advice about strategy and operations, not just to Citibike, but also to its parent company Motivate that operates several bike-sharing programs around the USA. I’ll describe some of our modeling work with Citibike, focusing on a suite of models (not just simulation models) that informs the decision about where to position both racks and bikes around the approximately 500 stations in NYC. Our white-box simulation-optimization algorithms can be viewed as heuristics, but they push the state of the art in simulation optimization, and the central principle applies to many other settings.

Location: Primrose C
14:00-15:30 Session 6A: System Models and Image Detection
Location: Lantana A
14:00
Pattern Recognition Inference Engine for IED Detection
SPEAKER: Jorge Buenfil

ABSTRACT. Application of learning capabilities along with adaptive methods enabled a pattern recognition inference engine to be applied to the detection of Improvised Explosive Devices (IEDs), with a man-in-the-loop for confirming machine predictions and selecting countermeasures. The intended application of this system is to protect military installations from threats in their immediate vicinity with minimal operator workload demands. The system, called ACE for "Army Counter-IED Enhanced," integrates four kinds of data analysis: image processing, behavioral analysis, pattern recognition, and predictive algorithms. It produces a synthesized representation of the area under surveillance that highlights potential threats to the human operator, plus event alerts and alarms that trigger automatic non-lethal responses to contain a threat while security forces proceed to confirm and neutralize it. Sensor data fusion is achieved with a self-reconfigurable system dynamics model, which uses statistical analysis of prior data to improve its predictions. Preliminary analyses and simulations indicate that this system can accurately determine the presence of improvised explosive devices with a low probability of false alarms, a high probability of detection, and the ability to automatically improve the sensor network's accuracy over time using supervised learning.

14:23
Multi-Objective Optimization for a Counter-Trafficking System-of-Systems Architecture
SPEAKER: George Muller

ABSTRACT. Modern engineered systems are becoming increasingly complex, driven in part by increased use of systems-of-systems and network-centric concepts to improve system performance. Systems-of-systems are composed of several constituent systems, which are often independently developed and managed. By integrating these constituent systems, stakeholders can achieve new operational capabilities. This also presents new challenges resulting from growing uncertainty and increased complexity.

A key challenge for the system-of-systems approach is estimating overall system performance in uncertain future environments. Uncertainties in this systems engineering challenge often emerge from interactions between the system and its environment. In these conditions, constituent systems adapt or obsolesce based on changes in the operating environment. These interactions induce changes in both the system and its environment in response to each other, a behavior known as coevolution. Understanding how these changes influence system performance enables decision makers to select robust system architectures that meet performance goals given uncertainty about their future performance.

This work presents an approach to optimize the performance of a counter-drug trafficking system-of-systems based on constituent system performance, a stakeholder value model and engineering constraints. The results inform how the system-of-system architecture can best be designed to maximize overall performance while managing constituent system constraints, such as availability, sensor performance and cost. Finally, a robust optimization formulation is provided to guide future research.

14:46
Hypergraphs for Representing Cyber-Physical System Complexities and Dependencies

ABSTRACT. Modern critical infrastructure systems are cyber-physical systems (CPS) that contain cyber and physical components within a unified operational environment. This leads to efficiency in operations but may also create unknown vulnerabilities that did not exist in exclusively physical or cyber settings. Network/graph-theoretic approaches are helpful in characterizing such component relationships. Due to the complexity of these systems, component relationships may go beyond pairwise dependencies (where an edge connects two vertices), and a system operator or owner may not even be aware of how or which components are connected together. This talk will focus on how hypergraphs (generalizations of graphs whose edges may connect any number of vertices) can represent CPS complexities and dependencies. Such representations may yield enhanced criticality and vulnerability analysis and enable decision makers to prioritize among system elements.

15:08
Integrated Epidemiological and Healthcare System Modeling
SPEAKER: Brent Daniel

ABSTRACT. An integrated framework for modeling the epidemiological spread of disease in the context of healthcare treatments and resources will be described. Though the framework is applicable to a broad range of diseases (including directly transmissible and vector-borne), we will focus on a couple of specific examples. The baseline spread of an outbreak will be presented along with the potential impact of the outbreak on the public health workforce. We will then explore the potential impact of treatment options on case outcomes, the capital and just-in-time (JIT) resource requirements of those treatments, and the impact of resource constraints or interruptions on the outbreak's evolution.

14:00-15:30 Session 6B: New Stochastic Models in Healthcare

Recent developments in stochastic models for healthcare. Presentations will discuss issues of consumer choice, intervention design, and health policy analysis.

Location: Lantana B
14:00
Online Personalized Resource Allocation with Customer Choice

ABSTRACT. We introduce a general model of resource allocation with customer choice. In this model, there are multiple resources that are available over a finite horizon. The resources are non-replenishable and perishable. Each unit of a resource can be instantly made into one of several products. There are multiple customer types arriving randomly over time. An assortment of products is offered to each arriving customer, depending on the type of the customer, the time of arrival, and the remaining inventory. From this assortment, the customer selects a product according to a general choice model. The selection generates a product-dependent and customer-type-dependent reward. The objective of the system is to maximize the total expected reward earned over the horizon. The above problem has a number of applications, including personalized assortment optimization, revenue management of parallel flights, and web- and mobile-based appointment scheduling. We derive online algorithms that are asymptotically optimal and achieve the best constant relative performance guarantees for this class of problems.

14:22
Robust Wait Time Estimation in Resource Allocation Systems with an Application to Kidney Allocation

ABSTRACT. In this paper we study systems that allocate different types of scarce resources to heterogeneous allocatees based on pre-determined priority rules, e.g., the U.S. deceased-donor kidney allocation system or the public housing program. We tackle the problem of estimating the wait time of an allocatee who possesses incomplete system information with regard to, for example, their relative priority, other allocatees' preferences, and resource availability. We model the system as a multiclass, multiserver queuing system that is potentially unstable or in a transient regime. We propose a novel robust optimization solution methodology that builds on the assignment problem. For first-come, first-served systems, our approach yields a mixed-integer programming formulation. For the important case where there is a hierarchy in the resource types, we strengthen our formulation through a drastic variable reduction and also propose a highly scalable heuristic involving only the solution of a convex optimization problem (usually a second-order cone problem). We back the heuristic with an approximation guarantee that becomes tighter for larger problem sizes. We illustrate the generalizability of our approach by studying systems that operate under different priority rules, such as class priority. We conduct a wide range of numerical studies demonstrating that our approach outperforms simulation. We showcase how our methodology can be applied to assist patients on the U.S. deceased-donor kidney waitlist, calibrating our model with detailed historical data to estimate patients' wait times based on their kidney quality preferences, blood type, location, and current rank in the waitlist.

14:44
Algorithmic Weight Loss Program Design with Behavioral Analytics
SPEAKER: Yonatan Mintz

ABSTRACT. When designing behavioral interventions, it is crucial to take into account the inherent tradeoff between quality of care and intervention cost. In this paper, we develop an algorithm that uses patient data to resolve this tradeoff effectively in the context of weight loss interventions. We expand on a previously developed utility maximization model for patient behavior by utilizing integer programming and Bayesian prediction to evaluate the efficacy of various weight loss interventions and design personalized treatments for each patient. Since the problem of designing personalized treatments becomes a bilevel mixed integer program (BMIP) under our model, a class of problems for which few effective algorithms exist, we present an asymptotically optimal approach for solving this problem using the statistical properties of our behavioral model. We then present an expansive algorithmic framework that combines several candidate personalized treatments into a weight loss program. Finally, we present simulation results showing that our method maintains efficacy while potentially reducing the associated person-hours and cost of the intervention compared to other adaptive methods.

15:07
Data Uncertainty in Markov Chains: Application to Cost-effectiveness Analyses of Medical Innovations
SPEAKER: Joel Goh

ABSTRACT. Cost-effectiveness studies of medical innovations often suffer from data inadequacy. When Markov chains are used as a modeling framework for such studies, this data inadequacy can manifest itself as imprecision in the elements of the transition matrix. In this paper, we study how to compute maximal and minimal values for the discounted value of the chain (with respect to a vector of state-wise costs or rewards) as these uncertain transition parameters jointly vary within a given uncertainty set. We show that these problems are computationally tractable if the uncertainty set has a row-wise structure. Conversely, we prove that the row-wise structure is necessary for tractability. Without it, the problems become computationally intractable (NP-hard). We apply our model to assess the cost-effectiveness of fecal immunochemical testing (FIT), a new screening method for colorectal cancer. Our results show that despite the large uncertainty in FIT's performance, it is cost-effective relative to the prevailing screening method of colonoscopy.
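
A toy sketch in Python of why the row-wise structure helps: with a separate uncertainty set per row, the worst-case discounted value satisfies a contraction and can be computed by a value-iteration-style loop in which the adversary optimizes each row independently (interval uncertainty is an illustrative choice here, not necessarily the paper's model):

    import numpy as np

    def worst_case_discounted_cost(P_lo, P_hi, cost, discount, tol=1e-8):
        """Maximal discounted cost of a Markov chain whose transition
        matrix satisfies P_lo <= P <= P_hi row-wise (rows summing to 1)."""
        v = np.zeros(len(cost))
        while True:
            v_new = np.empty_like(v)
            worst_first = np.argsort(-v)      # costliest successors first
            for i in range(len(cost)):
                # The adversary pushes as much probability mass as the
                # bounds allow onto the worst successors of row i,
                # independently of every other row.
                p = P_lo[i].copy()
                budget = 1.0 - p.sum()
                for j in worst_first:
                    extra = min(P_hi[i, j] - p[j], budget)
                    p[j] += extra
                    budget -= extra
                v_new[i] = cost[i] + discount * p @ v
            if np.max(np.abs(v_new - v)) < tol:
                return v_new
            v = v_new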

14:00-15:30 Session 6C: New Paradigms for Cut Generation

This session provides an overview of recent work on cutting planes, ranging from theoretical results to computational experimentation.

Location: Plumeria A
14:00
Toward computer-assisted discovery and automated proofs of cutting plane theorems
SPEAKER: Yuan Zhou

ABSTRACT. Inspired by the breakthroughs of the polyhedral method for combinatorial optimization in the 1980s, generations of researchers have studied the facet structure of convex hulls to develop strong cutting planes. We ask how much of this process can be automated: In particular, can we use algorithms to discover and prove theorems about cutting planes? We focus on general integer and mixed integer programming, rather than combinatorial optimization, and use the framework of cut-generating functions (Conforti et al. 2013).

Our approach is pragmatic. Our theorems and proofs come from a metaprogramming trick, applied to a grid-free implementation of the Basu-Hildebrand-Koeppe (2014) extremality test for cut-generating functions in the Gomory-Johnson (1972) model; followed by practical computations with semialgebraic cell complexes. The correctness of our proofs depends on that of the underlying implementation.

We report on the early successes of our software. We have verified various theorems from the literature, found corrections to several published theorems, and discovered new families of cut generating functions and corresponding theorems with automatic proofs. The plan is to produce such new theorems in quantity.

14:22
S-Free Sets and Cuts for Polynomial Optimization
SPEAKER: Chen Chen

ABSTRACT. We adopt a geometric approach based on S-free sets, or convex forbidden zones; for polynomial optimization we use the specialized term 'outer-product-free'. We develop an intersection cut for generic problems with closed sets, provided a violation distance oracle is available. Furthermore, we provide some insight into the nature of maximal outer-product-free sets and present two classes of such sets. The intersection cuts associated with these two classes can be strengthened in the case of intersections at infinity. All our cuts can be produced in polynomial time for polynomial optimization problems. We also analyze the convergence of applying intersection cuts to generic problems.

14:45
How to choose what you lift
SPEAKER: Joseph Paat

ABSTRACT. Minimal cut-generating pairs can be built from S-free sets with the 'covering property' by applying a lifting procedure. In our work, we identify situations when minimal cut-generating pairs can be built from S-free sets without the covering property. As a tool for examining this question, we introduce the concept of a fixing region. This work was done in collaboration with Santanu Dey from the Georgia Institute of Technology and Amitabh Basu from the Johns Hopkins University.

15:08
Experiments with some cuts from non-split disjunctions
SPEAKER: Thiago Serra

ABSTRACT. We consider the impact of some valid inequalities that cannot be derived from the simplex tableau by the most commonly used methods. Intersection cuts and Gomory cuts are both equivalent to cuts from split disjunctions, which can be obtained by lift-and-project from regular bases of the Cut Generating Linear Program (CGLP). Regular CGLP bases have a direct correspondence to bases of the lower-dimensional linear relaxation of the (Mixed-)Integer Linear Program (MILP) being solved, hence making it considerably cheaper to generate cuts corresponding to such bases by methods other than explicit CGLP formulation. However, such bases represent only a tiny fraction of CGLP bases in general. That raises some questions: (1) how often do irregular CGLP bases arise when using lift-and-project on non-split disjunctions; (2) how often can we find a regular CGLP basis yielding the same cut that was obtained from an irregular one; and (3) when there is no correspondence, how much better are the cuts that come only from irregular bases compared to those that have at least one regular CGLP basis? We answer these questions through computational experiments on generalizations of the split disjunction with more than two terms.

14:00-15:30 Session 6D: Optimal Resource Allocation for Epidemics Management

This session includes talks regarding stochastic and non-linear optimization approaches for resource allocation problems in epidemics.

Location: Plumeria B
14:00
Agent-Based Mediation Modeling of a Controlled Trial to Reduce the Transmission of Multidrug-Resistant Organisms
SPEAKER: Sean Barnes

ABSTRACT. In 2011-2012, the University of Maryland School of Medicine led a 20-site randomized controlled trial to assess the benefits of universal gloves and gowns in reducing the transmission of multidrug-resistant organisms (MDROs) in acute-care hospitals. The use of universal gloves and gowns resulted in a statistically significant decrease in the acquisition rate for methicillin-resistant Staphylococcus aureus (MRSA), but the effect of the intervention, relative to secondary effects such as the hand hygiene compliance of healthcare workers and the visit rates to patients, was unclear. We develop an agent-based model and, after calibrating the model across all study sites, use it to estimate the reduction in MDRO transmission directly attributable to this intervention.

14:30
A Nonlinear Programming Framework for Estimating Spatial Coupling in Disease Transmission
SPEAKER: Todd Zhen

ABSTRACT. This work addresses the computational challenges associated with the estimation of spatially-coupled dynamic infectious disease models from historical case data. Computational models have become essential for understanding epidemiological patterns and developing the evidence base for decision-making, but they require accurate estimates of the parameters that enter into them. Reliable estimation of spatial coupling parameters in these models requires the efficient solution of extremely large-scale optimization problems that include the local dynamics of individuals coupled between regional subpopulations.

In this work, we present a computational modeling framework that is flexible and readily scalable to large problem sizes (e.g., a coupled model that includes every major city in a nation). In this framework, the local dynamics of each city are modeled with classic SIR models (discrete or continuous time), while coupling between cities is captured through a variety of city-to-city transport models. Parameters are estimated using nonlinear programming with a statistical, hazard-based model that accounts for stochastic fadeout and reintroduction. Solution of this large-scale spatio-temporal estimation problem is enabled through parallel decomposition strategies implemented within the Pyomo modeling framework. Both problem-level and internal decomposition strategies are compared for computational efficiency and scalability.
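
A minimal discrete-time sketch in Python of the coupled-SIR structure described above (a simple linear coupling matrix stands in for the transport models; parameter names are ours):

    import numpy as np

    def coupled_sir_step(S, I, R, beta, gamma, W):
        """One discrete-time step of SIR dynamics over several cities.
        S, I, R: per-city susceptible/infected/recovered counts.
        W: city-to-city coupling matrix; W[i, j] weights the influence
        of city j's infecteds on city i's force of infection (these
        coupling entries are among the parameters to be estimated)."""
        N = S + I + R
        force = beta * (W @ (I / N))          # spatially coupled force of infection
        new_inf = force * S
        new_rec = gamma * I
        return S - new_inf, I + new_inf - new_rec, R + new_rec

In the estimation problem, these dynamics become constraints of a nonlinear program whose unknowns include beta, gamma, and the entries of W.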

15:00
Multi-stage Stochastic Programming for the Control and Surveillance of an Agricultural Pest
SPEAKER: Eyyub Kibis

ABSTRACT. The emerald ash borer (EAB), a pest of ash trees that is native to Asia, has become a threat to North America. More than 20 million ash trees have been killed since the beginning of the infestation, and thousands more have been removed to slow its impact. According to the USDA, EAB infestation poses a risk of a $60 billion loss to the U.S. economy. It is forecasted that the infestation has the potential to spread over all of North America by 2030, which could result in the killing of hundreds of millions of ash trees.

In this study, our objective is to maximize the net benefits of the ash trees on a given landscape by applying surveillance to the ash population, followed by treatment or removal of trees based on their infestation level. Specifically, we propose a new multistage stochastic programming model, which allows us to consider all possible scenarios for surveillance, treatment, and removal decisions over a planning horizon to control the EAB invasion. Due to the model's complexity, we present a decomposition algorithm and problem-specific cutting planes to facilitate solution. Results provide insights into surveillance and control policies, along with an optimal strategy to minimize EAB infestation under a limited budget allocation.

14:00-15:30 Session 6E: Parallel Progressive-Hedging-Related Algorithms


Location: Camellia A
14:00
Extended parallelization of stochastic programming algorithms
SPEAKER: John Siirola

ABSTRACT. One of the key drivers of stochastic programming decomposition algorithms is the observation that stochastic programs are sparse, with "nearly" block diagonal structure. Both scenario-based (Progressive Hedging, Lagrangian Decomposition) and stage-based (Benders Decomposition) decomposition strategies lead to a set of independent sub-problems equal in number to the original number of scenarios (S), which can be solved in parallel. In this talk we present approaches and implementations that can leverage larger-scale parallel environments extending beyond standard "S-way" parallelism. We will focus on the Progressive Hedging algorithm and show how solving additional sub-problems can provide supplemental information that can be used to monitor and accelerate the convergence of the main Progressive Hedging algorithm.
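
For orientation, the scenario-level skeleton of a PH iteration in Python (a bare-bones sketch with scalar first-stage decisions; solve_subproblem stands in for any per-scenario solver and must handle x_bar=None on the first pass; this is exactly the step where both the standard S-way parallelism and the extra sub-problems discussed in this talk come in):

    def progressive_hedging(scenarios, prob, solve_subproblem, rho,
                            tol=1e-6, max_iter=100):
        """Bare-bones PH loop. Each scenario subproblem is independent,
        so the x-update can be farmed out to parallel workers; w and
        x_bar implement the coordination between them."""
        w = {s: 0.0 for s in scenarios}   # one weight per first-stage variable in practice
        x_bar = None
        for _ in range(max_iter):
            # Independent subproblems: the natural S-way parallelism.
            x = {s: solve_subproblem(s, w[s], x_bar, rho) for s in scenarios}
            x_bar = sum(prob[s] * x[s] for s in scenarios)
            for s in scenarios:
                w[s] += rho * (x[s] - x_bar)
            if max(abs(x[s] - x_bar) for s in scenarios) < tol:
                break
        return x_bar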

14:22
Scenario-Based Decomposition for Parallel Solution of the Contingency-Constrained ACOPF

ABSTRACT. We present a nonlinear stochastic programming formulation for a large-scale contingency-constrained optimal power flow problem. Using a rectangular IV formulation to model AC power flow in the transmission network, we construct a nonlinear, multi-scenario optimization formulation where each scenario considers nominal operation followed by the failure of an individual transmission element. Given the number of potential failures in the network, these problems are very large, yet they need to be solved rapidly. We demonstrate that this multi-scenario problem can be solved quickly using a parallel decomposition approach based on progressive hedging and nonlinear interior-point methods. Parallel and serial timing results are shown for test cases from Matpower, a MATLAB-based framework for power flow analysis.

14:45
Parallel Branch-and-Bound Based on PH for Stochastic MIPs

ABSTRACT. Progressive hedging (PH), though an effective heuristic for stochastic mixed integer programs (MIPs), is not guaranteed to converge in the integer case. Here, we describe BBPH, a branch-and-bound algorithm that uses PH within each node and that, given enough time, will always converge to an optimal solution. In addition to providing a theoretically convergent wrapper for PH applied to MIPs, we give computational results demonstrating that, for some difficult problem instances, branch and bound can find improved solutions within a few branches.

15:08
An Asynchronous Parallel Stochastic Programming Algorithm Resembling Progressive Hedging

ABSTRACT. Using a class of monotone operator splitting algorithms recently proposed by Combettes and Eckstein, we derive a stochastic convex programming decomposition algorithm that resembles progressive hedging but can solve subproblems in an asynchronous manner. Only a subset of scenario subproblems need be solved between coordination steps, and subproblem solves may be based on boundedly outdated information. We discuss an initial "master-slave" implementation of this method within the PySP stochastic programming system.

14:00-15:30 Session 6F: Uncertainty modeling and optimization methods for healthcare decisions
Location: Camellia B
14:00
A Data Driven Approach for Assessing and Improving Patient Experience of Care
SPEAKER: Eduardo Perez

ABSTRACT. This research proposes a new framework to assess the performance of hospitals across multiple domains of patients' experiences. We examine whether key systemic characteristics of hospitals are associated with a better experience for patients. In particular, we investigate the effects of nurses' and hospitalists' activities on patients' ratings of their care. A case study is presented that considers data from the intensive care units of multiple hospitals in Central Texas.

14:30
Quantifying Uncertainty of Healthcare Access Measures Derived by Optimization
SPEAKER: Ilbin Lee

ABSTRACT. Spatial access to healthcare services refers to availability (provider-to-patient ratio) and accessibility (travel distance). It is a manifestation of the dynamics between a service provider network and the needs for the service over a geographic area. Recently, linear programs (LPs) have been used to derive a matching between supply and need, where access measures are formalized as linear functions of the LP solution. The LP model has uncertainty in its input parameters, which propagates into the resulting access measures. Quantifying the uncertainty of access measures is critical for making inferences about systematic disparities in service access and for identifying potential interventions to improve it. We propose a new computational framework for quantifying the uncertainty of access measures derived by optimization models. The overall approach is to solve the LP for sampled input parameters and estimate an empirical distribution of the access measure from the solutions. Solving the LP separately for each sampled parameter (the brute-force approach) easily becomes computationally intractable, so we propose a computationally efficient algorithm for solving a linear program with varying input parameters. Experimental results show that our approach can be more efficient and scale better than the brute-force approach. We also demonstrate how the quantified uncertainty of an access measure can be used to inform decision-making. Although this talk focuses on the access measure application, our framework for quantifying the uncertainty of linear program solutions and the batch solution method we suggest are general and applicable to any application using LPs.
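
The brute-force baseline is easy to state in Python (sample_params, solve_lp, and measure are stand-ins; the batch method proposed in the talk replaces the repeated solves in this loop with something more efficient):

    import numpy as np

    def access_measure_distribution(sample_params, solve_lp, measure, num_samples):
        """Brute-force uncertainty quantification for an LP-derived measure:
        draw LP inputs, solve, and record the access measure, yielding an
        empirical distribution from which percentiles can be read off."""
        values = []
        for _ in range(num_samples):
            params = sample_params()          # one realization of the uncertain inputs
            x = solve_lp(params)              # the expensive step the batch method targets
            values.append(measure(x))
        return np.array(values)               # e.g., np.percentile(values, [2.5, 97.5])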

15:00
Operating-Room Staffing and Online Scheduling

ABSTRACT. In this paper, we consider two problems faced by an Operating Room (OR) manager: (1) how to book surgery scheduling requests that arrive one by one, and (2) how many ORs to plan to staff on a regular basis given that surgical cases are booked one at a time. The former decision is made continuously throughout a day, whereas the latter is revisited periodically, e.g., quarterly or annually, and determines the cost of regular staff. Past work on these problems has used two approaches: (1) full knowledge, where it is assumed that all cases that need to be scheduled on a given day, or their distributions, are known completely; and (2) an adversarial setting, where absolutely no knowledge of future arrivals is assumed. Neither setting is realistic, as there is usually some partial information available. In this paper, we use a robust optimization approach to address this gap in the literature. In the process, we also develop an easy-to-implement surgery-booking algorithm with provable performance bounds. In particular, we consider a class of scheduling algorithms known as "interval classification" algorithms and show that we can calibrate the parameters of these algorithms in a tractable manner. We also demonstrate how our approach can be easily extended to capture many preferences of surgeons. Finally, we test the suggested algorithm on our data and find that it performs significantly better than several alternatives.

15:30-16:00 Coffee Break
16:00-17:30 Session 7A: Radiation Therapy Treatment Planning I
Location: Lantana A
16:00
A new framework for Tomotherapy treatment planning and delivery
SPEAKER: Wilmer Henao

ABSTRACT. The goal of tomotherapy is to deliver a prescribed dose of radiation to malignant tumors while trying to avoid delivering toxic doses to healthy organs. A binary multi-leaf collimator sits on a gantry fan and rotates around the patient's body, shuttering a row of binary leaves; it lets radiation through when a leaf is open and blocks radiation when a leaf is closed. Conventional treatment calculates dynamic gantry speeds and dose rates to be delivered at discrete stages of the rotational path of the gantry. We call these discrete stages "projections." Dose rates are determined by the time the leaves remain open at each of these projections; because of this, leaves have to open and close at every stage at which any dose is delivered. Doses are calculated using fluence map optimization, and in practice the speed of the pneumatic leaf drive is slow and causes inaccuracies in actual dose delivery. We propose a treatment model that explicitly controls the number of leaf events by requiring leaves to stay completely open or closed at each projection. The model employs a finer discretization along the gantry path; the result is a treatment plan that represents dose effects more realistically but leads to a large-scale combinatorial problem. We therefore implement a column generation methodology that attains high-quality solutions. The new framework reduces the number of leaf events, diminishes inaccuracies in dose delivery, and achieves faster treatment times since the gantry is not anchored at the slowest speed.

16:22
CCP Optimization with Time-Dependent Uncertainty in Radiation Therapy Planning

ABSTRACT. Tumor shrinkage or proliferation during the course of radiation therapy affects the amount of delivered dose and can have detrimental effects on the surrounding healthy tissues. Adaptive radiation therapy (ART) is a sequential radiation treatment process in which the treatment plan can be modified using systematic feedback of measurements. Previously proposed approaches use ART to periodically re-image the tumor geometry and adapt the treatment plan accordingly. In this study, we propose a stochastic model with a probabilistic description of dose delivery that accounts for changes in tumor geometry during the gap periods between the fractions of an ART treatment plan. To handle the setup uncertainty in each fraction of radiation therapy (RT), a chance-constrained programming (CCP) framework is developed under distributional assumptions on the dose contribution. We evaluate the model in terms of tumor coverage and normal tissue sparing and numerically demonstrate its superiority over traditional treatment planning.

16:45
Threshold-driven Optimization for Reference-based Auto-planning
SPEAKER: Troy Long

ABSTRACT. Radiation therapy treatment plan optimization determines a clinically desirable treatment plan given physician-driven criteria. Conventional planning involves a planner iteratively consulting a physician until an acceptable treatment plan is found. This process can be potentially time-consuming for planners and physicians due to the guess-and-check nature of manually exploring the Pareto frontier of deliverable plans.

Physician preference, along with the knowledge gained from previously treated patients, is often distilled into a reference dose-volume histogram (DVH). We study the procedure of reference-DVH-based auto-planning for treatment plan optimization, meant to bypass the trial-and-error nature of conventional planning. We develop a threshold-driven optimization methodology for automatically generating a clinically desirable intensity-modulated radiation therapy treatment plan. The framework uses a dynamically-updated spatial assignment of the reference DVH on the current patient geometry to generate a target dose distribution. Voxel-based quadratic penalty functions guide the auto-planning fluence map optimization subroutine. These functions have three components: an overdose weight, an underdose weight, and a target dose threshold. The proposed methodology directly relates reference information to threshold values, which influence the optimization in an effective, intuitive way.
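
In symbols, the voxel-based objective described above takes the form (our notation)

    \min_{x \ge 0}\ \sum_{v} \Big[ w_v^{+}\,\big(D_v x - t_v\big)_+^{2} \;+\; w_v^{-}\,\big(t_v - D_v x\big)_+^{2} \Big],

where x is the fluence map, D_v maps fluence to dose at voxel v, t_v is the voxel's dose threshold derived from the spatial assignment of the reference DVH, w_v^{+} and w_v^{-} are the overdose and underdose weights, and (\cdot)_+ = \max\{\cdot, 0\}.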

This technique has the potential to be effective for auto-planning based on reference DVH information. As dose prediction and data-driven planning become more prevalent in the clinical setting, incorporating such data into an automated treatment-planning model in a clear, effective way will be crucial for efficient planning.

17:08
Objective Selection in Cancer Treatment Using Inverse Optimization

ABSTRACT. A challenge in radiation therapy treatment planning is selecting which clinical objectives to use in the optimization. We propose an inverse optimization method with a cardinality constraint to infer the most important objectives from historical treatment plans. Computational observations show that the inverse problem behaves supermodularly in the set of chosen objectives, based on which we use a greedy algorithm to solve the problem approximately. We compare the proposed method to the cardinality-constrained integer inverse problem and show that our method efficiently finds a small number of objectives that generate clinical-quality treatment plans. We demonstrate our method on prostate IMRT treatment planning.

16:00-17:30 Session 7B: Advances in Theory and Applications of IPCO
Location: Lantana B
16:00
A Parallel Algorithm with Enhancements via Partial Objective Value Cuts for Cluster-based Wireless Sensor Network Design
SPEAKER: Halit Uster

ABSTRACT. Data-gathering wireless sensor networks (WSNs) are operated unattended over long time horizons to collect data in several applications such as those in climate monitoring and a variety of ecological studies. Typically, sensors have limited energy (e.g., an on-board battery) and are subject to the elements in the terrain.

We consider an integrated topology control and routing problem in data-gathering WSNs. After presenting a mixed-integer model for the problem, we develop an effective parallel algorithm for its solution that incorporates three parallelization strategies, namely low-level parallelism, domain decomposition, and multiple search (both cooperative and independent), in a single master-worker framework. For improved algorithmic efficiency, we introduce three reduced subproblems and devise partial objective value cuts from these reduced models. We observe that the reduced models provide valuable information on the optimal design variables for the original model, and we exploit this fact in our parallel algorithm.

Our overall parallelization scheme utilizes exact optimization models and solutions as its components and allows cooperation among multiple worker processors via communication of partial solution and cut information. Our computational results illustrate that the parallel implementation not only achieves a speed-up in computations but also yields better solutions, as it can explore the solution space more effectively.

16:30
Optimal Clustering on a Graph

ABSTRACT. We study a clustering problem defined on an undirected graph in which an edge-weight function denotes the importance of the connection between pairs of vertices. We remove a subset of edges to break the graph into a number of smaller pieces, i.e., clusters. Subject to a constraint on the total weight of the removed edges, we maximize the number of clusters we form. We introduce an extended formulation of our graph clustering problem to derive a polynomial-sized mixed integer program. Then we solve the problem by employing a set-covering variant of the Benders decomposition algorithm. We also propose a bicriteria graph clustering problem, in which we maximize the number of clusters formed while minimizing the weight of the deleted edges. Solving this bicriteria problem parametrically identifies the solutions that lie on the concave envelope of the efficient frontier, and the breakpoints on the concave envelope are nested. We solve the parametric model in polynomial time by solving a sequence of parametric maximum flow problems, which yields a family of nested clusters on the efficient frontier.
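
One way to read the parametric model (a sketch; the extended formulation presented in the talk carries more structure): for a tradeoff weight $\lambda \ge 0$, solve

$$\max_{E' \subseteq E} \; \kappa(G - E') - \lambda\, w(E'),$$

where $\kappa(G - E')$ is the number of connected components remaining after the edge set $E'$ is deleted and $w(E')$ is its total weight. Sweeping $\lambda$ from large to small recovers the breakpoints on the concave envelope of the efficient frontier, and, as stated above, the corresponding clusters are nested.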

17:00
Maximum Demand Rectangular Location Problem
SPEAKER: Manish Bansal

ABSTRACT. Several classes of problems related to locating service facilities to cover the demand for service have been studied over the years. A well-known class is the planar maximum coverage location problem (MCLP), in which p service facilities with known service range are to be located anywhere in the continuous plane such that the total covered demand is maximized. It is important to note that in the planar MCLP with rectilinear distance, the service zone of a facility with coverage range d is a square rotated 45 degrees, centered at the facility, with diagonals of length 2d. Coverage in this problem is assumed to be binary, i.e., a demand zone is either totally covered or not covered at all. We introduce a new generalization of the MCLP by positioning a given number of rectangular service zones (SZ) on the 2-D plane to cover a set of existing (possibly overlapping) rectangular demand zones such that the total covered demand is maximized. We refer to this problem as the Maximum Demand Rectangular Location Problem (MDRLP), which also has applications in camera-frame selection for telerobotics. First we show that MDRLP is NP-hard if the number of SZs is part of the input. We then present an improved algorithm for the single-SZ MDRLP that is at least two times faster than the existing exact algorithm. Next, we provide theoretical properties for the multi-SZ MDRLP and an exact algorithm to solve it, along with our computational results.
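
For intuition on the geometry, a one-function Python sketch of the binary rectilinear coverage test (the names are hypothetical): a facility at (fx, fy) with coverage range d serves exactly the points within L1 distance d, which is the rotated square described above.

def covers(fx, fy, px, py, d):
    # Rectilinear (L1) coverage: the service zone is the diamond
    # |px - fx| + |py - fy| <= d, i.e., a square rotated 45 degrees
    # with diagonals of length 2*d centered at the facility.
    return abs(px - fx) + abs(py - fy) <= d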

16:00-17:30 Session 7C: Computational Decision Analysis: Information and Decision Making
Location: Plumeria A
16:00
Optimal Discretization for Decision Analysis

ABSTRACT. Decision analysts often discretize uncertainties to ease communication with clients and simplify calculations of certain equivalents. We introduce a novel mathematical formulation for selecting an optimal discretization for a set of decision problems. We also derive tractable instances of this optimization problem, which allow us to compute optimal discretizations for a specified set of decision problems. In addition, we introduce novel joint discretizations. Prior discretization methods produce discretizations for each uncertainty in the decision problem and assume independence between the discretized distributions. Joint discretizations are not independent across the uncertainties and can provide significant increases in accuracy for computing certain equivalents, while still maintaining simplicity and ease of communication. Finally, we show that our methodology for computing independent and joint discretizations outperforms prior methods across a set of computational examples.
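
As a concrete instance of the quantity being approximated (standard decision analysis, not specific to this talk): under an exponential utility $u(x) = -e^{-x/\rho}$ with risk tolerance $\rho$, the certain equivalent of a discretization $\{(p_i, x_i)\}$ of an uncertainty is

$$\mathrm{CE} = -\rho \ln \sum_i p_i\, e^{-x_i/\rho},$$

so the quality of a discretization can be judged by how closely this value tracks the certain equivalent of the underlying continuous distribution across the decision problems of interest.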

16:30
Capturing the Sponsorship Outcome Using Social Media: Effect of Contextual Factors on Sponsorship Outcome
SPEAKER: Kapil Kaushik

ABSTRACT. Sponsorship is a commercial activity used by firms to improve their image and reputation. Firms use social media platforms to leverage the positive effects of sponsorship. Consumers share their opinions and feelings about a firm's association with sponsored events, which provides a good opportunity for researchers and firms to measure the effectiveness of firms' sponsorship activities. In this work, we develop a social media metric to capture the sponsorship outcome. The study also explores the effect of contextual factors (i.e., event favorability, sponsor-event fit, social media presence of the event, and social media presence of the brand) on the sponsorship outcome. To this end, we selected 10 major sports events of 2016. Using the Twitter platform, we tracked the sentiment of the sponsors of these events to examine the effect of event reputation on sponsors' sentiment. Results suggest that event favorability, sponsor-event fit, and the brand's social media presence positively affect the sponsorship outcome, whereas the event's social media presence has no effect. Hence, to leverage the effects of sponsorship activities, companies should focus on increasing their own presence on social media. To receive a favorable response from users, companies should sponsor events that are congruent with the company's image and function. Companies should also consider the favorability of an event when making sponsorship decisions, because the event's word of mouth on social media influences consumers' responses to the company's sponsorship activity.

17:00
Procurement Auctions with Uncertain Buyer Preferences
SPEAKER: Jay Simon

ABSTRACT. When issuing calls for proposals for large-scale goods or services, particularly those involving new technologies, buyers often do not know what their preferences will be at the time that the winning bid will be selected. However, buyers must still provide guidance to vendors interested in submitting a bid. This work examines possible buyer strategies for three different auction models in which the buyer's preferences are uncertain.

16:00-17:30 Session 7D: Novel Optimization Algorithms
Location: Plumeria B
16:00
Risk Assessment using Utility Theory for Robust Goal Programming
SPEAKER: Robert Hanks

ABSTRACT. We investigate interval-based and norm-based uncertainty sets using cardinality-constrained robustness in the Robust Goal Programming (RGP) construct, in addition to strict robustness using ellipsoidal uncertainty sets. Then, using utility theory, a decision maker's (DM) view of risk is quantified via a utility function, which is mapped back to the relevant parameters of the varying uncertainty sets to model the DM's risk attitude toward a robust solution. The findings offer theoretical contributions to the RGP framework and will be applied in a future endeavor to setting shipping rates for the United States Transportation Command's customers as it pertains to revenue management.
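
For readers unfamiliar with cardinality-constrained robustness, the textbook form of the idea being parameterized is the following (a sketch, not the talk's RGP model): a constraint $\sum_j a_j x_j \le b$ whose coefficients range in $[\bar a_j - \hat a_j,\, \bar a_j + \hat a_j]$ is protected against at most $\Gamma$ coefficients deviating simultaneously,

$$\sum_j \bar a_j x_j + \max_{S \subseteq J,\; |S| \le \Gamma} \sum_{j \in S} \hat a_j |x_j| \;\le\; b.$$

The budget $\Gamma$ is precisely the kind of uncertainty-set parameter to which a utility-based risk attitude can be mapped: $\Gamma = 0$ recovers the nominal constraint, and $\Gamma = |J|$ yields full conservatism.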

16:22
A Constant Factor Bounding Algorithm for the Dubins Traveling Salesman Problem

ABSTRACT. Given a set of target points on a ground plane and a positive constant r, the Dubins Traveling Salesman Problem (DTSP) aims to find a path such that each target is visited at least once, the radius of curvature at every point of the path is at least r, and the length of the path is minimal. This problem is a generalization of the Euclidean Traveling Salesman Problem and is NP-hard. The DTSP belongs to a class of task allocation and path planning problems that commonly arise for teams of unmanned aerial vehicles in surveillance applications. The DTSP has received significant attention in the literature, mainly due to its importance in unmanned vehicle applications, the simplicity of the problem statement, and its status as a hard problem to solve, since it inherits features from both combinatorial optimization and optimal control.

Specifically, the DTSP is challenging because it involves finding an optimal heading angle of the vehicle at each target and an optimal sequence in which the targets must be visited. Currently, there is no procedure for finding an optimal solution, nor are there algorithms that can find feasible solutions with constant-factor approximation guarantees for the DTSP. In this work, we address these challenges and present the following:

• A novel cutting plane algorithm that provides a constant factor bound (4.21) for the DTSP.

• Numerical results showing that the algorithm quickly finds near-optimal solutions for hundreds of problem instances with at most 40 targets in each instance.

16:44
An Approximate Dynamic Programming Approach to the Shoot-Look-Shoot Weapon-Target Assignment Problem
SPEAKER: Darryl Ahner

ABSTRACT. Several weapon-target assignment problems are addressed in the literature, among them the classic weapon-target assignment, shoot-look-shoot, two-stage stochastic, and multi-phase intercept problems. We consider a weapon-target assignment problem in which the weapons are launched at targets sequentially over two stages, known as the shoot-look-shoot problem. An approximate dynamic programming algorithm for the stochastic weapon-target assignment problem is extended to this nonconvex integer problem, obtaining quality solutions using Monte Carlo sampling and a subgradient-based functional approximation of the second-stage allocation.

17:07
Approximate Dynamic Programming for the Dispatch of Military Medical Evacuation Assets

ABSTRACT. Military medical planners must consider the dispatching of aerial military medical evacuation (MEDEVAC) assets when preparing for and executing major combat operations. The launch authority seeks to dispatch MEDEVAC assets such that prioritized battlefield casualties are transported quickly and efficiently to nearby medical treatment facilities. We formulate a Markov decision process (MDP) model to examine the MEDEVAC dispatching problem. The large size of the problem instance motivating this research suggests that conventional exact dynamic programming algorithms are inappropriate. As such, we employ approximate dynamic programming (ADP) techniques to obtain high-quality dispatch policies relative to current practices. An approximate policy iteration algorithmic strategy is applied that utilizes least squares temporal differencing for policy evaluation. We construct a representative planning scenario based on contingency operations in northern Syria both to demonstrate the applicability of our MDP model and to examine the efficacy of our proposed ADP solution methodology. A designed computational experiment is conducted to determine how selected problem features and algorithmic features affect the quality of solutions attained by our ADP policies. Results indicate that the ADP policy outperforms the myopic policy (i.e., the default policy in practice) by up to nearly 31 percent with regard to a lifesaving performance metric for a baseline scenario. Moreover, the ADP policy provides decreased MEDEVAC response times and utilization rates. These results benefit military medical planners interested in the development and implementation of cogent MEDEVAC tactics, techniques, and procedures for application in combat situations with a high operations tempo.
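
For readers unfamiliar with the policy-evaluation step named above, here is a minimal Python sketch of least squares temporal differencing (LSTD); the data layout and names are assumptions, and the talk's approximate policy iteration would wrap repeated evaluations of this kind.

import numpy as np

def lstd(transitions, phi, gamma, ridge=1e-6):
    # transitions: list of (state, reward, next_state) tuples observed
    # under the fixed policy being evaluated; phi maps a state to a
    # feature vector. Solves A theta = b with
    #   A = sum_t phi(s_t) (phi(s_t) - gamma * phi(s_{t+1}))^T,
    #   b = sum_t r_t phi(s_t),
    # so that V(s) is approximated by phi(s) . theta.
    k = len(phi(transitions[0][0]))
    A = ridge * np.eye(k)  # small ridge term keeps A invertible
    b = np.zeros(k)
    for s, r, s_next in transitions:
        f, f_next = np.asarray(phi(s)), np.asarray(phi(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)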

16:00-17:30 Session 7E: Optimization in cancer treatment design

This session covers the application of optimization theory and computation to cancer treatment design. 

Location: Camellia A
16:00
Identifying Unacceptable Radiotherapy Treatments with Robust Optimization
SPEAKER: Allen Holder

ABSTRACT. Radiotherapy treatments are assessed per patient on several metrics. However, the assessment values are somewhat suspect due to numerous imperfections in modeling and treatment delivery. We develop a robust adaptation of data envelopment analysis to distinguish quality treatments from their lesser counterparts. The robust model is nonconvex and is solved by a tailored algorithm. Comparisons on real data support the model's intent and application.

16:30
An integer programming approach to optimal cancer treatment design
SPEAKER: Qie He

ABSTRACT. Over the past decade, several targeted therapies (e.g. imatinib, dasatinib, nilotinib) have been developed to treat Chronic Myeloid Leukemia (CML). Despite an initial response to therapy, drug resistance remains a problem for some CML patients. Recent studies have shown that resistance mutations that preexist treatment can be detected in a substantial number of patients, and that this may be associated with eventual treatment failure. One proposed method to extend treatment efficacy is to use a combination of multiple targeted therapies. However, the design of such combination therapies (timing, sequence, etc.) remains an open challenge. In this work we mathematically model the dynamics of CML response to combination therapy and analyze the impact of combination treatment schedules on treatment efficacy in patients with preexisting resistance. We then propose an optimization problem to find the best schedule of multiple therapies based on the evolution of CML according to our ordinary differential equation model. The resulting optimization problem is nontrivial due to the presence of ordinary differential equation constraints and integer variables. Our model also incorporates realistic drug toxicity constraints by tracking the dynamics of patient neutrophil counts in response to therapy. We determine optimal combination strategies that maximize time until treatment failure for hypothetical patients, using parameters estimated from clinical data in the literature.

17:00
A Matrix Selection Problem
SPEAKER: Zeyang Wu

ABSTRACT. Consider a discrete-time dynamic system $x_{k+1} = M_{k+1} x_{k}$ for $k=0, 1,\ldots,T-1$, where the matrix $M_k$ can be chosen as one of two given matrices $A$ or $B$. The matrix selection problem is defined as follows: given an initial vector $x_0$, select matrices $M_1, \ldots, M_T$ such that a linear function over the states $x_1, \ldots, x_{T}$ is minimized. This problem is motivated by an application in cancer treatment planning. We formulate this problem as a mixed-integer linear program and derive several families of facet-defining inequalities for the resulting formulation.
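
A minimal MILP sketch of the selection dynamics (a standard big-M linearization given for illustration; the formulation in the talk, with its facet-defining inequalities, is stronger): introduce binaries $z_k \in \{0,1\}$ with $z_k = 1$ meaning $M_k = A$, and auxiliary vectors $y_{k-1}$ standing in for $z_k x_{k-1}$:

$$x_k = A y_{k-1} + B (x_{k-1} - y_{k-1}), \qquad -M z_k \le y_{k-1} \le M z_k, \qquad -M(1-z_k) \le x_{k-1} - y_{k-1} \le M(1-z_k),$$

with the bounds read componentwise and $M$ a valid bound on $\|x_k\|_\infty$ over all reachable states. When $z_k = 1$ the constraints force $y_{k-1} = x_{k-1}$, giving $x_k = A x_{k-1}$; when $z_k = 0$ they force $y_{k-1} = 0$, giving $x_k = B x_{k-1}$.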

16:00-17:30 Session 7F: Computational Decision Analysis: Systems, Information, and Decision Making
Location: Camellia B
16:00
Stochastic Network Interdiction With Incomplete Preference

ABSTRACT. We study two different cases of the stochastic shortest path interdiction problem with incomplete preferences. In the first case, the defender makes an interdiction decision, random network costs are realized, and then the attacker chooses his path. In the second case, the decisions of both players are made before the realization of randomness. We consider the situation where the underlying utility functions of the decision makers are ambiguous. This ambiguity is modeled using historical data in the form of a set of pair-wise comparisons of random variables made by the decision makers. In the first case, we consider that the defender does not know her utility function exactly, and uses a robust optimization approach by maximizing the worst-case utility in the ambiguity set. In the second case, we consider that the leader models the attacker's decision using the ambiguity set of the attacker's utility function. We use a minimax formulation for the defender to minimize the attacker's worst-case utility. We present numerical results based on randomly generated instances to show the performance of the models.

16:22
Integrating statistical forecasting and agent-based models for enhanced understanding of post-hurricane electric-power system reliability

ABSTRACT. Hurricanes and other high-intensity wind events often cause widespread and prolonged electric-power outages. The actions taken by community members prior to a storm, such as purchasing generators or collectively pressuring lawmakers to require system hardening, are a strong determinant of how they will fare after storms. However, the choices individuals make depend on personal beliefs about the likelihood of losing power, the effectiveness of collective pressure, and their ability to cope with lengthy outages. Further, purchasing a generator may come at the expense of collective action and thus reduce the demand for system hardening. By using a validated power-outage forecasting model for high-intensity wind events in conjunction with an agent-based simulation model, we characterize how a community's likelihood of losing power after repeated hurricanes is affected by complex interactions among individuals' behavioral responses.

16:44
Modeling Intervention Strategies for Complex Patients with Sepsis
SPEAKER: Julie Ivy

ABSTRACT. Sepsis, infection plus systemic manifestations of infection, is the leading cause of in-hospital mortality. About 700,000 people die annually in US hospitals, and 16% of them are diagnosed with sepsis. In addition to being deadly, sepsis is the most expensive condition associated with in-hospital stays, resulting in a 75% longer stay than any other condition. The total burden of sepsis to the US healthcare system is estimated to be $20.3 billion, most of which is paid by Medicare and Medicaid. Sepsis is difficult to diagnose and requires timely intervention. The presence of comorbidities (i.e., one or more preexisting conditions) often complicates diagnosis and treatment options. This research uses inpatient electronic health record (EHR) data over multiple visits from a large hospital system to develop a simulation-based framework for personalizing sepsis intervention strategies that considers the impact of comorbidity. Comorbidity clusters for race, age, and gender groups are identified using hierarchical cluster analyses. The Predisposition-Infection-Response-Organ Failure (PIRO) score is one metric used to stage sepsis. We use the PIRO score to quantify patient severity and use simulation to capture the static attributes (e.g., age, certain preexisting conditions such as malignancy) as well as dynamic attributes (e.g., respiratory rate, heart rate) of the PIRO point score in representative comorbidity-cluster patient subpopulations. The PIRO score elements are simulated over time to study disease severity and patient outcomes, including patient stability and disposition, as a function of intervention strategies.

17:07
Optimal Screening for Hepatocellular Carcinoma under Limited Resources

ABSTRACT. We model the problem of screening a population at risk for hepatocellular carcinoma when (1) the disease of each patient evolves stochastically over time, and (2) there are limited screening resources shared by the population. We derive an optimal policy for a special case of this problem using a restless bandit model, and provide insights into what characterizes effective screening. We then compare the performance of the optimal policy against current screening practices in a simulation built upon historical patient data.