ICS 2015: INFORMS COMPUTING SOCIETY CONFERENCE 2015
PROGRAM FOR MONDAY, JANUARY 12TH

08:30-10:00 Session 14A: Stochastic Optimization: Stochastic Optimization in Transportation and Logistics
Location: Salon A
08:30
The Dynamic and Stochastic Container Loading Problem
SPEAKER: Tianke Feng

ABSTRACT. The Dynamic and Stochastic Container Loading Problem (DSLP) is defined as follows: a given number of containers of identical size, but random weight, arrive according to a stochastic process. At or before a given deadline, the containers must be loaded, one at a time. The order in which the containers are loaded is constrained, and the objective is to balance the container load along a given axis. We analyze this problem under dynamic conditions and explore the benefits of loading containers before all of them have arrived. A two-stage integer-programming-based heuristic (TSIP) is developed to solve the resulting sequential decision problem. The DSLP has applications in aircraft and ship loading, especially when departures are scheduled.

08:50
Solving a distribution problem with stochastic programming, and an application to ground delay program planning
SPEAKER: Alex Estes

ABSTRACT. We consider a production-distribution problem in which a resource is manufactured at several sites and refined at a single facility. The problem is to decide how much resource should be stored at each production site and how much should be shipped to the refinery in each time period, with the goal of minimizing storage, shipping, and production costs. We formally describe this problem and show that it may be modeled as a stochastic linear or integer program. We provide criteria under which the optimal solution of the linear program is integral. We demonstrate that this model may be applied to the air traffic management problem of planning a ground delay program.

09:10
Pruning algorithms to determine reliable paths on networks with random and correlated link travel times

ABSTRACT. This study addresses the problem of determining optimal reliable paths on networks under uncertainty, where link costs/times are random variables. Reliability is measured as a weighted combination of the mean and standard deviation of travel time, termed the robust cost. The minimum robust-cost path (MRCP) problem is formulated and solved in the presence of unrestricted link travel time correlations. This problem is difficult to solve because of the non-linear objective function and the link travel time cross-covariances, which are unrestricted in sign. Consequently, the solution approaches based on subpath optimality and subpath nondominance principles adopted in the literature are not valid. In this light, a pseudo-polynomial algorithm based on a subpath pruning approach is proposed, which uses a "pruning criterion" to eliminate subpaths that cannot be part of the optimal path. The novel pruning criterion is constructed using a network transformation and efficient shortest path algorithms. The proof of correctness of the algorithm is given, and computational experiments on real-world networks demonstrate good performance with practical polynomial complexity. The algorithmic procedure is also generalized, with suitable modifications, to solve related problems, including the MRCP problem with independent link travel times and the K-best robust-cost paths (K-RCP) problem. The proposed algorithm was found to outperform a non-dominated path-set generation algorithm under link independence. The superior performance of the algorithm for K-RCP compared to a K least expected travel time paths heuristic is also established. Finally, the algorithm is demonstrated on an illustrative application, a multi-objective minimization problem with a quadratic objective function, suggesting wider potential applications.
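
For concreteness, the robust cost of a path p with random travel time T_p is typically written (generic notation with trade-off weight \lambda \ge 0, not necessarily the authors' exact symbols) as

    c_\lambda(p) = \mathbb{E}[T_p] + \lambda \sqrt{\mathrm{Var}(T_p)},
    \qquad
    \mathrm{Var}(T_p) = \sum_{(i,j) \in p} \sum_{(k,l) \in p} \mathrm{Cov}(t_{ij}, t_{kl}).

The cross-covariance terms are what break subpath optimality: a subpath that looks worse in isolation may still lie on the optimal path if its links are negatively correlated with the rest of the path.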

09:30
A Stochastic Approach to the Management of Plug-In Hybrid Electric Vehicle Swap Stations

ABSTRACT. Optimizing operations at plug-in hybrid electric vehicle (PHEV) battery swap stations is motivated by the movement to make transportation cleaner and more efficient. We consider the problem of determining the optimal management of PHEV battery swap stations when faced with uncertain demand, dynamic charge prices, and the option to discharge batteries back to the power grid for load balancing initiatives. We model this problem as a Markov decision problem and solve for the optimal policy using backward induction. Further, we compare the optimal policy to an easy-to-implement benchmark policy. The computational experiments suggest that the optimal policy is monotone, a property we verify theoretically. Policy insights regarding the management of PHEV swap stations are drawn from the promising computational results.

08:30-10:00 Session 14B: Health Applications: Biology
Location: Salon B
08:30
Ordination of Human Microbiome Data with L1-Norm PCA
SPEAKER: Paul Brooks

ABSTRACT. The human microbiome is the community of microorganisms that reside in various body habitats. Advances in next-generation sequencing technology and mass spectrometry measurements have led to multi-omic studies of host-microbiome interactions and their relationship with health and disease mechanisms. Microbial ecologists often use ordination methods to view data in a best-fitting two- or three-dimensional subspace to discover patterns. Principal component analysis (PCA) may be viewed as a baseline method for ordination. Traditional PCA is sensitive to outlier observations, motivating the need for robust PCA methods. In this talk, we present a robust PCA method based on estimating the L1-norm best-fit line. Though the optimization problem associated with finding the L1-norm best-fit line is nonlinear, nonconvex, and nondifferentiable, we present an estimation method based on solving a small number of linear programs. Performance on simulated data and human microbiome data indicates that the method is suited for outlier-contaminated data.
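
As a sketch of the underlying problem (generic notation, not necessarily the authors' exact formulation): given observations x_1, ..., x_n in R^m, the L1-norm best-fit line seeks a point z and direction v minimizing the sum of L1 distances from each observation to the line,

    \min_{z,\, v \ne 0}\ \sum_{i=1}^n \min_{t_i \in \mathbb{R}} \left\| x_i - (z + t_i v) \right\|_1 .

One standard route to an estimate: once the coordinate that parameterizes the line is fixed, each subproblem becomes a least absolute deviations fit, which is solvable as a linear program; this is the sense in which a small number of LPs can estimate the line.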

08:50
Cartesian Product Matrix Decomposition for Metabolic Reconstruction
SPEAKER: Onur Seref

ABSTRACT. In this study, we introduce a novel strongly polynomial algorithm to decompose an input matrix of alternate extreme optimal flux states from a real-life metabolic reconstruction problem into independent subsets of reactions and their respective alternate solutions. The input matrix can be reconstructed as a Cartesian product of these subsets of alternate solutions. We present computational properties of the proposed decomposition method on synthetic matrices. We also introduce a pivoting-based algorithm for efficient enumeration of alternate extreme optimal flux states. We apply our solution enumeration and decomposition methods to model Trypanosoma cruzi and reveal its metabolic pathways that are independently regulated to sustain optimal growth.

09:10
An Agent-Based Model to Evaluate the Effectiveness of Antibiotic Stewardship Programs
SPEAKER: Sean Barnes

ABSTRACT. The transmission of multidrug-resistant organisms (MDROs) is a significant problem in hospitals in the U.S. and around the world. Many efforts have focused on modeling the transmission dynamics of MDROs and on evaluating the effectiveness of various control measures, such as improving the hand hygiene compliance of healthcare workers or enforcing precautions for contact with patients who are active carriers or at high risk of acquisition.

We develop an agent-based model to evaluate the effectiveness of antibiotic stewardship programs. These programs target the prescribing frequency of antibiotics in order to reduce the incidence of infections caused by resistant organisms, which are much more difficult to treat than their susceptible forms. We show that reducing the proportion of admitted patients who receive antibiotics can lead to lower acquisition rates of several resistant organisms. We also show that this reduction is much more substantial for more prevalent MDROs such as Methicillin-resistant Staphylococcus aureus and Vancomycin-resistant enterococci.

09:30
Robust Protein Structure Alignment
SPEAKER: Allen Holder

ABSTRACT. Optimally aligning protein structures is a long-standing problem in computational biology. We show that aligning spectrally encoded structures with dynamic programming is robust against measurement error and parametric tuning. Results show that the optimization procedure maintains approximately 80% accuracy even if experimental tolerances are magnified 100-fold.

08:30-10:00 Session 14C: Integer Programming: Recent Progress in Stochastic Programming Models and Algorithms II
Location: Salon D
08:30
Cut Generation for Optimization Problems with Multivariate Risk Constraints

ABSTRACT. We consider a class of multicriteria stochastic optimization problems that features benchmarking constraints based on conditional value-at-risk and second-order stochastic dominance. We develop alternative mixed-integer programming formulations and solution methods for cut generation problems arising in optimization under such multivariate risk constraints. We give the complete linear description of two non-convex substructures appearing in these cut generation problems. We present computational results that show the effectiveness of our proposed models and methods.

08:50
Strengthened Benders Cuts for Stochastic Integer Programs with Continuous Recourse
SPEAKER: Merve Bodur

ABSTRACT. With stochastic integer programming as the motivating application, we investigate techniques to use integrality information to obtain improved cuts within a Benders decomposition algorithm. We consider two options: (i) cut-and-project, where integrality information is used to derive cuts in the extended variable space, and Benders cuts are then used to project the resulting improved relaxation, and (ii) project-and-cut, where integrality information is used to derive cuts directly in the projected space defined by Benders cuts. We analyze the use of split cuts in these two approaches, and demonstrate that although they yield equivalent relaxations when considering a single split, cut-and-project yields stronger relaxations in general when using multiple splits. Computational results illustrate that the difference can be very large, and demonstrate that using split cuts within the cut-and-project framework can significantly improve the performance of Benders decomposition.

09:10
Conditional Chance Constraints and Applications
SPEAKER: Ruiwei Jiang

ABSTRACT. In this talk, we propose a conditional chance constraint (CCC) model that attempts to protect a linear constraint under uncertainty when undesirable conditions are realized. We motivate this model in several service operation applications, investigate a sample average approximation method for handling a class of CCCs, and develop a solution algorithm based on integer programming. Finally, we test our model and approach in numerical simulations.

09:30
Optimization Under Uncertainty and Unit Commitment with Renewable Energy Integration
SPEAKER: Yongpei Guan

ABSTRACT. In this talk, we report our research progress on developing efficient chance-constrained stochastic optimization, robust optimization, and other optimization-under-uncertainty techniques to solve unit commitment problems with renewable energy integration. We discuss the advantages and disadvantages of each approach and how they can be applied in practice. This is joint work with Qianfan Wang, Ruiwei Jiang, Chaoyue Zhao, and Kai Pan.

08:30-10:00 Session 14D: Simulation: Algorithmic Optimal Learning
Chair: Jie Xu
Location: Salon E
08:30
The Knowledge Gradient Policy using a Sparse Additive Belief Model
SPEAKER: Yan Li

ABSTRACT. We propose a sequential learning policy for ranking and selection (R&S) problems with high-dimensional sparse belief functions, where there are hundreds or even thousands of features, but only a small portion of these features contain explanatory power. This policy is a unique and novel hybrid of Bayesian R&S with frequentist learning. We derive a knowledge gradient policy for sparse linear models (KGSpLin) with a group Lasso penalty. In particular, our method naturally incorporates B-spline basis expansion and generalizes to the nonparametric additive model (KGSpAM) and the functional ANOVA model. Theoretically, we provide estimation error bounds for the posterior mean estimate and the functional estimate. Controlled experiments show that the algorithm efficiently learns the correct set of nonzero parameters even when the model is embedded with hundreds of dummy parameters. It also outperforms the knowledge gradient for a linear model.

08:50
Optimal learning in regression with combinatorial feature selection
SPEAKER: Bin Han

ABSTRACT. In many applications of business analytics, regression is used to learn about a set of unknown parameters, and alternates with optimization to design new data points. In particular, we are motivated by the problem of efficiently designing effective strategies for non-profit fundraising (e.g., donor retention campaigns by the American Red Cross). While the framework of ranking and selection (R&S) can be used to handle such problems, traditional R&S procedures may experience high computational costs when the decision space grows combinatorially. We develop a new value of information procedure for simultaneously learning unknown regression parameters and unknown sampling noise. We then develop an approximate version of the procedure, based on convex relaxation and semidefinite programming, that retains good performance and scales better to larger problems.

09:10
Multi-fidelity Optimization with Ordinal Transformation and Optimal Sampling
SPEAKER: Jie Xu

ABSTRACT. Simulation optimization can be used to solve many complex optimization problems intractable to classical optimization methods. We propose a new framework to perform efficient simulation optimization when simulation models with different fidelity levels are available. The framework consists of two novel methodologies: ordinal transformation (OT) and optimal sampling (OS). The OT methodology uses the low-fidelity simulations to transform the original solution space into an ordinal space that encapsulates useful information from the low-fidelity model. The OS methodology efficiently uses high-fidelity simulations to sample the transformed space in search of the optimal solution. Through theoretical analysis and numerical experiments, we demonstrate the promising performance of the MOTOS (Multi-fidelity Optimization with Ordinal Transformation and Optimal Sampling) framework.

08:30-10:00 Session 14E: Open Source Software II
Location: Salon G
08:30
Solving two-stage stochastic linear programs with stochastic decomposition (SD)
SPEAKER: Yifan Liu

ABSTRACT. Stochastic Programming (SP) has long been considered a well-justified yet computationally challenging paradigm for practical applications. Computational studies in the literature often involve approximating a large number of scenarios by a small number of scenarios to be processed via deterministic solvers, or running Sample Average Approximation on some genre of high performance machines so that statistically acceptable bounds can be obtained. In this talk, we show that for a class of stochastic linear programming problems, an alternative approach known as Stochastic Decomposition (SD) can provide solutions of similar quality in far less computational time, using today's ordinary desktop or laptop machines.

08:50
A Family of Exact Solvers for the Quadratic Assignment Problem
SPEAKER: Peter Hahn

ABSTRACT. The oldest and most widely studied quadratic 0-1 problem is the quadratic assignment problem, introduced by Koopmans and Beckmann in 1957, where N facilities must be assigned to N locations while minimizing the cost of transportation between facilities. While great progress has been made on generating good heuristic solutions to large and difficult instances, the only solvers that can efficiently solve difficult problem instances exactly on a single CPU are based on Hahn's 1969 improvement to the well-known Gilmore-Lawler bound. Improvements to Hahn's bound and a branch-and-bound algorithm are due to Grant, with help from Roucairol and Rendl. Hahn's bound is derivable from the Adams-Sherali reformulation-linearization technique (RLT). Hahn crafted the first RLT level-2 QAP exact solver with guidance from Hightower, Guignard and Adams. Recently, a parallelized branch-and-bound code based on RLT level 3 was developed by Roth along with Hahn, Hightower, Saltzman and Guignard. Each of these methods has advantages and strengths: the improved strength of higher-level bounds is offset by dramatically increased memory requirements.

We describe Hahn's FORTRAN code and Roth's C code implementing these ideas. These codes will be available through COIN-OR.

09:10
RLT-POS: Reformulation-Linearization Technique-based Optimization Software for Solving Polynomial Programming Problems

ABSTRACT. In this talk, we introduce a Reformulation-Linearization Technique-based open-source optimization software for solving polynomial programming problems (RLT-POS). We present algorithms and mechanisms that form the backbone of RLT-POS, including constraint filtering techniques, reduced RLT representations, and semidefinite cuts. When implemented individually, each model enhancement has been shown to significantly improve the performance of the standard RLT procedure. However, the coordination between different model enhancement techniques becomes critical for an improved overall performance since special structures in the original formulation that work in favor of a particular technique might be lost after implementing some other model enhancement. More specifically, we discuss the coordination between 1) constraint elimination via filtering techniques and reduced RLT representations, and 2) semidefinite cuts for sparse problems. We present computational results using instances from the literature as well as randomly generated problems to demonstrate the improvement over a standard RLT implementation and to compare the performances of the software packages BARON, COUENNE, and SparsePOP with RLT-POS.

09:30
A Two-Stage Partitioning Scheme for the Min-Max K Windy Rural Postman Problem
SPEAKER: Oliver Lum

ABSTRACT. We develop a two-stage heuristic for the min-max k windy rural postman problem. Our heuristic is simple algorithmically, easy to code, and adheres to a modular metaheuristic framework (i.e., any partitioning algorithm may be used in the assignment stage and any solver for the windy rural postman problem may be used in the routing stage). We report computational results over a set of test instances that we derive directly from real-world street networks. In addition, we provide an open-source arc-routing Java library that has a flexible graph architecture with solvers for several uncapacitated arc-routing problems and the ability to dynamically generate and visualize actual street networks. The project is hosted at https://github.com/Olibear/ArcRoutingLibrary.

08:30-10:00 Session 14F: Homeland Security: Cyber and Disaster Models
Location: Salon H
08:30
Dynamic Modeling for Arctic Resource Allocation

ABSTRACT. Interest in energy prospects has increased the potential for an oil spill incident in the Alaskan Arctic. A dynamic network expansion problem with stochastic scenario considerations is proposed to assess oil spill response resource allocation policies. The focus is on completing the task lists required for all potential spills, with the objective of minimizing weighted task completion times. Stochastic programming solution methods are employed, and observations and performance results are discussed.

08:50
Optimizing Forward Surgical Team Operating Room Scheduling for Mass Casualty Events

ABSTRACT. We solve the Forward Surgical Team (FST) Operating Room (OR) Scheduling Problem for Mass Casualty (MASCAL) events to determine the optimal OR surgical schedule for a single FST given deterministic, sequence-dependent OR setup times, so as to minimize the total length of the schedule by minimizing the total surgical setup time. We perform a two-part computational experiment to solve the problem of interest and compare four different methods (dynamic programming or DP, branch and bound or B&B, integer linear programming or ILP, and a direct permutation heuristic or DPH) in terms of computational performance. The results of the experiment suggest that B&B performed best, with the lowest computation times, while the DPH performed worst in terms of optimality gap. The DPH performed extremely well without time limitations and was easiest to implement using spreadsheet modeling, so we recommend its use for the future development of a decision-support tool for military medical planners.

09:10
A stochastic CVaR approach to disaster relief with social costs
SPEAKER: Amy Givler

ABSTRACT. In many disaster relief problem formulations, social costs, i.e., costs incurred by the population suffering from the disaster, are neglected. This stochastic scenario-based formulation focuses on the pre-positioning of points of distribution (PODs) while incorporating social costs in the form of walking costs. Additionally, the risk measure used for these social costs is conditional value at risk (CVaR), a convex, monotonic risk measure that focuses on the expectation over the worst scenarios.
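
For reference, the standard Rockafellar-Uryasev representation of CVaR at level \alpha for a random loss X (a textbook definition, not specific to this talk's model) is

    \mathrm{CVaR}_\alpha(X) = \min_{t \in \mathbb{R}} \left\{ t + \frac{1}{1-\alpha} \mathbb{E}\big[ (X - t)^+ \big] \right\},

i.e., roughly the expected loss in the worst (1-\alpha) fraction of scenarios. In a finite scenario-based formulation, the expectation becomes a weighted sum and the (X - t)^+ terms are linearized with auxiliary variables, which keeps the overall program convex.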

08:30-10:00 Session 14G: Optimization: Continuous Optimization and Factorizations
Location: Salon J
08:30
Complementarity Formulation for l0 Norm Minimization Problems
SPEAKER: Xin Shen

ABSTRACT. Recently, l0-norm minimization has become a remarkably active area of research in optimization. The problem is NP-hard, and replacing the l0 norm by the l1 norm is a common way to obtain an approximate solution. We consider an alternative way of formulating the problem as a mathematical program with complementarity constraints. We further discuss the completely positive relaxation and compare the properties of various formulations, including stationarity conditions and local and global optimality.

08:50
Roundoff-Error-Free Algorithms for Solving Linear Systems via Cholesky and LU Factorizations

ABSTRACT. LU and Cholesky factorizations play a central role in solving linear programs and several other classes of mathematical programs. In many documented cases, the roundoff errors accrued during the construction and implementation of these factorizations lead to the misclassification of feasible problems as infeasible and vice versa (Dhiflaoui et al. 2003). Hence, reducing these roundoff errors or eliminating them altogether is imperative to guarantee the correctness of the solutions provided by optimization solvers. In order to achieve this goal without having to utilize rational arithmetic, we introduce two roundoff-error-free factorizations that require storing the same number of elements and performing a similar number of operations as the traditional LU and Cholesky factorizations. Additionally, we present supplementary roundoff-error-free forward and backward substitution algorithms, thereby providing a complete tool set for solving systems of linear equations exactly and efficiently. An important property of the featured factorizations and substitution algorithms is that each individual coefficient's word length (the number of digits required to express it) is bounded polynomially. Unlike the rational-arithmetic methods used in practice to solve linear systems exactly, however, the algorithms presented herein do not require gcd calculations to bound the word length. We also derive various other related theoretical results.
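
To illustrate the flavor of such exact methods, here is a minimal sketch of classic fraction-free (Bareiss) elimination, which keeps all intermediate entries integral using exact divisions and no gcd computations; it is illustrative only and is not the roundoff-error-free factorization of the talk.

    from fractions import Fraction

    def bareiss_solve(A, b):
        """Solve A x = b exactly for integer A and b via fraction-free
        (Bareiss) elimination. Illustrative sketch only."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
        prev = 1  # previous pivot; every // below is an exact integer division
        for k in range(n):
            if M[k][k] == 0:  # simple pivot search to avoid a zero pivot
                swap = next(i for i in range(k + 1, n) if M[i][k] != 0)
                M[k], M[swap] = M[swap], M[k]
            for i in range(k + 1, n):
                for j in range(k + 1, n + 1):
                    M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
                M[i][k] = 0
            prev = M[k][k]
        x = [Fraction(0)] * n  # exact rational back substitution
        for i in range(n - 1, -1, -1):
            s = Fraction(M[i][n]) - sum(M[i][j] * x[j] for j in range(i + 1, n))
            x[i] = s / M[i][i]
        return x

    print(bareiss_solve([[2, 1], [1, 3]], [3, 5]))  # [Fraction(4, 5), Fraction(7, 5)]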

09:10
Studying Constrained Clustering Problems using Homotopy Maps

ABSTRACT. Many algorithms for constrained clustering have been developed in the literature that aim to balance the vector quantization requirements of cluster prototypes against the discrete satisfaction requirements of constraint (must-link or must-not-link) sets. Significant research has been devoted to designing new algorithms for constrained clustering and to understanding when constraints help clustering. However, no method exists to systematically characterize solution spaces as constraints are gently introduced, or to assist practitioners in choosing a sweet spot between vector quantization and constraint satisfaction. We present a homotopy method that can smoothly track solutions from unconstrained to constrained formulations of clustering. Beginning the homotopy zero curve tracking where the solution is (fairly) well understood, the curve can then be tracked into regions where there is only a qualitative understanding of the solution space, finding multiple local minima along the way. Through experiments, we demonstrate how our homotopy method helps identify better tradeoffs and reveals insight into the structure of solution spaces not obtainable using pointwise exploration of parameters.

09:30
A regularized SQP method for degenerate equality-constrained optimization

ABSTRACT. In nonlinear optimization, failure of a constraint qualification leads to computational difficulties. In particular, the linear systems used to compute search directions become singular. We establish that appropriately-formulated augmented Lagrangian methods are intimately connected to stabilized sequential quadratic programming (SQP) methods in which all constraints are systematically regularized. Global and fast local convergence of our algorithm are guaranteed under rather general conditions. We show how the linear systems can be solved efficiently using direct or appropriate iterative methods. Numerical experiments are presented on a subset of the CUTEr collection.

10:30-12:00 Session 16A: Stochastic Optimization: Optimal Learning in Stochastic Search
Location: Salon A
10:30
Optimal Learning with Nonlinear Belief Models
SPEAKER: Warren Powell

ABSTRACT. The knowledge gradient maximizes the value of information from an experiment, which involves computing the expectation of a maximum of the posterior of a function following the experiment. This calculation becomes expensive when the belief model is nonlinear in the parameters. We describe the use of a model based on discrete priors which can be easily updated, but introduces discretization errors. Numerical work drawn from an application in materials science is reported.
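
For reference, the knowledge gradient of measuring alternative x after n observations is commonly written (standard ranking-and-selection notation, not symbols specific to this talk) as

    \nu^{KG,n}_x = \mathbb{E}\left[ \max_{x'} \mu^{n+1}_{x'} \,\middle|\, S^n, x^n = x \right] - \max_{x'} \mu^{n}_{x'},

where \mu^n is the posterior mean given the belief state S^n. The expectation of the maximum of the updated posterior is exactly the quantity that becomes expensive when the belief model is nonlinear in its parameters; a discrete prior makes the update of S^n a cheap reweighting, at the cost of discretization error.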

10:50
Information Filtering for arXiv.org: Bandits, Exploration vs. Exploitation, and the Cold Start Problem
SPEAKER: Peter Frazier

ABSTRACT. We consider information filtering, in which we face a stream of items too voluminous to process by hand (e.g., scientific articles, blog posts, emails), and must rely on a computer system to automatically filter out irrelevant items. Such systems face the exploration vs. exploitation tradeoff, in which it may be beneficial to present an item despite a low probability of relevance, just to learn about future items with similar content. We present a Bayesian sequential decision-making model of this problem, and provide a decomposition technique that allows solving the resulting Markov decision process to optimality. We show that the resulting method is especially useful when facing the cold start problem, i.e., when filtering items for new users without a long history of past interactions. We then present initial results from an ongoing rollout of this information filtering method for an influential repository of scientific articles, arXiv.org.

11:10
A Dividing Rectangles Algorithm for Stochastic Simulation Optimization
SPEAKER: Paul Nicholas

ABSTRACT. Many important real-world optimization problems are stochastic, non-linear, non-differentiable, and non-convex. To solve these difficult problems, practitioners often rely on black-box optimization methods which make no or few assumptions of internal problem structure. The Dividing Rectangles (DIRECT) algorithm has proven popular for deterministic, low-dimensional black-box problems. The algorithm does not require calculation of a gradient, is relatively simple (having only one tuning parameter), and is widely applicable, requiring only that the function be Lipschitz continuous. We describe a new DIRECT algorithm to support stochastic simulation optimization that retains the global convergence properties of the original. Our algorithm iterates between searching for new solutions and reducing the variance of existing solutions using Bayesian sample information. The algorithm intelligently allocates available computational resources using the Optimal Computing Budget Allocation (OCBA) methodology, and includes several heuristic mechanisms which in practice increase the likelihood of quickly obtaining desirable solutions. In computational experiments, we find this new algorithm can provide better performance than the only other known modification of DIRECT for stochastic problems. We provide the results of a realistic application where we use the algorithm to design a wireless mesh network supporting uncertain traffic demands.

11:30
Optimal Resource Allocation with Semi-Bandit Feedback
SPEAKER: Tor Lattimore

ABSTRACT. We study a sequential resource allocation problem involving a fixed number of recurring jobs. At each timestep the manager should distribute available resources among the jobs in order to maximise the expected number of completed jobs. Allocating more resources to a given job increases the probability that it completes, but with a cut-off. Specifically, we assume a linear model where the probability increases linearly until it equals one, after which allocating additional resources is wasteful. We assume the difficulty of each job is unknown, present the first algorithm for this problem, and prove upper and lower bounds on its regret. Despite its apparent simplicity, the problem has a rich structure: we show that an appropriate optimistic algorithm can improve its learning speed dramatically beyond the results one normally expects for similar problems as the problem becomes resource-laden.
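
One way to write the linear model sketched above (an illustrative parameterization, not necessarily the paper's exact one): with total resource M split into allocations m_1, ..., m_K across the K jobs, job i with unknown difficulty \nu_i completes with probability

    \mathbb{P}(\text{job } i \text{ completes}) = \min\{ 1,\ m_i / \nu_i \}, \qquad \sum_{i=1}^K m_i \le M,

so any allocation beyond m_i = \nu_i is wasted, and the learner must estimate the difficulties from the observed per-job completions (the semi-bandit feedback).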

10:30-12:00 Session 16B: Health Applications: Pharmaceutical
Location: Salon B
10:30
Optimization Model for Ambulance Redeployment with Fair Workload Consideration of EMS Providers

ABSTRACT. Available ambulances are redeployed dynamically to compensate for the loss in coverage due to the unavailability of busy vehicles. We propose an optimization model to find the best redeployment policy that maximizes coverage while keeping the additional workload of EMS providers due to relocation moves below a certain threshold. We study and verify the performance of our approach on a real data set from Mecklenburg County, NC.

10:50
Optimal Design of the Annual Influenza Vaccine with Manufacturing Autonomy
SPEAKER: Osman Ozaltin

ABSTRACT. Seasonal influenza is a major public health concern, and the first line of defense is the flu shot. We propose a bilevel multistage stochastic mixed-integer program that maximizes societal benefit of the flu shot under autonomous profit maximizing manufacturers. Calibrated over publicly available data, our model returns the optimal flu shot composition and timing in a stochastic and dynamic environment. We derive analytical results, and perform extensive numerical experiments.

11:10
Robust optimization with multiple ranges: theory and application to pharmaceutical project selection

ABSTRACT. We present a robust optimization approach for the case where the uncertainty in objective coefficients is described using multiple ranges for each coefficient. This setting arises when the value of the uncertain coefficients, such as cash flows, depends on an underlying random variable, such as the effectiveness of a new drug. Traditional robust optimization with a single range per coefficient would require very large ranges in this case and lead to overly conservative results. In our approach, the decision-maker limits the number of coefficients that fall within each range; he can also limit the number of coefficients that deviate from their nominal value in a given range. This is particularly useful for the manager of a pharmaceutical company who aims at constructing a portfolio of R&D projects, or drugs to be developed and tested for commercialization. Modeling multiple ranges requires the use of binary variables in the uncertainty set. We show how to develop tractable reformulations using a concept called total unimodularity and apply our approach to an R&D project selection problem with uncertain cash flows. Furthermore, we develop a robust ranking heuristic, where the manager ranks the projects according to densities while incorporating the budgets of uncertainty but without requiring any optimization procedure, to help the manager gain insight into the solution.

10:30-12:00 Session 16C: Integer Programming: Recent Developments in Integer Programming I
Location: Salon D
10:30
Enhancing the reliability of MIP branching decisions by using sample variance of pseudo costs
SPEAKER: Gregor Hendel

ABSTRACT. The selection of a branching variable is a crucial ingredient for general purpose Mixed Integer Programming solvers. In reliability branching, a history-based estimate of the expected gain of a variable is obtained through a fixed number of strong-branching look-aheads on this variable, before resorting to pseudo cost estimations. Such fixed-number thresholds may lead to ineffective branching decisions on problems with varying objective gains. We address the issue of determining a reliability threshold by taking into account the sample variance of past observations for each variable individually. We present computational results with the MIP-solver SCIP.
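
A minimal sketch of the variance-based idea (illustrative only; not SCIP's exact rule): treat a variable's pseudo-cost history as reliable once a confidence interval around its mean objective gain is narrow relative to the mean, rather than after a fixed number of strong-branching look-aheads.

    import statistics

    def is_reliable(gains, t=2.0, rel_tol=0.2):
        """Return True once the t-scaled standard error of the mean
        objective gain is small relative to the mean itself, so strong
        branching on this variable can stop. Illustrative sketch only."""
        n = len(gains)
        if n < 2:
            return False  # too few observations to estimate a variance
        mean = statistics.fmean(gains)
        stderr = statistics.stdev(gains) / n ** 0.5
        return t * stderr <= rel_tol * abs(mean)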

10:50
Using and reusing dual information in MILP
SPEAKER: Imre Pólik

ABSTRACT. We present several ways to use and reuse dual information in the context of mixed-integer linear programming. We start from basic reduced cost fixing and progress to new methods to tighten and fix variables. The methods are quick, easily implemented in a branch-and-bound tree, and provide extra tightenings even in the absence of an incumbent solution or a trivial upper bound. We have implemented the techniques in the SAS/OR mixed-integer optimization solver and present detailed computational experiments to illustrate their efficiency.
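
For context, the textbook reduced cost fixing that the talk takes as its starting point can be sketched as follows (a generic minimization-MILP sketch, not the SAS/OR implementation): with LP bound z_lp, incumbent value z_inc, and reduced costs from the optimal LP basis, integer variable bounds can be tightened without cutting off any improving solution.

    import math

    def reduced_cost_fixing(z_lp, z_inc, lb, ub, redcost, eps=1e-9):
        """Tighten integer variable bounds: moving x_j away from the bound
        where it is nonbasic costs |redcost[j]| per unit of LP objective,
        which may not exceed the gap z_inc - z_lp. Illustrative sketch."""
        gap = z_inc - z_lp
        new_lb, new_ub = list(lb), list(ub)
        for j, d in enumerate(redcost):
            if d > eps:      # nonbasic at its lower bound
                new_ub[j] = min(ub[j], lb[j] + math.floor(gap / d + eps))
            elif d < -eps:   # nonbasic at its upper bound
                new_lb[j] = max(lb[j], ub[j] - math.floor(gap / -d + eps))
        return new_lb, new_ub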

11:10
The worst case analysis of strong knapsack facets
SPEAKER: Sangho Shim

ABSTRACT. In this presentation we identify strong facet defining inequalities for the master knapsack polytope. Our computational experiments for small master knapsack problems show that the 1/k-facets for small values of k (k <= 4) are strong facets for the knapsack polytope. We then show that these facets are strong in the sense that their removal from the master knapsack polytope significantly weakens the resulting LP-relaxation in the worst case. We show that the 1/k-facets for k = 1 or 2 are the strongest in that their removal from the master knapsack polytope weakens the LP-relaxation by a factor of 3/2 in the worst case. We then show that the 1/k-facets with k = 3 or 4 are the next strongest. We also show that the strength of the 1/k-facets weakens as k grows. In general, we are able to show that the 1/k-facets with k even are stronger than the 1/k-facets with k odd.

11:30
On the structure and separation of knapsack facets
SPEAKER: Sunil Chopra

ABSTRACT. Through the use of shooting experiments and worst-case analysis, Shim, Cao, and Chopra have demonstrated the strength of 1/k-facets for the master knapsack polytope. The 1/k-facets are facets whose coefficients are multiples of 1/k, and they are particularly strong for smaller values of k. In this work we study the structure of the 1/3- and 1/4-facets. For the general integer knapsack problem, or knapsack sub-problem, with l variables, the number of 1/k-facets is O(l^(k/2)). We describe algorithms for separating 1/k-facets for the general integer knapsack problem for some small values of k. Additionally, we draw new insights from additional shooting experiments, including the observation that facets arising from Gomory's homomorphic lifting are strong.

10:30-12:00 Session 16D: Data Mining: Predictive Models
Chair: Eric Laber
Location: Salon E
10:30
Data-Driven Decision Strategies for Large Spatio-Temporal Decision Problems
SPEAKER: Eric Laber

ABSTRACT. Large spatio-temporal decision problems are prevalent in science, medicine, technology, and security. Estimation of an optimal decision strategy in spatio-temporal decision problems using either experimental or observational data is a fundamental and tremendously important problem; e.g., an effective strategy for controlling the spread of white-nose syndrome in bats has the potential to save billions of dollars per annum in agriculture costs. Nevertheless, a rigorous, scalable statistical framework for estimation of and inference for an optimal decision strategy in large spatio-temporal decision problems is lacking. Furthermore, an effective statistical framework must be able to account for interference induced by spatial dependence, incorporate evolving cost and logistical constraints, and produce interpretable results that can be scrutinized by domain experts. We propose an estimation framework that adaptively combines simulation-optimization methods based on postulated systems dynamics models with semi-parametric estimators built from reinforcement learning policy-search algorithms. The proposed framework combines these two approaches to balance bias and variance in such a way that expected performance is optimized. We illustrate the proposed method using data from the spread of white-nose syndrome.

10:50
High Performance Optimization for Large-Scale Dense SVM and Quantile Regression
SPEAKER: Yan Xu

ABSTRACT. SVM classifiers and Quantile Regression require solutions to large-scale quadratic optimization problems. In special cases, such as linear or low-degree polynomial kernels, a low-rank factorization of the Hessian is available. Such factorizations create equivalent problem formulations amenable to parallel computing environments via interior point methods. By exploiting the SAS High Performance computing infrastructure, we efficiently solve dense problems with billions of observations.

10:30-12:00 Session 16E: Heuristics II
Location: Salon G
10:30
Building Design Multi-Objective Optimization with Active Learning

ABSTRACT. Residential and commercial buildings are responsible for about 40% of primary energy consumption in the United States, hence improving their energy efficiency can have important sustainability benefits. The design of a building has a tremendous effect on its energy profile, and recently there has been increased interest in developing optimization methods that support the design of high performance buildings. Previous approaches are either based on simulation optimization or on training an accurate predictive model that is queried during the optimization. Building design optimization falls in the broad and important family of applications that involve multi-objective optimization in large design spaces with expensive evaluations. We propose a method that more tightly integrates the machine learning and optimization components by employing active learning during optimization. In particular, we use a Gaussian Process (GP) model for the prediction task, and the multi-objective genetic algorithm NSGA-II for the optimization, interleaved with GP active learning based on variance reduction. We develop a comprehensive and publicly available benchmark for building design optimization, with a design space exceeding 10^9. We evaluate 5 machine learning approaches on our dataset, and show that the GP model is competitive, in addition to being well-suited for the active learning setting. We compare our optimization approach against the 2-stage approach and simulation optimization. Our results show that our approach produces solutions at the Pareto frontier compared to the other two approaches, while using only a fraction of the simulations and time.

10:50
Heuristic Solution Approaches for the Maximum MinSum Dispersion Problem

ABSTRACT. The Maximum Minsum Dispersion Problem (Max-Minsum DP) is a strongly NP-Hard problem that belongs to the family of equitable dispersion problems. When dealing with dispersion, the operations research literature has focused on optimizing efficiency-based objectives while neglecting, for the most part, measures of equity. The most common efficiency-based functions are the sum of the inter-element distances or the minimum inter-element distance. Equitable dispersion problems, on the other hand, attempt to address the balance between efficiency and equity when selecting a subset of elements from a larger set. The objective of the Max-Minsum DP is to maximize the minimum aggregate dispersion among the chosen elements. We develop tabu search and GRASP solution procedures for this problem and compare them against the best in the literature. We also apply LocalSolver, a commercially available black-box optimizer, to compare our results. Our computational experiments show that we are able to establish new benchmarks in the solution of the Max-Minsum DP.

11:10
A Framework for the Systematic Evaluation of Computational Methods

ABSTRACT. Papers describing heuristics for NP-hard problems often use different computers and termination criteria for experiments, making it difficult to compare heuristics. Standard test beds are often small and homogeneous; given the range of applications for many NP-hard problems, these test beds may be insufficient to identify the best-performing heuristic for a new problem instance. We propose an approach that addresses these issues, testing our methodology with the MAX-CUT problem. We identified 37 highly cited heuristics for the MAX-CUT and Quadratic Unconstrained Binary Optimization problems, implementing each heuristic using shared data structures and code wherever possible. We also built a large, heterogeneous test bed of approximately 3,200 problem instances from the literature and with random graph generators. With uniform, publicly available cloud computing hardware, we ran each heuristic on each graph using the same graph-specific runtime limit. By relying on cloud infrastructure, this approach enables others to test new heuristics on the same graphs and hardware without re-implementing or re-testing the other 37 heuristics. Using 91 quick-to-compute graph properties, we predict the best heuristic for a previously unseen problem instance, identifying the best approaches for different types of instances and creating a hyper-heuristic that outperforms any single heuristic.

10:30-12:00 Session 16F: Network Applications I
Location: Salon H
10:30
Network Design with Compression: Complexity and Algorithms
SPEAKER: Arie Koster

ABSTRACT. Recent advances in communication technology make it possible to compress data streams in communication networks by deploying physical devices (caches) at routers, yielding more efficient usage of link capacities. This gives rise to the network design problem with compression (NDPC), a generalization of the classical Network Design problem. In this paper, we compare both problems, focusing on the computational complexity and analyzing the differences induced by the compression aspect.

We show that the subproblem of adding compression, i.e., the compressor placement problem (CPP), is already weakly NP-hard, even on instances where Network Design alone is easy. We conclude with a pseudopolynomial algorithm for tree instances and a restricted polynomial case.

10:50
Maximum-Demand Rectangular Location Problem
SPEAKER: Manish Bansal

ABSTRACT. We introduce the Maximum-Demand Rectangular Location Problem (MDRLP) as a generalization of the planar maximum coverage location problem with rectilinear distance. The MDRLP provides a flexible framework to model continuous demand over an area and partial coverage of demand zones. The MDRLP is the problem of positioning p rectangular service zones (SZs) on the two-dimensional plane to partially cover a set of existing rectangular demand zones so as to maximize the total covered demand. This problem has important applications in facility location and camera control. We show that the MDRLP is NP-hard if the number of SZs is part of the input. We then present an improved algorithm for the single-SZ MDRLP, which is at least two times faster than the existing Plateau Vertex Traversal algorithm. Next, we study the multi-SZ MDRLP for the first time, prove theoretical properties that significantly reduce the search space, and present an exact algorithm to solve it. Our computational experiments show that this exact algorithm can solve relatively large instances of the problem in a short time. With this exact algorithm, we can find optimal solutions to the problem and use them to benchmark the performance of faster heuristic algorithms for the MDRLP.

11:10
Weighted Target Set Selection
SPEAKER: Rui Zhang

ABSTRACT. The Target Set Selection (TSS) problem is a fundamental problem about the diffusion of influence in social networks. Starting with Kempe, Kleinberg, and Tardos (2003), it has been widely studied in the literature. The TSS problem has applications in many areas of economics, business, sociology, and medicine, including viral marketing and epidemiology. In our work, we consider the weighted version of it (the WTSS problem). The weights in the WTSS problem model the fact that the cost or effort to activate different nodes can vary. The TSS problem is known to be NP-hard, and earlier work on the TSS problem has focused on approximation. Motivated by the desire to develop mathematical programming approaches to solve the WTSS problem, we focus on developing a strong formulation for the WTSS problem. We present a tight and compact extended formulation for the WTSS problem on trees. In other words, we provide an extended linear programming formulation whose projection onto the space of the natural node selection variables is the convex hull of feasible target set vectors. Furthermore, based on this strong formulation, a branch-and-cut approach is proposed for general networks. Computational results will be presented.

10:30-12:00 Session 16G: Optimization: Integer Programming, Networks, and Constraint Programming
Location: Salon J
10:30
Continuous and Discrete Time Label Setting Algorithms for the Time Dependent Bi-criteria Shortest Path Problem

ABSTRACT. In this paper we address the bi-criteria shortest path problem in a network with time-dependent cost functions, a travel time constraint, and zero waiting time at nodes. We propose a discrete time forward label setting algorithm which outperforms other approaches in the literature. We also propose a second, more general, forward label setting algorithm that can handle both discrete and continuous travel time functions. A speed-up technique for the proposed algorithms is also provided. The algorithms are compared with a backward algorithm, the best algorithm in the literature so far for the problem at hand, and the experimental results confirm the higher efficiency of the proposed algorithms.

10:50
Efficient storage of Pareto solutions in biobjective mixed integer programming

ABSTRACT. Many of the techniques for solving biobjective mixed integer linear programs (BOMILP) are iterative processes which utilize solutions discovered during early iterations to aid in the discovery of improved solutions during later iterations. Thus, it is highly desirable to efficiently store the nondominated subset of a given set of solutions. To this end, we present a new data structure in the form of a modified binary tree. The structure takes points and line segments as input and stores the nondominated subset of the input. We perform two computational experiments. The results of the first show that this structure processes inserted data faster than alternative structures currently implemented in the literature. Results of the second experiment show that when our structure is utilized inside fathoming procedures for biobjective branch-and-bound (BB), the running times for BB are reduced in most cases.
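
As a point of reference for what such a structure must do, here is a naive O(n)-per-insertion baseline for points only (the talk's modified binary tree also handles line segments and processes insertions faster); the notation is generic, for biobjective minimization.

    def update_archive(archive, p):
        """Insert candidate point p = (f1, f2) into a list of mutually
        nondominated points, keeping only the nondominated subset.
        Naive baseline, illustrative only."""
        if any(q[0] <= p[0] and q[1] <= p[1] for q in archive):
            return archive  # p is dominated by a stored point: discard it
        # otherwise drop every stored point that p dominates, then add p
        return [q for q in archive
                if not (p[0] <= q[0] and p[1] <= q[1])] + [p]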

11:10
A Formal Model for Optimizing Dynamic Service Composition
SPEAKER: John Mcdowall

ABSTRACT. Run-time service composition has been a goal of the Service-Oriented Architecture paradigm for many years, and several projects have demonstrated means of doing this composition. However, none of these efforts has focused on optimizing the composition of available services to ensure the user is getting a composition that meets his or her needs and falls within the constraints the user must conform to. This paper describes a model for optimizing service composition, beginning with a formal definition of the service composition problem and culminating in an implementation using the Optimization Programming Language.

11:30
Hybrid Decomposition Approach for Large Scale Supply Chains Production-Distribution Coordination

ABSTRACT. Industrial supply chains today coordinate multiple decision-making problems, including production-distribution models, strategic-planning models, and planning-scheduling problems. An industrial gases supply chain coordinates production and distribution decisions at multiple plants and depots while satisfying several customer demands. Plants operate air separation units with high electricity consumption to produce gaseous and liquid products. Distribution is carried out by pipeline for gaseous products and by truck for liquid products. Moreover, trucks at a given depot can deliver product from multiple sources. As the number of sources and customers increases, the selection of the possible routes to be included in the formulation becomes a critical issue. Previous work proposed a mixed-integer linear programming (MILP) formulation to minimize the overall production and distribution cost of industrial gases supply chains. While that model provides optimal production-distribution decision making, the computational effort required remains an issue when industrial-size examples are studied. This work describes a hybrid Lagrangean-rolling horizon decomposition approach. The rolling horizon framework is used to provide rigorous lower bounds for the temporally decomposed problems, yielding feasible solutions for the upper bound of the Lagrangean approach. Test cases show that the implementation achieves a significant reduction in the computational burden.

10:30-12:00 Session 16H: Homeland Security: Cyber Security and Intelligence
Chair: Gio Kao
Location: Rappahannock
10:30
Maximal coverage problems for reducing cyber-security vulnerabilities
SPEAKER: Laura McLay

ABSTRACT. This talk will discuss a cyber-security planning application for securing the global information technology (IT) supply chain from the myriad cyber-security risks and vulnerabilities that exist. We propose a budgeted maximal covering problem for selecting cyber-security mitigations and introduce integer programming models and greedy approximation algorithms for identifying optimal and near-optimal solutions.

10:50
Supply Chain Risk Assessment Decision Framework
SPEAKER: Gio Kao

ABSTRACT. The globalization of today’s complex supply chain, where raw materials transform into end products, generates a complex and dynamic supply and demand network flow of information, services, products, and funds throughout the lifecycle phases of a system.

These semi-structured and loosely organized networks of complex systems provide an abundance of opportunities for adversaries to conduct cyber-physical attacks on our supply chains. The emerging security threat could degrade the integrity and availability of sensitive and critical government data, systems, and infrastructures. COTS and even GOTS products are often designed, developed, and manufactured overseas. Counterfeit items have been found in both the commercial and government sectors. Supply chain attacks can be initiated at any point during the product or system lifecycle and can have catastrophic outcomes such as loss of life, mission failure, economic disaster, and social and political turmoil.

This work discusses the development of a supply chain risk assessment decision framework that will assist decision makers and stakeholders in performing risk-based cost-benefit prioritization of security investments to manage supply chain risk. Key aspects of our framework include the hierarchical supply chain representation, vulnerability and mitigation modeling, risk assessment, and optimization. This work is part of a long-term research effort on supply chain decision analytics for the trusted systems and communications research challenge.

11:10
Information Retrieval Customized for Intelligence Collection

ABSTRACT. Intelligence processors are faced with a glut of information. Typically, their job consists of sifting through thousands of intelligence reports to compile data relevant to their query. We create custom information-retrieval algorithms for this intelligence collection setting. Unlike modern search engines, our algorithms exploit knowledge of the interest, experience, and organization of the user to guide their search for relevant information. The algorithms we describe show excellent theoretical and simulated performance. We describe an ongoing effort to test the algorithms with human users.

13:00-14:15 Session 18: Tutorial Talk II
Location: Salon C
13:00
Solution Techniques for Large, Difficult Weighted Graph Problems and Their Use in an Important Policy Environment
SPEAKER: Karla Hoffman

ABSTRACT. Within the next year, the Federal Communications Commission will be holding the first ever "incentive auction," in which a government asks TV stations to relinquish rights for payment and then, having obtained interference-free spectrum, sells the acquired spectrum to the wireless communications industry. The auction is expected to be the biggest spectrum auction ever, with estimates that it will generate far more revenue than the 38 billion dollars that the 2014 FCC Advanced Wireless Service (AWS) auction yielded. The proposed multi-round/multi-stage auction requires the solution of over a million satisfiability problems, where each problem indicates whether or not it is possible to pack a given set of stations into a given bandwidth of spectrum. Beyond these important problems, optimization has played an important role in informing policy on the impact of specific auction rules, on setting the amount of spectrum that will be auctioned, on determining the final channel assignments for the auction, and on determining the sequencing of the relocation of TV stations to new channel assignments after the auction ends. The structure of each of these problems is a generalization of the weighted graph coloring problem. The problems have hundreds of thousands of variables, and the constraint set can exceed 3 million rows. We will present the various approaches that were used to obtain optimal and near-optimal solutions to these difficult problems: both heuristics and decomposition methods. At the conclusion, we will provide information about accessing both the publicly available datasets and the mathematical formulations for all problems. This work is in collaboration with James Costa, Tony Coudert, Brian Smith, and Rudy Sultana of Computech, Bethesda, Maryland, and Michael Trick, Carnegie Mellon University.

14:30-16:00 Session 19A: Stochastic Optimization VII
Location: Salon A
14:30
Tight second-stage MIP formulations for two-stage stochastic mixed integer programs
SPEAKER: Manish Bansal

ABSTRACT. We study two-stage stochastic mixed integer programs (SMIPs) and show how a tight (extended) formulation of a general second-stage MIP can be utilized to develop strong cuts (a priori) for the pure integer first stage of the problem. As a special case, we consider an SMIP that has a generalization of the well-known continuous mixing set in the second stage. This formulation is closely related to the two-stage stochastic formulation of the discrete capacitated lot-sizing problem (DLS) with uncertain demands (or costs). We then use our results to develop valid inequalities for these stochastic DLS problems and present our computational results.

14:50
A computational study of a Benders algorithm for solving two-stage stochastic mixed-integer linear programs

ABSTRACT. This presentation focuses on solving two-stage stochastic programs with first-stage mixed integer decision variables using an integer Benders decomposition algorithm. We report a computational study on a large set of benchmark problems and discuss our implemented cut management system: (i) cut aggregation, (ii) cut consolidation, (iii) cut recruitment, and (iv) mixed integer rounding (MIR) strengthening. A practical algorithm based on this method is implemented in a software package named iOptimize. We report computational results for a wide range of benchmark problems from POSTS, SIPLIB, the Test-Problem Collection, and a collection of difficult test problems compiled by us.

15:10
Dynamic routing problem under uncertain demands and travel times

ABSTRACT. We consider the dynamic routing problem under uncertain demands and travel times. The objective is to minimize the routing cost and the waiting time to service dependent demands. In the static case, routes are determined a priori, and corrective decisions to adjust them (recourses) are allowed as uncertainty in the parameters is revealed to decision-makers sequentially over multiple time periods. The dynamic routing problem, by contrast, considers online route design: instead of designing robust routes subject to the minimum number of changes (or minimum cost) during their execution, the goal is to build routes that exploit the stochastic nature of the information revealed dynamically during the online routing decision process. In this paper, we present a mathematical formulation for this problem and a computational study on benchmark instances.

15:30
Risk-averse shortest path problems: a computational study

ABSTRACT. We study a variant of the shortest path problem in which travel times are uncertain and the objective function is a risk measure. In general, we cannot use classical shortest path algorithms to solve our problem, because the risk-averse objective loses the properties needed by existing polynomial-time algorithms. We use several decomposition techniques to solve the problem with CVaR and entropic risk measures, and we present computational results on artificial and real networks.
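
For context, the CVaR objective mentioned here admits the standard Rockafellar-Uryasev minimization form (our notation):

    \[
      \mathrm{CVaR}_{\alpha}(X) = \min_{t \in \mathbb{R}} \Bigl\{ t + \frac{1}{1-\alpha}\,\mathbb{E}\bigl[(X - t)^{+}\bigr] \Bigr\},
    \]

which becomes linear once the expectation is replaced by a scenario average, and is therefore a natural target for decomposition approaches.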

14:30-16:00 Session 19B: Health Applications: Disease Modeling
Location: Salon B
14:30
A Mathematical Model to Estimate the Health Benefits and Costs of Diabetic Retinopathy Screening Programs

ABSTRACT. Diabetic retinopathy is a leading cause of blindness in the US, and current guidelines recommend annual ophthalmologic screening for all patients with diabetes. Compliance with screening among diabetic patients remains poor despite the availability of vision-preserving treatments, possibly because patients remain asymptomatic until the late stages of the disease and because current screening programs require a visit to a specialist. The inconvenience and costs of screening in the specialist setting have motivated interest in alternative screening programs that can be implemented in primary care settings. A discrete-time Markov model was developed to estimate the health benefits and costs of hypothetical new diabetic retinopathy screening programs. The model tracks the underlying epidemiology of diabetic retinopathy and accounts for the influence of screening and subsequent treatment on progression and vision loss. Screening attributes considered by the model include setting, frequency, level of retinopathy targeted, accuracy, failure rate, and compliance. Data to parameterize the model were obtained from publicly available sources, and the model was validated with a clinician and against published clinical and economic studies. The health benefits, costs, and cost-effectiveness of hypothetical new screening programs were estimated in comparison with current screening programs.
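
The mechanics of such a discrete-time Markov cohort model can be sketched in a few lines (illustrative only; the states and transition probabilities below are made up and are not the study's calibrated parameters):

    import numpy as np

    # States: 0 = no retinopathy, 1 = nonproliferative, 2 = proliferative, 3 = blind.
    P = np.array([
        [0.90, 0.10, 0.00, 0.00],
        [0.00, 0.85, 0.12, 0.03],
        [0.00, 0.00, 0.90, 0.10],
        [0.00, 0.00, 0.00, 1.00],
    ])

    cohort = np.array([1.0, 0.0, 0.0, 0.0])  # everyone starts disease-free
    for year in range(20):                   # annual cycles
        cohort = cohort @ P                  # advance the cohort one period

    print("State distribution after 20 years:", cohort.round(3))

Screening and treatment enter such a model by modifying the transition probabilities of detected patients.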

14:50
A Hybrid Agent-Based and DE/Probabilistic Model for Simulating Antibiotic Resistance in a Hospital Ward
SPEAKER: Barry Lawson

ABSTRACT. Serious infections due to antibiotic-resistant bacteria are pervasive, and of particular concern within hospital units due to frequent interaction among health-care workers and patients. Such nosocomial infections are difficult to eliminate because of inconsistent disinfection procedures and frequent interactions among infected persons, and because ill-chosen antibiotic treatment strategies can lead to a growth of resistant bacterial strains. Clinical studies to address these concerns have several issues, but chief among them are the effects on the patients involved. Realistic simulation models offer an attractive alternative. In this talk, we discuss a hybrid simulation model of antibiotic resistant infections in a hospital ward, combining agent-based simulation to model the inter-host interactions of patients and health-care workers with a detailed differential equations and probabilistic model of intra-host bacterial and antibiotic dynamics. We present initial benchmarking results demonstrating realistic behavior of the model, and we discuss ongoing work to extend the model, incorporating immune response modeling.

15:10
Agent-Based Mediation Modeling of a Controlled Trial to Reduce the Transmission of Multidrug-Resistant Organisms
SPEAKER: Sean Barnes

ABSTRACT. In 2012-2013, the University of Maryland School of Medicine led a 20-site randomized controlled trial to assess the benefits of universal gloves and gowns to reduce the transmission of multidrug-resistant organisms (MDROs) in acute-care hospitals. The use of universal gloves and gowns resulted in a statistically significant decrease in the acquisition rate for Methicillin-resistant Staphylococcus aureus, but the effect of the intervention, relative to secondary effects such as the hand hygiene compliance of healthcare workers and the visit rates to patients, was unclear. We develop an agent-based model and, after extensive calibration efforts, we use this model to estimate the reduction in MDRO transmission directly due to this intervention.

14:30-16:00 Session 19C: Integer Programming: Recent Developments in Integer Programming II
Location: Salon D
14:30
Sharing pseudocosts among branch-and-bound trees

ABSTRACT. Pseudocost branching is an effective and low-cost branching rule for branch-and-bound in mixed integer linear programming. It consists of keeping historical data on how the dual bound changes when branching on a particular variable; this data is then used as a score to decide which variable to branch on next.

We propose a scheme in which different branch-and-bound trees, solving a common instance in parallel, are allowed to share pseudocost information and use it to branch. We also present a computational study that shows the effect of this approach on instances from MIPLIB 2010.
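
A minimal sketch of the pseudocost bookkeeping involved (our illustration, not the authors' implementation):

    class Pseudocosts:
        def __init__(self):
            self.sums = {}    # var -> [down_sum, up_sum] of per-unit gains
            self.counts = {}  # var -> [down_count, up_count]

        def record(self, var, direction, bound_gain, frac_change):
            # direction: 0 = down-branch, 1 = up-branch.
            # frac_change: f for the down-branch, 1 - f for the up-branch,
            # where f is the fractional part of the variable's LP value.
            s = self.sums.setdefault(var, [0.0, 0.0])
            c = self.counts.setdefault(var, [0, 0])
            s[direction] += bound_gain / frac_change
            c[direction] += 1

        def score(self, var, frac):
            # Standard product rule over predicted down/up gains.
            s = self.sums.get(var, [0.0, 0.0])
            c = self.counts.get(var, [1, 1])
            down = (s[0] / max(c[0], 1)) * frac
            up = (s[1] / max(c[1], 1)) * (1.0 - frac)
            return max(down, 1e-6) * max(up, 1e-6)

In the sharing scheme proposed above, parallel trees would periodically merge these sums/counts tables so that each tree branches using the pooled history.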

14:50
New Formulation and Valid Inequalities for the DC Transmission Switching Problem
SPEAKER: Burak Kocuk

ABSTRACT. As modern transmission control and relay technologies evolve, transmission line switching has become an important option in power system operators' toolkits. Much research has focused on developing heuristic algorithms and on applications in crucial operational problems such as contingency correction and real-time dispatch. However, the mathematical theory of the underlying optimization problem has not been well studied. In this work, we first prove some complexity results; in particular, we show that transmission switching is NP-hard even on a series-parallel network. We then propose a new formulation for the transmission switching problem with DC power flow models. We characterize the convex hull of an important substructure of the new formulation and use it to generate valid inequalities that strengthen the LP relaxation of the model. We further develop a practical implementation of the proposed model. Extensive computational experiments show promising results.
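
For orientation, the standard big-M formulation that such work strengthens (not the authors' new formulation; notation is ours) couples a switching variable z_ij in {0,1} with the DC power flow equations:

    \[
      -M(1 - z_{ij}) \le f_{ij} - B_{ij}(\theta_i - \theta_j) \le M(1 - z_{ij}),
      \qquad
      -\bar{f}_{ij}\, z_{ij} \le f_{ij} \le \bar{f}_{ij}\, z_{ij},
    \]

so a switched-off line carries no flow and its angle-difference equation is relaxed. The weak LP relaxation of this big-M pair is precisely the kind of substructure that convex hull descriptions and valid inequalities can tighten.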

15:10
On Some Generalizations of Split Cuts
SPEAKER: Diego Moran

ABSTRACT. Split cuts form a well-known class of valid inequalities for mixed-integer programming problems (MIP). Cook et al. (1990) showed that the split closure of a rational polyhedron P is again a polyhedron. In this paper, we extend this result from a single rational polyhedron to the union of a finite number of rational polyhedra. We also show how this result can be used to prove that some generalizations of split cuts, namely cross cuts, also yield closures that are rational polyhedra.
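
For context, the standard definitions (our notation): given an integral disjunction (\pi, \pi_0), a split cut for a polyhedron P is any inequality valid for

    \[
      \mathrm{conv}\Bigl( \bigl(P \cap \{x : \pi^{\top} x \le \pi_0\}\bigr) \cup \bigl(P \cap \{x : \pi^{\top} x \ge \pi_0 + 1\}\bigr) \Bigr),
    \]

and the split closure is the intersection of these relaxations over all such (\pi, \pi_0); cross cuts generalize this by deriving cuts from two split disjunctions at once.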

15:30
Accelerating Scenario Decomposition Algorithms Using Forbidden Neighborhoods and Local Search
SPEAKER: Kevin Ryan

ABSTRACT. A recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions discovered by solving scenario subproblems. In this work, we instead evaluate and remove entire neighborhoods constructed from groups of solutions discovered while solving the scenario subproblems. The method is tested on well-known stochastic 0-1 programs and on large-scale stochastic optimization problems from the electrical grid.
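
The basic device for removing a binary solution and its neighborhood is a no-good cut; in our notation (which may differ from the authors'), for an evaluated solution \bar{x} the inequality

    \[
      \sum_{i :\, \bar{x}_i = 0} x_i \;+\; \sum_{i :\, \bar{x}_i = 1} (1 - x_i) \;\ge\; k + 1
    \]

excludes every binary point within Hamming distance k of \bar{x}; with k = 0 it excludes just the single solution \bar{x}.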

14:30-16:00 Session 19D: Data Mining: Model and Feature Selection
Location: Salon E
14:30
A Packing Approach to High Dimensional Inference
SPEAKER: Kai Zhang

ABSTRACT. It is important to understand the impact of dimensionality on statistical methods. We provide asymptotic results in this respect by studying the following two questions: (1) What is the distribution of the maximal spurious correlation? (2) How can we use this distribution to design inference methods that overcome the difficulties caused by spurious correlations? To answer these questions, we study high-dimensional asymptotics from a spherical cap packing perspective, i.e., we view a dataset with n observations of p variables as p vectors in n dimensions. The advantage of this approach is that it enables us to investigate the above questions for the entire range of p and n. Through this perspective, we find (1) a sharp uniform Bonferroni bound on spurious correlations for arbitrarily high p, as well as the related uniform limiting distributions when the bound is attained; and (2) a sharp uniform Bonferroni bound for a low-rank elliptically distributed vector, as well as the related uniform limiting distributions when the bound is attained. With these results, we develop a global test of model significance in Gaussian linear models with arbitrarily high p, and an inference method for low-rank structure in high-dimensional Gaussian data.
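
The spurious-correlation phenomenon is easy to reproduce numerically (our toy experiment, not the speakers' method): with n fixed, the maximum sample correlation between an outcome and p independent noise variables grows steadily with p.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    y = rng.standard_normal(n)
    yc = y - y.mean()
    for p in (10, 100, 1000, 10000):
        X = rng.standard_normal((n, p))
        Xc = X - X.mean(axis=0)
        corr = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
        print(p, round(float(np.abs(corr).max()), 3))

Even though every variable is pure noise, the printed maximum absolute correlation increases with p, which is exactly what a uniform Bonferroni-type bound must control.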

14:50
Simulation Selection for Empirical Model Comparison
SPEAKER: Qiong Zhang

ABSTRACT. We propose an efficient statistical method for empirical model comparison, i.e., the simulation procedure used to evaluate a large number of machine learning algorithms. We study the problem from two perspectives. First, we use experimental designs to construct the testing and training sets used to efficiently estimate model performance. Second, we use ranking and selection to optimally allocate the simulation budget. We show numerical results involving a large number of data sets and machine learning algorithms to demonstrate the advantage of our method.

15:10
An averaged confidence interval after model selection
SPEAKER: Ganggang Xu

ABSTRACT. The naive confidence interval after model selection usually suffers from under-coverage because it does not take the model selection uncertainty into account. Several proposals have been developed recently to correct this under-coverage; they typically adjust the interval endpoints using perturbation or re-sampling. In this paper, we propose a new post-model-selection confidence interval based on a model averaging strategy. The averaging weight serves as the tuning parameter and is chosen using a residual bootstrap. A simulation study shows that the proposed method is competitive with existing methods in finite samples, achieving a good balance between coverage probability and interval length.

15:30
Generation and Selection of Relational Attributes

ABSTRACT. Typically, classification and regression problems need a flat table as input. However, such a table is rarely immediately available; it is usually constructed by summarizing the information stored in a relational database into “relational attributes”. The process of preparing this flat table is critical, because the quality of the results depends on which attributes are included in it. In this talk, we discuss how to automatically build these relational attributes and how to select the ones with the highest predictive power.
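
A toy sketch of the construction step (hypothetical data; the table and column names are ours): aggregating a one-to-many detail table into candidate relational attributes for the flat table.

    import pandas as pd

    orders = pd.DataFrame({
        "customer": ["a", "a", "b", "b", "b"],
        "amount":   [10.0, 25.0, 5.0, 5.0, 40.0],
    })

    # Each aggregate becomes a candidate "relational attribute" of the customer.
    flat = orders.groupby("customer")["amount"].agg(
        order_count="count", total_amount="sum", max_amount="max",
    ).reset_index()
    print(flat)

Feature selection then decides which of these generated columns carry predictive power.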

14:30-16:00 Session 19E: Heuristics III
Location: Salon G
14:30
Algorithms for Top-K Scheduling Queries Based on Partial Search Space for Discrete Decision Variables Using Enumeration Heuristics
SPEAKER: Susan Farley

ABSTRACT. Scheduling problems are part of everyday life. In this paper, we define a deterministic scheduling problem and propose a set of algorithms for optimizing a deterministic schedule. We also show experimental results from applying these algorithms to rescheduling passengers in the airline industry.

14:50
Two-dimensional load balancing
SPEAKER: Kangbok Lee

ABSTRACT. We consider a bi-criteria parallel machine scheduling problem in which the first objective is the minimization of the makespan of the schedule and the second objective is the minimization of the maximum machine cost. Since the problem is strongly NP-hard, we propose a fast heuristic and derive its worst-case performance bound.

15:10
Very Large-Scale Neighborhood Search for Generalized Assignment Problems with Location/Allocation Considerations
SPEAKER: Tulay Flamand

ABSTRACT. We address a generalized assignment problem with location/allocation considerations that arises in retail shelf space allocation and beyond. Here, the reward incurred for assigning an item to a bin also depends on its specific location in the bin and the total space it is allocated within lower/upper bounds. We propose an optimization model that is embedded in an effective Very Large-Scale Neighborhood Search which provides near-optimal solutions to computationally challenging instances.

14:30-16:00 Session 19F: Network Applications II
Location: Salon H
14:30
A Benders Decomposition Algorithm for a Hub-Network Design Problem
SPEAKER: Halit Uster

ABSTRACT. We consider a multi-commodity flow network design problem that combines and extends certain features of hub location and network design problems, with generalized installation costs and capacity decisions. For the model's solution, we present a Benders decomposition based algorithm with strengthened Benders cuts obtained via a two-phase approach, in which the phase 1 problem is a reduced dual subproblem solved by an LP solver and the phase 2 problem, containing partially fixed dual values, is solved by an efficient algorithm.

We illustrate the efficiency of our overall approach, in both solution time and solution quality, on a set of randomly generated instances with up to 40 potential locations and 10,000 origin-destination (commodity) pairs; average and maximum solution times do not exceed 2000 and 4900 seconds, respectively.

14:50
Loss-Constrained Minimum Cost Flow under Arc Failure Uncertainty with Applications in Risk-Aware Kidney Exchange
SPEAKER: Siqian Shen

ABSTRACT. We study a stochastic minimum cost flow (SMCF) problem under arc failure uncertainty, where an arc flow solution may correspond to multiple path flow solutions. The failure of an arc will cause flow losses on all paths using that arc. We formulate two SMCF variants to minimize the total flow cost, while respectively restricting the value-at-risk (VaR) and conditional value-at-risk (CVaR) of random path flow losses computed via a linear program. We present a kidney exchange problem under uncertain match failure, and generalize the two SMCF models to maximize the utility/social welfare of pairing kidneys subject to constrained risk of utility losses.

15:10
A Mathematical Programming Framework for Detecting Sybil Communities in Online Social Networks

ABSTRACT. Over the past few years, online social networks have become one of the most important tools for our daily interactions. However, as social networks grow bigger and more widespread, they also become more prone to security threats, such as Sybil node attacks. In its most basic form, a Sybil node attack is executed by a perpetrator who creates multiple fake identities (Sybil nodes) and tries to form bonds with real users (honest nodes) in order to gain power and influence over the social network. This additional influence allows the perpetrator to anonymously perform dishonest activities, such as spreading misinformation and rumors, affecting the results of elections, and altering the outcome of polls and advertising campaigns. Because of the negative impact that Sybil nodes can have on a social network, identifying and deleting such nodes is nowadays an endeavor of central importance for any social network administrator. In this study, we propose a mathematical programming framework designed to detect Sybil communities. We identify properties that are often shared by most of these communities and incorporate them as additional constraints within the proposed formulations. We also present several preprocessing approaches for reducing the computational time required to solve these problems.

14:30-16:00 Session 19G: Tutorial on Computational Probability
Chair: Xi Chen
Location: Salon J
14:30
Computing in Probability

ABSTRACT. Statistical packages that process data, such as R, JMP, and SAS, are ubiquitous. A more difficult problem is to manipulate random variables which have arbitrary probability distributions. This tutorial considers the development of algorithms that address probability problems. APPL (A Probability Programming Language) implements these algorithms to perform operations on random variables to address probability problems. The applications considered in the tutorial are the infinite bootstrap, the probability distribution of the Kolmogorov-Smirnov test statistic, the distribution of the time to complete a stochastic activity network, finding a lower bound on system reliability, Benford's law, finding the probability distribution and variance-covariance matrix of sojourn times in a queueing model, probability distribution relationships, testing random numbers, bivariate transformations, and time series models.
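
APPL itself runs inside Maple, but the flavor of symbolic manipulation of random variables can be sketched in Python with sympy.stats (our analogue, not the tutorial's software):

    from sympy import symbols, simplify
    from sympy.stats import Exponential, Normal, density, E, variance, P

    lam = symbols('lam', positive=True)
    X = Exponential('X', lam)

    print(E(X))          # expected value: 1/lam
    print(variance(X))   # variance: lam**(-2)
    print(P(X > 1))      # tail probability: exp(-lam)

    # Convolution of two independent standard normals, done symbolically:
    z = symbols('z')
    A, B = Normal('A', 0, 1), Normal('B', 0, 1)
    print(simplify(density(A + B)(z)))  # a Normal(0, sqrt(2)) density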

16:00-16:30 Session 20: Coffee Break
Location: James River Prefunction Area
16:30-18:00 Session 21: Plenary Talk II
Location: Salon C
16:30
Free Software and Your Freedom

ABSTRACT. Richard Stallman will speak about the goals and philosophy of the Free Software Movement, and the status and history of the GNU operating system, which in combination with the kernel Linux is now used by tens of millions of users world-wide.

17:00-21:00 Session 22: General Reception
Location: James River Prefunction Area