IOS2020: INFORMS OPTIMIZATION SOCIETY CONFERENCE 2020
PROGRAM FOR MONDAY, MARCH 16TH

08:30-10:00 Session MA1: Optimization Under Uncertainty in Energy Systems
08:30
Stochastic DC Optimal Power Flow With Reserve Saturation
PRESENTER: Line Roald

ABSTRACT. We propose an optimization framework for stochastic optimal power flow with uncertain loads and renewable generator capacity. Our model follows previous work in assuming that generator outputs respond to load imbalances according to an affine control policy, but introduces a model of saturation of generator reserves by assuming that when a generator's target level hits its limit, it abandons the affine policy and produces at that limit. This is a particularly interesting feature in models where wind power plants, which have uncertain upper generation limits, are scheduled to provide reserves to balance load fluctuations.

The resulting model is a nonsmooth nonconvex two-stage stochastic program, and we use a stochastic approximation method to find stationary solutions to a smooth approximation. Computational results on 6-bus and 118-bus test instances demonstrate that, by considering the effects of saturation, our model can yield solutions with lower expected generation costs (at the same target line violation probability level) than those obtained from a model that enforces the affine policy to stay within generator limits with high probability.
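
As a rough illustration of the saturated affine response described above, the following sketch (with made-up generator data, not the authors' model) shows how a generator follows the affine policy until its target hits a limit:

```python
import numpy as np

def generator_response(p_sched, alpha, imbalance, p_min, p_max):
    """Illustrative sketch of a saturated affine recourse policy:
    each generator targets its scheduled output plus an affine share
    of the system imbalance, but produces at its limit once the
    target hits that limit (reserve saturation)."""
    target = p_sched + alpha * imbalance  # affine policy target
    return np.clip(target, p_min, p_max)  # saturate at the limits

# Two generators share a 1.0 MW shortfall 60/40; generator 1 saturates.
p = generator_response(
    p_sched=np.array([1.0, 2.0]),
    alpha=np.array([0.6, 0.4]),
    imbalance=1.0,
    p_min=np.array([0.0, 0.0]),
    p_max=np.array([1.2, 3.0]),
)
print(p)  # [1.2 2.4]
```

The clipping is what makes the resulting two-stage problem nonsmooth.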

08:52
A Reserve Scheduling Framework for Uncertainty Management in Power Systems and Gas Pipelines
PRESENTER: Kaarthik Sundar

ABSTRACT. The growing dependence of the bulk electricity system on natural gas as a fuel for generation continues to attract attention. Many approaches for coordination and co-optimization of power systems and gas pipelines have been proposed, some of which have considered uncertainty in loads. In this study, we will review the emerging trends in the energy sector that motivate research on integrated optimization and coordination mechanisms for gas pipelines and the electric grid, with a particular focus on managing intra-day uncertainty. We consider formal mathematical mechanisms for the allocation of reserves to compensate for uncontrollable renewable electricity generation, the economic dispatch of controllable generators in electricity markets, and the management of gas pipeline flows to buffer uncertainties in consumption. We will summarize recent results and approaches to these problems, discuss models to address inter-relationships of these three mechanisms, summarize the critical issues, and identify open challenges. As an original contribution, we propose an optimization framework for integrated dispatching and reserve scheduling in electricity transmission and natural gas transport, which synthesizes established approaches for chance-constrained optimal power flow and robust transient optimization of natural gas flow. We consider several joint optimization formulations with increasing degrees of uncertainty management, and apply them to optimize a case study of interacting test networks. Monte Carlo simulations are then conducted to verify the probabilistic guarantees for operational security of the interconnected system provided by the proposed framework.

09:14
Distributionally Robust Transmission Grid under Geomagnetic Disturbances
PRESENTER: Minseok Ryu

ABSTRACT. The transmission grid can be damaged by geomagnetic disturbances (GMDs) caused by solar storms. The negative impacts of GMDs can be mitigated via a set of corrective actions (e.g., line switching, generator dispatch, and locating blocking devices). Making such decisions is challenging due to the uncertainty in the magnitude and orientation of GMDs and insufficient historical data. In this paper, we propose a two-stage distributionally robust optimization (DRO) model that determines the corrective actions such that the worst-case expectation of the total cost is minimized. As a solution approach, we propose a column-and-constraint generation (CCG) algorithm that solves mixed-integer second-order conic programs iteratively. Moreover, we derive a monolithic reformulation of the proposed DRO model under the triangle support set, which can be used to speed up the CCG algorithm. We test our solution approach on the ‘epri-21’ and ‘uiuc-150’ systems.

09:36
Optimal Power Flow in Distribution Networks under Stochastic N-1 Disruptions
PRESENTER: Haoxiang Yang

ABSTRACT. Most contingency research on power systems focuses on the transmission system, but it is also important to include contingencies in operations and to have a post-contingency recovery plan for distribution systems. In this paper, we consider the multi-time-period optimal power flow (OPF) problem in distribution systems subject to a stochastic N-1 contingency/disruption in a distribution line or a distributed energy resource (DER). Practically speaking, these disruptions are stochastic in nature, a type of infrequent event with random magnitude and timing, and every disruption typically has an associated recovery time. Hence, we formulate a novel multi-stage stochastic convex program and develop a stochastic dual dynamic programming (SDDP) algorithm for the distribution network OPF problem under stochastic N-1 disruptions with per-disruption recovery times. Further, we include modeling features such as linearized AC power flow physics, engineering limits, and battery devices with efficiency curves. We demonstrate the computational effectiveness of our SDDP algorithm over deterministic methods. We also show that stochastic-disruption-aware operating costs can be significantly lower than their deterministic counterparts.

08:30-10:00 Session MA2: Algorithmic and Theoretical Advances in Distributionally Robust Optimization
Location: Gold Ballroom
08:30
A Sequential Sampling Method for Distributionally Robust Linear Programs

ABSTRACT. In this talk, we will present a sequential sampling method for two-stage distributionally robust linear programs. The distributionally robust (minimax) approach to stochastic programming concerns the problem of minimizing the worst-case expected value of the second-stage value function over the set of probability measures that are consistent with the available information on the random data. We consider a setting where the set of continuous/discrete distributions defined over a compact sample space is approximated using a growing sample, and we study the impact of adding new observations on the optimal value and solutions. For this setting, we develop a cutting-plane method that integrates simulation within an optimization algorithm, thereby leading to an "internal-sampling" approach. The use of sampling within optimization and the cutting-plane approach mark a significant departure from the prevalent external-sampling methods used to solve distributionally robust optimization problems. We will provide a brief analysis of the convergence and computational efficacy of this algorithm for solving distributionally robust variants of classical stochastic programming problems.

08:52
(Partial) Convex Hull for Deterministic and Stochastic Structured Conic Mixed Integer Sets using Conic Mixed Integer Rounding
PRESENTER: Manish Bansal

ABSTRACT. In this talk, we present the (partial) convex hull of deterministic multi-constraint conic mixed integer sets with multiple integer variables, using the conic mixed integer rounding cut-generation procedure of Atamturk and Narayanan (2008), thereby extending their result for a simple polyhedral conic mixed integer set with a single constraint and one integer variable. Moreover, we introduce two-stage distributionally robust conic mixed integer programs (denoted by TSDR-CMIPs) in which the second-stage problems have sums of l1-norms in the objective function along with integer variables. We consider TSDR-CMIPs with structured CMIPs in the second stage, derive classes of scenario-based (non)linear cuts for them using conic mixed integer rounding, and prove that, under certain conditions, these cuts provide a convex programming equivalent for the second-stage CMIPs. We also present sufficient conditions under which adding scenario-based nonlinear cuts to the extensive formulation of a TSDR-CMIP suffices to relax the integrality restrictions on the second-stage integer variables without affecting the integrality of the optimal solution of the TSDR-CMIP. We present results of computational experiments using the conic mixed integer rounding cuts to solve a stochastic capacitated facility location problem and randomly generated structured instances.

09:14
Using Effective Scenarios to Accelerate Decomposition Algorithms for Two-stage Distributionally Robust Optimization with Total Variation Distance
PRESENTER: Chennan Zhou

ABSTRACT. This work aims to accelerate Benders’ decomposition for a class of two-stage distributionally robust optimization (DRO) problems with total variation distance by focusing the algorithm’s effort on effective scenarios. Effective scenarios are critical to DRO in the sense that removing them alters the optimal value. Ineffective scenarios, on the other hand, can be removed from the problem without affecting the optimal value. Using an approximation of the sets of effective and ineffective scenarios, the decomposition algorithm is accelerated via (1) cut management strategies (adding cuts only for approximately effective scenarios) and (2) solving fewer subproblems (progressively removing approximately ineffective scenarios within the algorithm). A set of conditions under which the accelerated algorithm converges in a finite number of iterations is established.

Numerical results on several problems from the literature show that the accelerated algorithm decreases solution times by up to 70%, obtaining solutions generally within 0.02% of optimal. The results also indicate that the performance of the different acceleration strategies depends on the risk level, the number of iterations needed to solve the original problem, and the ratio of solution time between the master problem and the subproblems.

08:30-10:00 Session MA3: Recent Advancements in Lot-Sizing and Knapsack Problems
Location: Card Room
08:30
Exact Algorithms for Lot-Sizing Problems with Multiple Capacities, Piecewise Concave Production Costs, and Subcontracting
PRESENTER: Kartik Kulkarni

ABSTRACT. In this paper, we introduce a new generalization of the classical capacitated lot-sizing problem with concave production, holding, and subcontracting costs, denoted by CLSP-S, in which the total production capacity in each time period is a summation of binary multiples of n different capacities. We refer to this problem as the multi-capacitated lot-sizing problem (with subcontracting) and denote it by MCLS-(S). We present a dynamic programming approach that optimally solves the MCLS and MCLS-S, for a given n, in polynomial time. Note that for n = 1 the MCLS-S reduces to the CLSP-S, so our algorithm generalizes the algorithm of Atamturk and Hochbaum [Management Science 47(8):1081-1100, 2001] for the CLSP-S. We also develop another dynamic programming algorithm for the MCLS-S with piecewise concave production costs and no capacity constraint, i.e., n = 0, denoted by LS-PC-S. Furthermore, we perform computational experiments to evaluate the efficiency of our algorithms and their parallel computing implementations on instances of MCLS and LS-PC, in comparison with CPLEX 12.7 and the only known algorithm for LS-PC-S, by Koca, Yaman, and Akturk [INFORMS J. on Computing 26(4):767-779, 2014]. The results of these experiments show that our algorithms are computationally efficient and their solution times are stable.
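
For readers unfamiliar with the dynamic-programming flavor these algorithms build on, here is a sketch of the classic uncapacitated lot-sizing recursion (Wagner-Whitin style); the function and data are illustrative, and this is not the MCLS-S algorithm itself:

```python
def uncapacitated_lot_sizing(demand, setup, unit, hold):
    """Classic DP for uncapacitated lot-sizing: F[t] is the minimum
    cost of meeting demand for periods 1..t, where the last production
    run happens in some period j and covers demand for j..t."""
    T = len(demand)
    F = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        for j in range(1, t + 1):  # candidate last production period
            qty = sum(demand[j - 1:t])          # produce d_j + ... + d_t
            cost = setup[j - 1] + unit[j - 1] * qty
            # holding cost for units carried from period k to k+1
            cost += sum(hold[k - 1] * sum(demand[k:t]) for k in range(j, t))
            F[t] = min(F[t], F[j - 1] + cost)
    return F[T]

# Toy instance: producing both periods' demand up front (cost 18) beats
# two separate setups (cost 25).
cost = uncapacitated_lot_sizing([2, 3], [10, 10], [1, 1], [1, 1])
print(cost)  # 18.0
```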

08:52
A Heuristic and Exact Solution Approach for the 0-1 Cubic Knapsack Problem

ABSTRACT. The 0-1 cubic knapsack problem (CKP) is a generalization of the classical 0-1 quadratic knapsack problem. We develop an effective heuristic for the CKP and utilize a linear reformulation to find exact solutions. Using a variable reordering strategy, we show how to improve the strength of the linear programming relaxations and reduce the overall solution times. Computational tests demonstrate the effectiveness of both our heuristic method and variable reordering strategy.

09:14
A Unified Approach to Mixed-Integer Optimization: Nonlinear Formulations and Scalable Algorithms
PRESENTER: Ryan Cory-Wright

ABSTRACT. We propose a unified framework to address a family of classical mixed-integer optimization problems, including network design, facility location, unit commitment, sparse portfolio selection, binary quadratic optimization and sparse learning problems. These problems exhibit logical relationships between continuous and discrete variables, which are usually reformulated linearly using a big-M formulation. In this work, we challenge this longstanding modeling practice and express the logical constraints in a non-linear way. By imposing a regularization condition, we reformulate these problems as convex binary optimization problems, which are solvable using an outer-approximation procedure. In numerical experiments, we establish that a general-purpose numerical strategy, which combines cutting-plane, first-order and local search methods, solves these problems faster and at a larger scale than state-of-the-art mixed-integer linear or second-order cone methods. Our approach successfully solves network design problems with hundreds of nodes and provides solutions up to 40% better than the state-of-the-art; sparse portfolio selection problems with up to 3,200 securities, compared with 400 securities for previous attempts; and sparse regression problems with up to 100,000 covariates.

09:36
The Competitive Ratio of Threshold Policies for Online Unit-density Knapsack Problems
PRESENTER: Jinglong Zhao

ABSTRACT. We study an online knapsack problem where the items arrive sequentially and must be either immediately packed into the knapsack or irrevocably discarded. Each item has a different size and the objective is to maximize the total size of items packed. We focus on the class of randomized algorithms which initially draw a threshold from some distribution, and then pack every fitting item whose size is at least that threshold. Threshold policies satisfy many desiderata including simplicity, fairness, and incentive-alignment. We derive two optimal threshold distributions, the first of which implies a competitive ratio of 0.432 relative to the optimal offline packing, and the second of which implies a competitive ratio of 0.428 relative to the optimal fractional packing. We also consider the generalization to multiple knapsacks, where an arriving item has a different size in each knapsack and must be placed in at most one. This is equivalent to the AdWords problem where item truncation is not allowed. We derive a randomized threshold algorithm for this problem which is 0.214-competitive. We also show that any randomized algorithm for this problem cannot be more than 0.461-competitive, providing the first upper bound which is strictly less than 0.5. This online knapsack problem finds applications in many areas, such as supply chain ordering, online advertising, healthcare scheduling, refugee integration, and crowdsourcing. We show how our optimal threshold distributions can be naturally implemented in the warehouses of a Latin American department store chain. We run simulations on their large-scale order data, which demonstrate the robustness of our proposed algorithms.
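
A minimal sketch of the threshold-policy idea follows; it is illustrative only, and the fixed threshold in the demo stands in for a draw from the optimal distributions derived in the talk:

```python
def threshold_policy(items, capacity, draw_threshold):
    """Sketch of a randomized threshold policy for the online knapsack:
    draw a threshold once up front, then pack every arriving item that
    (a) is at least the threshold and (b) still fits. In the actual
    policy, `draw_threshold` samples from an optimized distribution."""
    tau = draw_threshold()
    packed, used = [], 0.0
    for size in items:  # items arrive online, one at a time
        if size >= tau and used + size <= capacity:
            packed.append(size)
            used += size
    return packed

# Deterministic demo with threshold 0.5 and capacity 1.0:
# 0.3 is below the threshold, 0.8 is packed, 0.5 no longer fits.
sol = threshold_policy([0.3, 0.8, 0.5, 0.2], 1.0, lambda: 0.5)
print(sol)  # [0.8]
```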

08:30-10:00 Session MA4: Quantum Optimization
08:30
Template-based Minor Embedding for Adiabatic Quantum Optimization
PRESENTER: Thiago Serra

ABSTRACT. Quantum Annealing (QA) can be used to quickly obtain near-optimal solutions for Quadratic Unconstrained Binary Optimization (QUBO) problems. In QA hardware, each decision variable of a QUBO should be mapped to one or more adjacent qubits in such a way that pairs of variables defining a quadratic term in the objective function are mapped to some pair of adjacent qubits. However, qubits have limited connectivity in existing QA hardware. This has spurred work on preprocessing algorithms for embedding the graph representing problem variables with quadratic terms in the hardware graph representing qubit adjacencies, such as the Chimera graph in hardware produced by D-Wave Systems. In this paper, we use integer linear programming to search for an embedding of the problem graph into certain classes of minors of the Chimera graph, which we call template embeddings. One of these classes corresponds to complete bipartite graphs, for which we show the limitation of the existing approach based on minimum Odd Cycle Transversals (OCTs). Some of the formulations presented are exact, and thus can be used to certify the absence of a minor embedding. On an extensive test set consisting of random graphs from five different classes of varying size and sparsity, we can embed 38% more graphs than a state-of-the-art OCT-based approach.

08:52
Integer programming techniques for minor-embedding in quantum annealers
PRESENTER: David Bernal

ABSTRACT. A major limitation of current generations of quantum annealers is the sparse connectivity of manufactured qubits in the hardware graph. This technological limitation has generated considerable interest, motivating efforts to design efficient minor-embedding procedures that bypass sparsity constraints. In this paper, starting from a previous equational formulation by Dridi et al. (arXiv:1810.01440), we propose integer programming (IP) techniques for solving the minor-embedding problem. The first approach involves a direct translation from the previous equational formulation to IP, while the second decomposes the problem into an assignment master problem and fiber-condition-checking subproblems. The proposed methods are able to detect instance infeasibility and provide bounds on solution quality, capabilities not offered by currently employed heuristic methods. We demonstrate the efficacy of our methods with an extensive computational assessment involving three different families of random graphs of varying sizes and densities. The direct translation into a monolithic IP model can be solved with existing commercial solvers and yields valid minor-embeddings; however, it is outperformed overall by the decomposition approach. Our results demonstrate the promise of our methods on the studied benchmarks, highlighting the advantages of using IP technology for minor-embedding problems.

09:14
The Potential of Quantum Annealing for Rapid Solution Structure Identification
PRESENTER: Carleton Coffrin

ABSTRACT. The recent emergence of novel computational devices, such as quantum computers, neuromorphic co-processors and digital annealers, presents new opportunities for hardware-accelerated hybrid optimization algorithms. Unfortunately, demonstrations of unquestionable performance gains leveraging novel hardware platforms have faced significant obstacles. One key challenge is understanding the algorithmic properties that distinguish such devices from established optimization approaches. Through the careful design of contrived optimization tasks, this work provides new insights into the computational properties of quantum annealing and suggests that this model has an uncanny ability to avoid local minima and quickly identify the structure of high-quality solutions. A meticulous comparison to a variety of algorithms spanning both complete and local search suggests that quantum annealing's performance on the proposed optimization tasks is unique. This result provides new insights into the time scales and types of optimization problems where quantum annealing has the potential to provide notable performance gains over established optimization algorithms, and prompts the development of hybrid algorithms that combine the best features of quantum annealing and state-of-the-art classical approaches.

09:36
Multistart Methods for Quantum Approximate Optimization
PRESENTER: Ruslan Shaydulin

ABSTRACT. Hybrid quantum-classical algorithms such as the quantum approximate optimization algorithm (QAOA) are considered one of the most promising approaches for leveraging near-term quantum computers for practical applications. Such algorithms are often implemented in a variational form, combining classical optimization methods with a quantum machine to find parameters that maximize performance. The quality of the QAOA solution depends heavily on the quality of the parameters produced by the classical optimizer. Moreover, the presence of multiple local optima in the parameter space makes the classical optimizer's task harder. In this paper we study the use of a multistart optimization approach within a QAOA framework to improve the performance of quantum machines on important graph clustering problems. We also demonstrate that reusing the optimal parameters from similar problems can improve the performance of classical optimization methods, expanding on similar results for MAXCUT.

08:30-10:00 Session MA5: Data Driven Optimization
Location: Furman Room
08:30
Data-driven optimization for wind farm siting

ABSTRACT. Stochastic methods to explicitly model the uncertainty of wind power output are essential for the effective integration of this renewable energy source into the supply network. Recent years have seen many advances in better forecasting models for wind power output, typically focusing on a single wind farm that has already been built. There is little work, however, on deciding the optimal placement of a wind farm ex ante, while considering the effect power system dynamics will have on its output. This study develops an optimization model to find the “ideal” location of additional wind farms within a power system. Utilizing historical data of wind power output for a set of established wind farms belonging to the Electric Reliability Council of Texas (ERCOT), this data-driven optimization problem explicitly considers the temporal and spatial dynamics of the wind power generation process via association models for spatio-temporal random fields. Subject to the power system transmission constraints, the model solves for optimal wind farm locations. This talk will present contributions to the characterization of the problem, the methods to solve it, solution implications and future work.

08:52
Reinforcement Learning under Drift: The Blessing of (More) Optimism
PRESENTER: Ruihao Zhu

ABSTRACT. We propose algorithms with state-of-the-art dynamic regret bounds for undiscounted reinforcement learning under drifting non-stationarity, where both the reward functions and state transition distributions are allowed to evolve over time as long as the respective total changes do not exceed certain variation budgets. We first develop a tuned Sliding Window Upper-Confidence bound for Reinforcement Learning with Confidence Widening (SWUCRL2-CW) algorithm, and show that it attains a low dynamic regret bound via a novel budget-aware analysis. We then present the Bandit-over-Reinforcement Learning (BORL) framework that further permits us to enjoy a parameter-free dynamic regret bound. Notably, we identify an unprecedented challenge in parameter estimation for reinforcement learning in gradually changing environments, which invalidates existing algorithmic frameworks for non-stationary online learning. We demonstrate how a more optimistic design of the learning algorithm can help to mitigate the issue.

09:14
JANOS: An Integrated Predictive and Prescriptive Modeling Framework
PRESENTER: David Bergman

ABSTRACT. Business research practice is witnessing a surge in the integration of predictive modeling and prescriptive analysis. We describe JANOS, a modeling framework that seamlessly integrates the two streams of analytics, for the first time allowing researchers and practitioners to embed machine learning models in an optimization framework. JANOS allows for specifying a prescriptive model using standard optimization modeling elements such as constraints and variables. The key novelty lies in providing modeling constructs that allow commonly used predictive models and their features to be specified as constraints and variables in the optimization model. The framework considers two sets of decision variables: regular and predicted. The relationship between the regular and predicted variables is specified by the user as pre-trained predictive models. JANOS currently supports linear regression, logistic regression, and neural networks with rectified linear activation functions, but we plan to expand this set in the future. In this paper, we demonstrate the flexibility of the framework through an example on scholarship allocation in a student enrollment problem and provide a numerical performance evaluation.

09:36
Data-driven sample average approximation with covariate information
PRESENTER: Rohit Kannan

ABSTRACT. We consider data-driven formulations for two-stage stochastic programs where the decision-maker has access to a finite number of samples of the random variables and concurrently observed covariates. The goal is to approximate the solution to the true stochastic program conditioned on a new random realization of the covariates. We investigate three data-driven sample average approximation (SAA) frameworks for approximating the solution to the true conditional stochastic program. These formulations are constructed by fitting a model to predict the random variables given covariate realizations, using the prediction model and its residuals on the data samples to construct scenarios of the random variables at the new covariate realization, and employing these residuals within a SAA framework to obtain data-driven solutions. Two of the SAA frameworks are new and use residuals of leave-one-out prediction models for scenario generation. We derive conditions on the data generation process, the prediction model, and the stochastic program under which solutions of these data-driven SAAs converge to solutions of the true conditional stochastic program at a certain rate. Computational results on resource planning instances validate our theoretical results and demonstrate the benefits of our new data-driven SAA approaches over existing approaches in the limited data regime.
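
The basic residual-based scenario construction can be sketched as follows; this is an illustrative least-squares version with hypothetical data, and the leave-one-out variants discussed in the talk are not shown:

```python
import numpy as np

def residual_scenarios(X, Y, x_new):
    """Sketch of residual-based SAA scenario generation: fit a
    least-squares model for the random variable Y given covariates X,
    then form scenarios at a new covariate value as the point
    prediction shifted by each in-sample residual."""
    X1 = np.column_stack([np.ones(len(X)), X])     # add intercept
    beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)  # fit prediction model
    residuals = Y - X1 @ beta                      # in-sample errors
    y_hat = np.array([1.0, x_new]) @ beta          # prediction at x_new
    return y_hat + residuals                       # one scenario per sample

# Toy data roughly following Y = 2x + noise
X = np.array([0.0, 1.0, 2.0, 3.0])
Y = np.array([0.1, 2.0, 3.9, 6.0])
scenarios = residual_scenarios(X, Y, x_new=1.5)
print(scenarios.mean())  # scenarios are centered on the prediction at x=1.5
```

The scenarios would then be fed to an ordinary SAA of the second-stage problem.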

08:30-10:00 Session MA6: First-Order Methods for Non-Smooth and Constrained Optimization
Location: Palmetto Room
08:30
Online Primal-Dual Mirror Descent under Stochastic Constraints
PRESENTER: Michael J. Neely

ABSTRACT. We consider online convex optimization with stochastic constraints where the objective functions are arbitrarily time-varying and the constraint functions are independent and identically distributed (i.i.d.) over time. Both the objective and constraint functions are revealed after the decision is made at each time slot. The best known expected regret for solving such a problem is O(sqrt(T)), with a coefficient that is polynomial in the dimension of the decision variable and relies on the Slater condition (i.e., the assumption that an interior point exists), which is restrictive and in particular precludes treating equality constraints. In this paper, we show that the Slater condition is in fact not needed. We propose a new primal-dual mirror descent algorithm and show that one can attain O(sqrt(T)) regret and constraint violation under a much weaker Lagrange multiplier assumption, allowing general equality constraints and significantly relaxing the previous Slater conditions. Along the way, for the case where decisions are contained in a probability simplex, we reduce the coefficient to have only a logarithmic dependence on the decision variable dimension. Such a dependence has long been known in the mirror descent literature but appears new in this constrained online learning setting.
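
For readers unfamiliar with mirror descent on the probability simplex, the primal update in that geometry is the exponentiated-gradient step, sketched below; this is a generic textbook step, not the authors' full primal-dual algorithm:

```python
import numpy as np

def simplex_mirror_descent_step(x, grad, eta):
    """One entropic mirror-descent (exponentiated-gradient) step on the
    probability simplex, the geometry under which logarithmic dimension
    dependence is typically obtained: reweight multiplicatively by the
    gradient, then renormalize back onto the simplex."""
    w = x * np.exp(-eta * grad)
    return w / w.sum()

x = np.ones(4) / 4                   # start at the uniform distribution
g = np.array([1.0, 0.0, 0.0, 0.0])  # higher loss on coordinate 0
x_new = simplex_mirror_descent_step(x, g, eta=0.5)
# x_new stays on the simplex and shifts mass away from coordinate 0
```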

08:52
Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality, and Saddle-Points
PRESENTER: Saeed Ghadimi

ABSTRACT. In this talk, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on constrained optimization, the high-dimensional setting, and saddle-point avoidance. To handle constrained optimization, we first propose generalizations of the conditional gradient algorithm achieving rates similar to the standard stochastic gradient algorithm using only zeroth-order information. To facilitate zeroth-order optimization in high dimensions, we explore the advantages of structural sparsity assumptions. Specifically, (i) we highlight an implicit regularization phenomenon where the standard stochastic gradient algorithm with zeroth-order information adapts to the sparsity of the problem at hand by just varying the step-size, and (ii) we propose a truncated stochastic gradient algorithm with zeroth-order information, whose rate of convergence depends only poly-logarithmically on the dimensionality. We next focus on avoiding saddle-points in the nonconvex setting. Towards that, we interpret the Gaussian smoothing technique for estimating gradients from zeroth-order information as an instantiation of the first-order Stein's identity. Based on this, we provide a novel linear (in dimension) time estimator of the Hessian matrix of a function using only zeroth-order information, which is based on the second-order Stein's identity. We then provide a zeroth-order variant of the cubic regularized Newton method for avoiding saddle-points and discuss its rate of convergence to local minima.
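
The Gaussian-smoothing gradient estimator mentioned above can be sketched as follows; this is the standard Monte Carlo version with illustrative parameters, using only function evaluations:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_samples=100, rng=None):
    """Gaussian-smoothing zeroth-order gradient estimator:
        g ~ E[ (f(x + mu*u) - f(x)) / mu * u ],  u ~ N(0, I).
    No derivatives of f are ever computed, only function values."""
    rng = rng or np.random.default_rng(0)
    d = len(x)
    g = np.zeros(d)
    fx = f(x)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - fx) / mu * u
    return g / n_samples

# Sanity check on f(x) = ||x||^2, whose true gradient is 2x.
x = np.array([1.0, -2.0])
g = zo_gradient(lambda v: v @ v, x, n_samples=5000)
# g is close to [2, -4], up to Monte Carlo noise
```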

09:14
Level-Set Method for Convex Constrained Optimization with Error Bound Conditions
PRESENTER: Runchao Ma

ABSTRACT. Optimization models with convex constraints arise in many machine learning tasks, e.g., learning with data-driven constraints or Neyman-Pearson classification. Although many efficient algorithms with provable convergence rates have been developed for unconstrained convex problems, it remains challenging to design efficient algorithms for convex problems with functional constraints. This paper proposes a first-order algorithm for solving convex constrained problems when an error bound condition holds. We reformulate the convex constrained problem as a level-set problem and then solve it using a sequence of processors. Each subproblem of the level-set problem is solved by a subgradient method, and communication between processors is carefully designed. Under the mild assumption that the error bound condition holds for each level-set subproblem, the iteration complexity of obtaining an ε-optimal and ε-feasible solution is established.

09:36
Powerful Operator Splitting with Projective Splitting

ABSTRACT. Projective splitting (PS) is a powerful new approach to operator splitting for solving convex optimization problems and monotone inclusions. Unlike its more famous cousins such as ADMM, Douglas-Rachford, and Forward-Backward, PS can handle an arbitrary sum of convex functions composed with linear maps, can seamlessly incorporate asynchronous and incremental updates, and allows for arbitrarily varying stepsizes. Over the past two years, we have developed several new variants of PS which for the first time can use forward steps rather than only resolvents. In the optimization context, a forward step is just an explicit gradient evaluation, while a resolvent step is a proximal operator evaluation. Under various continuity conditions on the gradient, we have extended PS to allow these simpler gradient evaluations in place of the more complicated and computationally costly proximal steps. With this progress, projective splitting has become a truly modern operator splitting framework with great potential in various applications such as machine learning. In this talk I will give an overview of the method, discuss the new forward-step procedures, and present some of our software and computational results.

08:30-10:00 Session MA7: Novel Heuristics for Challenging Optimization Problems in Scheduling and Districting
Location: Alexander Room
08:30
Improving Operating Room Scheduling: Integer and Constraint Programming Approaches

ABSTRACT. Advances in engineering and technology have produced many examples of difficult optimization problems, and integer programming models are among them. To solve an integer programming problem, not only is the solution method important; the model must also be designed correctly. Computation speed is likewise an important measure of the efficiency of a solution approach. Besides new solution methods for integer programming problems, new computational techniques have been studied to obtain solutions in less time. Constraint programming is one such technique: it uses a procedural formulation that specifies how to search for a solution, whereas integer programming uses a declarative formulation stated through constraints and an objective function. In the procedural model, one searches for a solution by considering the constraint set, which restricts the values the variables can take. In this study, a problem of improving operating room schedules is examined using both integer and constraint programming approaches. The results obtained from both models are compared and discussed.

08:52
Multilevel algorithms for geographical districting problems
PRESENTER: Rahul Swamy

ABSTRACT. In a geographical districting problem (GDP), a planar region is partitioned into districts that are connected, balanced, compact and hole-free. GDPs arise in several applications such as in political redistricting, commercial territory design, police districting, among others. A multilevel algorithm is a heuristic where several levels of graphs are iteratively constructed by “coarsening” the prior graph levels, solving the problem optimally at the coarsest level, and mapping the solution back to the original graph. Recent work has adapted the multilevel framework to GDPs using matchings to coarsen the graphs. In this work, we investigate how a matching-based coarsening scheme affects the approximation error in the districting solution obtained. Motivated by political redistricting, two objectives – population balance and compactness – are considered. An analysis was performed by exploiting the relationship between the matching edges and the cut edges. The results suggest that when there exists an optimal solution with high district connectivity, a better approximation bound is possible. Experimental results based on congressional redistricting are also presented.

09:14
Bi-Objective Firefly Optimization Algorithm for n-Stage Parallel Machine Scheduling

ABSTRACT. In the present era, demand for customized products is at a record height, and the sustainability of a manufacturing enterprise depends on delivering quality products to end-users on time. This objective can be achieved by determining the sequence in which jobs are processed. Determining the optimal sequence is termed sequencing and scheduling (S&S): making decisions about the order and times in which a set of jobs is processed through a set of machines so as to obtain the output in a desired order and time. The main objectives of S&S problems are minimizing the makespan and the total flow time. Moreover, in industry it is customary to install parallel machines that can perform the same set of operations at every stage of the production shop. This paper studies a parallel machine scheduling problem in a medium-sized enterprise. The time taken to process a job is uncertain, as it depends on setup and unloading times that are mostly performed manually; the processing time is therefore taken as the average time to process 10 similar jobs on the same machine. Furthermore, the parallel machine scheduling problem is NP-hard, which implies that no known model returns an optimal solution within a feasible time bound. Owing to the advantages demonstrated by evolutionary algorithms, a firefly optimization algorithm is applied to solve the multi-level parallel machine scheduling problem.

10:30-11:30 Session Plenary II: The ARPA-E Grid Optimization Competition (Speaker: Andreas Waechter)

Abstract:

Between November 2018 and October 2019, the Advanced Research Projects Agency-Energy (ARPA-E) conducted a high-profile competition for the robust optimization of large-scale electrical power grids. Twenty-six teams from academia and industry developed tailored optimization algorithms and high-performance software to compete for a total of $4,000,000 in prize money. The competition provided a unique opportunity for researchers to explore a challenging and important industrial application with realistic data sets and high-performance hardware. The Security-Constrained Optimal Power Flow (SCOPF) problem formulated by the organizers optimizes the power production of generators in large-scale electricity transmission networks. Several characteristics make the solution of this problem very challenging: (i) the power generation must be allocated so that a simple recourse rule is sufficient to maintain feasibility under thousands of N-1 contingencies in which one element of the grid fails; (ii) in contrast to the linear models currently used in industry, the formulation involves the nonlinear nonconvex alternating current (AC) power flow equations; (iii) discrete switches formulated as complementarity constraints further add to the nonconvexity of the model; and (iv) with up to 30,000 buses, the test instances are very large. In this talk, we describe the solution approach developed by the GO-SNIP team, which placed second in the overall competition.

Bio:

Andreas Wächter is a Professor in the Department of Industrial Engineering and Management Sciences at Northwestern University.  He obtained his master's degree in Mathematics at the University of Cologne, Germany, in 1997, and his Ph.D. in Chemical Engineering at Carnegie Mellon University in 2002.  Before joining Northwestern University in 2011, he was a Research Staff Member in the Department of Mathematical Sciences at IBM Research in Yorktown Heights, NY.  His research interests include the design, analysis, implementation and application of numerical algorithms for nonlinear continuous and mixed-integer optimization.  He is a recipient of the 2011 Wilkinson Prize for Numerical Software and the 2009 INFORMS Computing Society Prize for his work on the open-source optimization package Ipopt.

Location: Gold Ballroom
13:15-14:45 Session MB1: Data-Driven Optimization: Theory, Algorithms and Applications
13:15
Online Learning in Service Systems: Optimal Pricing And Capacity Sizing For A G/G/1 Queue With Demand Learning
PRESENTER: Yunan Liu

ABSTRACT. We consider a pricing and capacity sizing problem in a service queueing system with an unknown demand curve. The service provider’s objective is to adaptively adjust the service price p and service rate µ so as to maximize cumulative expected revenues (the service revenue minus the delay penalty) over a given finite time horizon; in doing so, the service provider needs to resolve the tension between learning the unknown demand curve λ(p) and maximizing earned revenues. In fact, such a problem has no analytic solution even when λ(p) is known. In this study, we develop an online algorithm based on stochastic gradient descent (SGD) and provide asymptotic bounds for the regret (the gap between the revenue when the demand curve is known and the revenue earned using our online learning algorithm). We also discuss possible methods for accelerating the algorithm.
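A much-simplified sketch of the idea, as an illustration rather than the talk's algorithm: it assumes an M/M/1-style delay penalty in place of the G/G/1 analysis, a hypothetical linear demand curve, and two-point finite-difference gradient estimates built only from noisy demand observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth, unknown to the service provider: linear demand.
true_lambda = lambda p: max(4.0 - p, 0.0)

def observed_demand(p):
    """Noisy arrival-rate observation at posted price p (a stand-in for
    counting arrivals over a period of the queueing system)."""
    return true_lambda(p) + 0.1 * rng.standard_normal()

def profit(p, lam, mu, c=0.5, h=1.0):
    """Revenue minus capacity cost minus an M/M/1-style delay penalty."""
    if mu <= lam:
        return -np.inf
    return p * lam - c * mu - h * lam / (mu - lam)

# Two-point finite-difference SGD on (p, mu): gradients of the expected
# profit are estimated from noisy demand observations only.
p, mu, delta, eta = 1.0, 5.0, 0.2, 0.05
for _ in range(5000):
    gp = (profit(p + delta, observed_demand(p + delta), mu)
          - profit(p - delta, observed_demand(p - delta), mu)) / (2 * delta)
    gm = (profit(p, observed_demand(p), mu + delta)
          - profit(p, observed_demand(p), mu - delta)) / (2 * delta)
    p, mu = p + eta * gp, mu + eta * gm
print(round(p, 2), round(mu, 2))  # settles near a profit-maximizing pair
```

The talk's contribution is the regret analysis of such a scheme; the constant step size and perturbation width above are arbitrary choices for the toy model.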

13:37
Tractable Reformulations of Distributionally Robust Two-stage Stochastic Programs with $\infty$-Wasserstein Distance

ABSTRACT. This paper studies distributionally robust two-stage stochastic programs (DRTSP) with ∞-Wasserstein distance by providing sufficient conditions under which the worst-case expected recourse function in DRTSP can be efficiently computed via a tractable convex program. The developed reformulation techniques are extended to DRTSP with binary random parameters. The main tractable reformulations in this paper are projected into the original decision space and thus can be interpreted as conventional two-stage stochastic programs under discrete support with extra penalty terms enforcing the robustness. These tractable results are further demonstrated to be sharp through complexity analysis.

13:59
Learning Interpretable Policies for Markov Decision Process by Hybrid Linear-Black-Box Model

ABSTRACT. Interpretable policies have become strong competitors to traditional black-box policies for Markov decision processes. However, a reduction in reward is often the price of gaining interpretability, putting practitioners in a dilemma of choosing between high reward (a black-box policy) and interpretability (an interpretable policy). In this work, we propose a novel framework for learning a Hybrid Interpretable Policy (HIP) that integrates a linear policy with any black-box policy to combine their strengths. The linear policy substitutes for the black-box policy on the subset of the state space where the black-box policy is overkill. We train this policy by formulating an optimization problem that integrates both reward maximization and policy interpretability. We test the hybrid policy on inventory control problems with various black-box policies. Results show that hybrid models obtain an efficient trade-off between interpretability and reward maximization, characterized by efficient frontiers.

14:21
Data-Driven Distributionally Robust Appointment Scheduling
PRESENTER: Guanglin Xu

ABSTRACT. Stochastic decision making is an important approach to solve problems arising in many contexts including clinical appointment scheduling applications. A fundamental assumption of using such an approach is that the probability distribution of the stochastic parameters is completely known to the decision maker. However, it is challenging or even impossible to fully estimate the underlying distribution in many real-life situations. In the context of appointment scheduling, it is common that there is insufficient data. For example, in the application of operating room management, there are not enough samples for approximately 50 percent of surgeries performed on any given day across the United States. This ambiguity in the probability distribution introduces significant modeling challenges. To address this issue, we introduce a data-driven distributionally robust framework in which a decision maker seeks a schedule that minimizes the worst-case expected waiting, idleness, and overtime costs. Importantly, the expectations are taken with respect to distributions from a so-called ambiguity set that is constructed by using historical data. We establish a finite-sample performance guarantee and asymptotic consistency of the data-driven distributionally robust approach, and develop scalable solution methods for solving the resulting decision-making problems. We showcase the effectiveness of our technical approach and solution methods via numerical experiments.

13:15-14:45 Session MB2: Optimization in Neural Networks
Location: Gold Ballroom
13:15
Nonlinear Regularization for Neural Networks

ABSTRACT. The training of neural networks (the determination of the weights) belongs to the class of nonlinear inverse problems. These problems are ill-posed in the sense of Hadamard and require reformulation for numerical processing; otherwise the existence of a unique solution that depends continuously on the data cannot be guaranteed. This method is known as regularization. Let $F(u) = y^{\delta}$ denote the nonlinear inverse ill-posed problem, where $F$ is a nonlinear operator and $y^{\delta}$ contains noisy samples of $y$ such that $\|y - y^{\delta}\| \leq \delta$, for some noise level $\delta > 0$. Constructing a (variational) regularization for nonlinear inverse problems of this form consists of finding two functionals: a data fidelity term $E$ (e.g., quadratic fidelity) measuring the distance between $F(u)$ and $y^{\delta}$, and a regularization functional $R$ favoring appropriate minimizers or penalizing potential solutions with undesired structures. A key question in this respect is convergence for noisy data, related to the choice of the regularization parameter $\alpha$ in dependence on the noise level $\delta$. With too little regularization, the regularized approximations $u_{\alpha}^{\delta}$ (the minimizers of the total error, a.k.a. the generalized Tikhonov functional) are highly oscillatory due to noise amplification, whereas with too much regularization they are too close to the initial guess $u^{*}$. Thus, one should select the regularization parameter such that the total error $\|u_{\alpha}^{\delta} - u^{*}\|$ is minimized. Though we use the term regularization in a much broader sense than its formal definition (with stability, convergence and efficiency criteria), many experimental papers on neural network learning show that linear regularization functionals do not explain, or cannot improve, the properties of the minimizer of the Tikhonov functional.
However, regularization methods also include non-quadratic and total variation (TV) functionals. In this paper, we demonstrate how a TV functional combined with an iterative regularization scheme yields a competitive method for training neural networks. The gain compared to Tikhonov regularization is illustrated by numerical examples.
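For reference, the generalized Tikhonov functional implicit in the abstract, written with the same symbols ($E$ the data fidelity, $R$ the regularization functional, $\alpha$ the regularization parameter):

```latex
u_{\alpha}^{\delta} \in \arg\min_{u} \; E\big(F(u), y^{\delta}\big) + \alpha \, R(u),
\qquad \text{e.g. } E\big(F(u), y^{\delta}\big) = \tfrac{1}{2}\,\big\|F(u) - y^{\delta}\big\|^{2}.
```

With quadratic $R$ this is classical Tikhonov regularization; the talk's proposal replaces $R$ with a TV functional inside an iterative regularization scheme.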

13:37
The Role of Optimization Algorithms in Generalization of Deep Learning
PRESENTER: Navid Azizan

ABSTRACT. Most modern learning problems are highly overparameterized, meaning that the model has many more parameters than the number of training data points; as a result, the training loss may have infinitely many global minima (in fact, a manifold of parameter vectors that perfectly interpolate the training data). It is therefore important to understand which interpolating solutions we converge to, how they depend on the initialization point and the learning algorithm, and whether they lead to different generalization performances. In this paper, we study these questions for the family of stochastic mirror descent (SMD) algorithms, of which the popular stochastic gradient descent (SGD) is a special case. Our contributions in this paper are both theoretical and experimental. On the theory side, we show that in the overparameterized nonlinear setting, if the initialization is close enough to the manifold of global optima, SMD with sufficiently small step size converges to a global minimum that is approximately the closest global minimum in Bregman divergence, thus attaining approximate implicit regularization. For highly overparameterized models, this closeness comes for free: the manifold of global optima is so high-dimensional that with high probability an arbitrarily chosen initialization will be close to the manifold. On the experimental side, our extensive experiments on the MNIST and CIFAR-10 datasets, using various initializations, various mirror descents, and various Bregman divergences, consistently confirm that this phenomenon indeed happens in deep learning: SMD converges to the global optimum closest to the initialization point in the Bregman divergence of the mirror used. Our experiments further indicate that there is a clear difference in the generalization performance of the solutions obtained from different SMD algorithms. 
Experimenting on the CIFAR-10 dataset with different regularizers, $\ell_1$ to encourage sparsity, $\ell_2$ (yielding SGD) to encourage small Euclidean norm, and $\ell_{10}$ to discourage large components in the parameter vector, consistently and definitively shows that, for small initialization vectors, $\ell_{10}$-SMD has better generalization performance than SGD, which in turn has better generalization performance than $\ell_1$-SMD. This surprising, and perhaps counter-intuitive, result strongly suggests the importance of a comprehensive study of the role of regularization, and the choice of the best regularizer, to improve the generalization performance of deep networks.

13:59
Convergent variants of gradient descent for neural networks

ABSTRACT. In practical applications of gradient descent to complex non-linear problems the choice of step size is based on trial-and-error and other heuristics. A case when it is easy to choose the step sizes is when the function to minimize has a bounded second derivative, but many problems in machine learning do not appear at first sight to have this property. However, this boundedness property can often be verified if we allow a spatially varying notion of length in the form of a Finsler structure. When this occurs, a rigorous gradient descent variant can be derived using a corresponding Finsler duality structure. In certain cases, relevant problem structure can be encoded in the metric, leading to tight bounds, while still enabling efficient solution of the resulting subproblems. In particular, by constructing a metric which is adapted to the hierarchical structure of a multilayer neural network, we obtain an optimization algorithm for this problem with non-asymptotic performance guarantees. Experiments on benchmark problems suggest this method can be useful as a rigorous alternative to current, heuristic versions of gradient descent.

14:21
On the Benefit of Width for Neural Networks: Disappearance of Basins
PRESENTER: Ruoyu Sun

ABSTRACT. Wide networks are often believed to have a nice optimization landscape, but what rigorous results can we prove? To understand the benefit of width, it is important to identify the difference between wide and narrow networks. In this work, we prove that from narrow to wide networks, there is a phase transition from having sub-optimal basins to having no sub-optimal basins. Specifically, we prove that for any continuous activation function, the loss surface of a class of wide networks has no sub-optimal basin, where a “basin” is defined as a setwise strict local minimum. Furthermore, for a large class of networks with width below a threshold, we construct a strict local minimum that is not global. These two results together establish the phase transition from narrow to wide networks. We also point out a subtle gap in the proof of [64], which claimed that no sub-optimal local minima exist for 1-hidden-layer wide neural networks.

13:15-14:45 Session MB3: Theoretical and Computational Advances in Combinatorial Optimization
Location: Card Room
13:15
A Combinatorial Cut-and-Lift Procedure with an Application to 0-1 Chance Constraints
PRESENTER: Margarita Castro

ABSTRACT. Cut generation and lifting are critical components for the performance of state-of-the-art integer programming solvers. This work proposes a new cut-and-lift procedure that relies on the combinatorial structure of binary problems via a Decision Diagram (DD) encoding of its constraints. We present a general framework that can be applied to a large range of binary optimization problems and show its applicability for normally distributed linear chance constraints. Our preliminary results show encouraging performance surpassing state-of-the-art integer programming solvers.

13:37
Ranking-based Facet Defining Inequalities for the Weak Order Polytope

ABSTRACT. The study of ordering polytopes has been essential to the solution of various challenging combinatorial optimization problems. For instance, the incorporation of facet defining inequalities (FDIs) from these polytopes in branch-and-cut approaches represents among the most effective solution methodologies known to date for some of these problems. The weak order polytope, defined as the convex hull of the characteristic vectors of all binary orders on n alternatives that are reflexive, transitive, and complete, has been particularly important for tackling problems in computational social choice, preference aggregation, and comparative probability. For the most part, FDIs for the weak order polytope have been obtained through enumeration and through derivation from FDIs of other combinatorial polytopes. In this talk, we derive new classes of FDIs for the weak order polytope by utilizing the equivalent representation of a weak order as a ranking of n objects that allows ties and by grouping characteristic vectors that share certain ranking-based structures. Furthermore, we demonstrate that a number of FDIs previously obtained through enumeration are actually special cases of the ranking-based FDIs.

13:59
On the convexification of constrained quadratic optimization problems with indicator variables
PRESENTER: Linchuan Wei

ABSTRACT. Motivated by modern regression applications, in this paper, we study the convexification of quadratic optimization problems with indicator variables and combinatorial constraints on the indicators. Unlike most of the previous work on convexification of sparse regression problems, we simultaneously consider the nonlinear objective, indicator variables, and combinatorial constraints. We prove that for a separable quadratic objective function, the perspective reformulation is ideal independent from the constraints of the problem. In contrast, while rank-one relaxations cannot be strengthened by exploiting information from k-sparsity constraint for k>1, they can be improved for other constraints arising in inference problems with hierarchical structure or multi-collinearity.

14:21
Subadditive Duality for Conic Mixed-Integer Programs
PRESENTER: Diego Moran

ABSTRACT. In this talk, we show that the subadditive dual of a feasible conic mixed-integer program (MIP) is a strong dual whenever it is feasible. Moreover, we show that this dual feasibility condition is equivalent to feasibility of the conic dual of the continuous relaxation of the conic MIP. In addition, we prove that all known conditions and other `natural' conditions for strong duality, such as strict mixed-integer feasibility, boundedness of the feasible set or essentially strict feasibility, imply that the subadditive dual is feasible. As an intermediate result, we extend the so-called `finiteness property' from full-dimensional convex sets to intersections of full-dimensional convex sets and Dirichlet convex sets. This is joint work with Burak Kocuk (Sabanci University) and Gustavo Angulo (PUC).

13:15-14:45 Session MB4: Mathematical Programming for Data Science
13:15
Lossless Compression of Deep Neural Networks
PRESENTER: Thiago Serra

ABSTRACT. Deep neural networks have been successful in many predictive modeling tasks, such as image and language recognition, where large neural networks are often used to obtain good accuracy. Consequently, it is challenging to deploy these networks under limited computational resources, such as in mobile devices. In this work, we introduce an algorithm that removes units and layers of a neural network while not changing the output that is produced, which thus implies a lossless compression. This algorithm, which we denote as LEO (Lossless Expressiveness Optimization), relies on Mixed-Integer Linear Programming (MILP) to identify Rectified Linear Units (ReLUs) with linear behavior over the input domain. By using L1 regularization to induce such behavior, we can benefit from training over a larger architecture than we would later use in the environment where the trained neural network is deployed.
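The stability notion LEO exploits can be illustrated with a cheaper sufficient test. The sketch below is an illustration, not the talk's algorithm: interval bound propagation certifies some ReLUs as always-zero or always-active over a box input domain, whereas LEO decides this exactly via MILP. An always-zero unit can be removed without changing the network's output.

```python
import numpy as np

rng = np.random.default_rng(0)

def stable_relus(weights, biases, lo, hi):
    """Identify ReLUs provably linear over a box input domain via interval
    bound propagation, a cheap sufficient check. Returns, per hidden layer,
    +1 (always active), -1 (always zero), or 0 (undetermined) per unit."""
    status = []
    for W, b in zip(weights, biases):
        # Pre-activation bounds for z = W x + b with x in [lo, hi] elementwise.
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        z_lo = W_pos @ lo + W_neg @ hi + b
        z_hi = W_pos @ hi + W_neg @ lo + b
        status.append(np.where(z_lo >= 0, 1, np.where(z_hi <= 0, -1, 0)))
        # Post-activation bounds feed the next layer.
        lo, hi = np.maximum(z_lo, 0), np.maximum(z_hi, 0)
    return status

# Tiny 2-4-4 network on the box [0, 1]^2; a strongly negative bias makes the
# first hidden unit provably inactive, so it could be pruned losslessly.
W1 = rng.standard_normal((4, 2)); b1 = np.array([-10.0, 0.1, 0.1, 0.1])
W2 = rng.standard_normal((4, 4)); b2 = np.zeros(4)
status = stable_relus([W1, W2], [b1, b2], np.zeros(2), np.ones(2))
print(status[0])  # first entry is -1: unit 0 is never active on the domain
```

Interval bounds only prove stability for some units; the MILP in the talk can additionally certify units that this relaxation leaves undetermined.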

Cluster chair: Fatma Kilinc-Karzan
Session chair: Dabeen Lee

13:37
Boolean Decision Rules via Column Generation
PRESENTER: Oktay Gunluk

ABSTRACT. In some machine learning applications, interpretable or explainable models for classification problems are preferred over potentially more accurate but less interpretable models. In this talk, we consider Boolean decision rule sets (equivalent to Boolean functions in disjunctive normal form) for binary classification. We define the complexity of a rule set to be the number of rules (clauses) plus the number of conditions (literals) across all clauses, and assume that simpler or less complex models are more interpretable. We discuss an integer programming formulation for such models that trades off classification accuracy against rule simplicity, and obtain high-quality classifiers of this type using column generation techniques. Compared to some recent alternatives, our algorithm dominates the accuracy-simplicity trade-off in 8 out of 16 datasets, and also produced the winning entry in the 2018 FICO explainable machine learning challenge. When compared to rule learning methods designed for accuracy, our algorithm seems to find significantly simpler solutions that are no less accurate. This is joint work with Sanjeeb Dash and Dennis Wei.

13:59
Outlier detection in time series via mixed-integer conic quadratic optimization

ABSTRACT. We consider the problem of estimating the true values of a Wiener process given noisy observations corrupted by outliers. The problem considered is closely related to the Trimmed Least Squares estimation problem, a robust estimation procedure well-studied from a statistical standpoint but poorly understood from an optimization perspective. In this paper we show how to improve existing mixed-integer quadratic optimization formulations for this problem. Specifically, we convexify the existing formulations via lifting, deriving new mixed-integer conic quadratic reformulations. The proposed reformulations are stronger and substantially faster when used with current mixed-integer optimization solvers. In our experiments, solution times are improved by at least two orders of magnitude.

13:15-14:45 Session MB5: Reformulations and Decompositions
Location: Furman Room
13:15
Exploiting symmetry in linear programming
PRESENTER: Ethan Deakins

ABSTRACT. Symmetry can be used in linear programming to "fold" an instance, creating a smaller, but equivalent LP. However, the solutions returned from this folded LP are in the barycenter of the original polyhedron. We show how to easily arrive at a vertex solution.

13:37
The Star Degree Centrality Problem: A Benders Decomposition Approach
PRESENTER: Mustafa Camur

ABSTRACT. We consider the problem of identifying the induced star with the largest open neighborhood in a graph. We first show that this problem, also known as the Star Degree Centrality (SDC) problem, is solvable in polynomial time on acyclic undirected graphs but remains NP-complete on bipartite graphs. We then propose a new integer programming formulation in which the number of constraints is O(|V|), in contrast to the existing formulation in the literature. In addition, we propose a Benders decomposition framework that is suitable for both the existing formulation and our new formulation, and examine several acceleration techniques, including valid inequalities, a single branch-and-bound tree, multi-cut generation, and heuristic approaches. Test cases are generated from the following models: i) the Barabási–Albert model, ii) the Watts–Strogatz model, and iii) the Erdős–Rényi model. The decomposition approach outperforms solving the problem via CPLEX on most of the instances produced, in terms of both solution time and solution quality; this is especially true as the graphs grow larger and denser. Lastly, we test the decomposition algorithm on large-scale protein–protein interaction networks, for which SDC is shown to be an efficient centrality metric.

13:59
An Integer Column Generation Procedure for Coordinating Product Transitions in Semiconductor Manufacturing using Combinatorial Auctions
PRESENTER: Ankit Bansal

ABSTRACT. Effective management of product transitions in semiconductor manufacturing requires decentralized coordination of manufacturing and product development units. We propose an Iterative Combinatorial Auction (ICA) framework using subadditive integer programming duality and column generation to generate near-optimal solutions maximizing corporate revenue over a finite planning horizon. Computational experiments show promising results.

14:21
The RIP Technique for Box Configuration for Birchbox
PRESENTER: Irv Lustig

ABSTRACT. Birchbox has been an innovator in the Internet box subscription business. A number of years ago, they formulated a mixed integer program (MIP) to solve the problem of determining the right number of configurations of boxes that would meet the needs of their diverse subscriber base. This MIP was taking close to 2 days to solve with Gurobi using close to 30 cores. We describe a reformulation of the problem and the development of the Reciprocating Integer Programming (RIP) Technique for addressing this problem. RIP leverages the power of the Gurobi mixed integer programming solver in multiple ways in a column generation scheme, which led to a reduction in solve times to 5 to 10 minutes on an 8 core laptop.
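The column generation scheme mentioned here can be made concrete on the classic cutting-stock problem. This is a textbook stand-in, not Birchbox's box-configuration model, and it assumes SciPy's HiGHS-based `linprog`: a restricted master LP over known patterns alternates with a knapsack pricing subproblem that generates a new pattern of negative reduced cost.

```python
import numpy as np
from scipy.optimize import linprog

# Cutting-stock toy: cut rolls of length 10 to meet demand for three sizes.
roll_len = 10
sizes = np.array([3, 4, 5])      # piece lengths
demand = np.array([30, 20, 10])  # pieces required

# Start with one trivial single-size pattern per piece type.
patterns = [np.eye(3)[i] * (roll_len // s) for i, s in enumerate(sizes)]

def knapsack(values, weights, cap):
    """Unbounded knapsack by DP: best pattern value and the pattern itself."""
    best = np.zeros(cap + 1)
    choice = [[0, 0, 0] for _ in range(cap + 1)]
    for c in range(1, cap + 1):
        best[c] = best[c - 1]; choice[c] = choice[c - 1][:]
        for j, w in enumerate(weights):
            if w <= c and best[c - w] + values[j] > best[c]:
                best[c] = best[c - w] + values[j]
                choice[c] = choice[c - w][:]; choice[c][j] += 1
    return best[cap], np.array(choice[cap], dtype=float)

while True:
    A = np.column_stack(patterns)
    # Restricted master LP: minimize rolls used subject to covering demand.
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
    duals = -res.ineqlin.marginals          # shadow price per piece type
    value, col = knapsack(duals, sizes, roll_len)
    if value <= 1 + 1e-9:                   # no column with reduced cost < 0
        break
    patterns.append(col)                    # price out the new pattern

print(round(res.fun, 2))  # → 22.5, the LP bound on the number of rolls
```

The same master/pricing alternation underlies the RIP scheme described above, with Gurobi solving far richer master and pricing problems than this toy's LP and knapsack.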

13:15-14:45 Session MB6: Network Optimization Models and Applications I
Location: Palmetto Room
13:15
Extreme-point Tabu Search Heuristic for Fixed-Charge Generalized Transshipment Problems

ABSTRACT. While researchers have studied generalized network flow problems extensively, the powerful addition of fixed charges on arcs has received scant attention. This work describes network-simplex-based algorithms that efficiently exploit the quasi-tree basis structure of the problem relaxations. It proposes heuristics that utilize a candidate list, a tabu search with short- and intermediate-term memories for the local search, and a diversification approach to solve fixed-charge transportation problems, as well as a dynamic linearization of the objective function extended to fixed-charge generalized transshipment problems. Computational testing of both heuristics demonstrates their effectiveness in terms of speed and quality of solutions to these mixed-integer models. Comparisons with a state-of-the-art solver, CPLEX, show that for transportation problems the extreme-point search algorithm runs, on average, four orders of magnitude faster and produces integer solution values within 2.2% of the optimal solution reported by CPLEX; for transshipment problems, the heuristic ran 1,000 times faster and produced solution values within 2.5% of the optimal reported by CPLEX.

13:37
Minimum blockers for the set covering problem
PRESENTER: Jose L. Walteros

ABSTRACT. Given a set $U$ of $n$ elements and a collection $S$ of $m$ sets whose union equals $U$, the set-covering problem is to identify the smallest sub-collection of $S$ whose union contains all the elements in $U$. Given a cost vector $c$ associated with blocking the selection of the subsets in $S$, we solve the problem of minimizing the cost of blocking some subsets of $S$ so that the size of the optimal set cover in the resulting problem is at least a given value $r$. We develop a general formulation for this problem and study the convex hull of feasible solutions. We run computational experiments over instances staged on graphs, and design and analyze various cutting plane strategies.

13:59
A Stochastic Programming Model with Endogenous and Exogenous Uncertainty for Reliable Network Design Under Random Disruption

ABSTRACT. Integrating infrastructure protection with network design models is effective, as these models provide strategic decisions that make a transportation network simultaneously efficient and reliable. We study a combined network design and infrastructure protection problem subject to random disruptions, where the protection is imperfect and multi-level and the effect of a disruption is likewise imperfect. We model a resource-constrained decision maker seeking to optimally allocate protection resources to facilities and construct links in the network so as to minimize the expected post-disruption transportation cost. The problem is a two-stage stochastic program with both endogenous and exogenous uncertainty: a facility's post-disruption capacity depends probabilistically on the protection decision, making that uncertainty endogenous, while the link construction decision directly affects the transportation decision. We implement an accelerated L-shaped algorithm to solve the model and use predictive modeling techniques to estimate the probability distribution of a facility's post-disruption capacity for a given protection level and disruption intensity. Numerical results show that solution quality is sensitive to the number of protection levels modeled. Results demonstrate that the mean value model performs very poorly as uncertainty increases, and also indicate that the stochastic programming model is sensitive to the estimation error of the predictive modeling techniques.

13:15-14:45 Session MB7: Theory and Applications of Two-Stage Stochastic Programming
Location: Alexander Room
13:15
Solving risk-averse mixed-integer stochastic programs using convex approximations

ABSTRACT. In stochastic decision-making environments, not only the expected outcome but also the level of risk associated with a decision is often important for determining the best decision. We study a class of optimization models that explicitly incorporate this risk aversion in the objective: two-stage risk-averse stochastic programs. In the first stage, only the distribution of the random parameters is known; in the second stage, the realizations are known and corrective actions can be taken if any constraints are violated. We focus on problems with integer restrictions in the second stage. Such restrictions arise naturally in many practical settings (e.g., fixed batch sizes). The resulting mixed-integer second-stage problems are generally non-convex, so the overall optimization problem is not convex and hence very hard to solve. We propose to overcome this difficulty by means of a convex approximation of the original non-convex optimization model. Solving this convex approximation is much easier, since we can draw on the rich toolbox of convex optimization, and it yields an approximate solution to the original model. Of course, this approach may introduce an approximation error. We therefore derive an upper bound on this error, which serves as a performance guarantee for our solution method.

13:37
Stochastic Team Formation Problem
PRESENTER: Nihal Berktas

ABSTRACT. The success of a team depends on the quality of communication among the team members as well as their technical capabilities. We study the Team Formation Problem (TFP), in which the quality of communication is taken into account using the proximity of the potential members in a social network. Given a set of required skills, TFP aims to construct a capable team that can communicate effectively. Acknowledging that the communication measures may be subject to uncertainty, we study a two-stage stochastic TFP where a subset of the communication costs are random, each with a discrete distribution. The first stage is a trial stage that serves as a tool to resolve some of the uncertainties, depending on the decision. The uncertainty we consider is therefore decision-dependent, or endogenous, for which disjunctive programming models are used in the related literature. We give an alternative integer formulation with a quadratic objective and develop a Benders-decomposition-based branch-and-cut algorithm on the linearized formulation. The number of subproblems to be solved in our algorithm is substantially smaller than the number of scenarios. The linear relaxations of the subproblems are strengthened with a partial application of the reformulation-linearization technique.

13:59
Conic Formulations for a Mean-Risk Stochastic Mixed Integer Nonlinear Program with Applications to Network Protection
PRESENTER: Amy Burton

ABSTRACT. Two-stage stochastic programs are often used to model network resource allocation when there is uncertainty in future performance. We propose a mean-risk mixed integer nonlinear program for network protection that hedges against future extreme events. The model is implemented with coherent risk measures on a transportation network in which the first stage minimizes total retrofitting cost and the second minimizes expected future travel cost. We use perspective relaxations and conic quadratic constraints to derive a convex reformulation of the nonlinear, non-separable second stage. Benders decomposition is used to solve the model, and computational results are presented. Our method applies to a wide class of stochastic mixed integer nonlinear programs with a pure binary first stage containing SOS-1 variables and a second-stage objective that is a sum of nonnegative univariate polynomials.

15:10-16:10 Session Plenary III: Do Simpler Models Exist and How Can We Find Them? (Speaker: Cynthia Rudin)

Abstract:

While the trend in machine learning has tended towards more complex hypothesis spaces, it is not clear that this extra complexity is always necessary or helpful for many domains. In particular, models and their predictions are often made easier to understand by adding interpretability constraints. These constraints shrink the hypothesis space; that is, they make the model simpler. Statistical learning theory suggests that generalization may be improved as a result as well. However, adding extra constraints can make optimization (exponentially) harder. For instance it is much easier in practice to create an accurate neural network than an accurate and sparse decision tree. We address the following question: Can we show that a simple-but-accurate machine learning model might exist for our problem, before actually finding it? If the answer is promising, it would then be worthwhile to solve the harder constrained optimization problem to find such a model. In this talk, I present an easy calculation to check for the possibility of a simpler model. This calculation indicates that simpler-but-accurate models do exist in practice more often than you might think. Time permitting, I will briefly overview several new methods for interpretable machine learning. These methods are for (i) sparse optimal decision trees, (ii) sparse linear integer models (also called medical scoring systems), and (iii) interpretable case-based reasoning in deep neural networks for computer vision.

This is joint work with Lesia Semenova, Ron Parr, Xiyang Hu, Chudi Zhong, Jimmy Lin, Margo Seltzer, Chaofan Chen, Oscar Li, Alina Barnett, Daniel Tao, Jonathan Su, and Berk Ustun.

Bio:

Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, and directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. She is also an associate director of the Statistical and Applied Mathematical Sciences Institute (SAMSI). Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the “Top 40 Under 40” by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is past chair of both the INFORMS Data Mining Section and the Statistical Learning and Data Science Section of the American Statistical Association. She has also served on committees for DARPA, the National Institute of Justice, and AAAI. She has served on three committees for the National Academies of Sciences, Engineering and Medicine, including the Committee on Applied and Theoretical Statistics, the Committee on Law and Justice, and the Committee on Analytic Research Foundations for the Next-Generation Electric Grid. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics. She was a Thomas Langford Lecturer at Duke University during the 2019-2020 academic year and will be the Terng Lecturer at the Institute for Advanced Study in 2020.

Location: Gold Ballroom
16:30-18:00 Session MC1: Data-driven Optimization in Healthcare
Location: Gold Ballroom
16:30
Machine Learning to Address Hospital Capacity: Predicting Discharges and Identifying Barriers to Discharge
PRESENTER: Taghi Khaniyev

ABSTRACT. Hospital congestion is caused, among other factors, by an imbalance between the timing of patient admissions and discharges. We trained a neural network model to predict next-day surgical patient discharges based on previous patients' recovery care paths at a large academic medical center. Clinical data were included in the model based on whether they represented a clinical milestone of recovery or a clinical or administrative barrier to discharge. A clinical milestone of recovery was defined as an event that marks the patient’s progression toward recovery and discharge; a barrier to discharge was defined as an event that may postpone the patient’s discharge. The prediction model had an AUC of 0.89. Patients’ discharge predictions were categorized as (i) NO, (ii) MAYBE, or (iii) YES. Furthermore, for each patient, a list of existing barriers was identified, and an optimization model was developed to select the minimal subset of barriers that need to be resolved to bring a patient’s score above a pre-determined threshold. This minimal list is intended to serve as a prioritized action list for each patient.
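The "minimal subset of barriers" step admits a simple sketch if one assumes, purely for illustration, that resolving each barrier adds a fixed amount to the discharge score (the talk's actual model is a neural network; all names and numbers below are hypothetical):

```python
from itertools import combinations

def minimal_barrier_set(base_score, barrier_gain, threshold):
    """Smallest set of barriers whose resolution lifts the discharge score to
    the threshold, by brute force over subset sizes. Assumes score gains are
    additive, which is an illustration-only simplification."""
    barriers = list(barrier_gain)
    for k in range(len(barriers) + 1):
        for combo in combinations(barriers, k):
            if base_score + sum(barrier_gain[b] for b in combo) >= threshold:
                return set(combo)
    return None  # threshold unreachable even after resolving every barrier
```

With a non-additive scoring model (as in the talk), the same question becomes a genuine optimization problem over subsets, which is presumably why a dedicated model is needed.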

16:52
Adaptive Pharmaceutical Supply Chain Design under Disruption and its Effects on Drug Shortages
PRESENTER: Emily Tucker

ABSTRACT. Drug shortages continue to be a public health crisis in the United States. There are hundreds of active shortages each year, and they are often caused by supply chain disruptions. Even when disruptions occur, the configuration of pharmaceutical supply chains is fairly rigid, and shortages last 14 months on average. In this work, we present a multi-stage stochastic program to consider the effects of adaptable supply chains. The model optimizes a pharmaceutical company's supply chain configuration over time under uncertainty in its components: the active pharmaceutical ingredient suppliers, manufacturing plants, and manufacturing lines. We discuss model modifications that allow the application of the Stochastic Dual Dynamic Programming algorithm, and analyze how adaptability may affect oncology drug shortages under a variety of market conditions.

17:14
Data-driven optimization of supplemental screening usage in breast cancer detection

ABSTRACT. The imperfect nature of mammography has led to increased consideration of supplemental ultrasound and magnetic resonance imaging (MRI) screening for timely breast cancer detection, particularly for high-risk women, including those with dense breasts. Over 35 million US women aged 40 or older have dense breasts. Breast density not only impairs screening accuracy but also significantly increases the risk of developing breast cancer, so millions of women with dense breasts face a disproportionate risk of death from breast cancer. We formulate a personalized dynamic model to optimize the use of available supplemental screening modalities and scrutinize the screening recommendations of this optimization model in a detailed simulation model. Our data-driven results suggest that, when maximizing quality-adjusted life expectancy (QALE), supplemental ultrasound is used more often than MRI and results in fewer false positives, while MRI yields higher cancer detection rates and shorter time to detection. Compared to mammography-only screening, supplemental screening detects about 0.9% more cancer cases at the in situ stage, shortens time until detection by 5.7%, reduces false positives by 6.2%, and is associated with an incremental QALE gain of about one month per patient.

17:36
A distributionally robust optimization approach for outpatient colonoscopy scheduling
PRESENTER: Karmel Shehadeh

ABSTRACT. We consider the outpatient colonoscopy scheduling problem, recognizing the impact of pre-procedure bowel preparation (prep) quality on the variability in colonoscopy duration. Data from a large academic medical center indicates that colonoscopy durations are bimodal, i.e., depending on the prep quality they can follow two different probability distributions, one for those with adequate prep and the other for those with inadequate prep. We therefore define a distributionally robust outpatient colonoscopy scheduling (DROCS) problem that seeks optimal appointment sequence and schedule to minimize the worst-case weighted expected sum of patient waiting, provider idling, and provider overtime, where the worst-case is taken over an ambiguity set (a family of distributions) characterized through the known mean and support of the prep quality and durations. We derive an equivalent and tractable mixed-integer linear programming formulation to solve DROCS. Finally, we present a case study based on extensive numerical experiments in which we draw several managerial insights into colonoscopy scheduling.

16:30-18:00 Session MC2: Tutorial: A Unified Framework for Sequential Decisions under Uncertainty
16:30
Tutorial: A Unified Framework for Sequential Decisions under Uncertainty

ABSTRACT. Sequential decision problems are universal, spanning engineering, the sciences, e-commerce, transportation, finance, and health. These span dynamic resource allocation problems, control problems, stochastic search, simulation-optimization, as well as active learning problems, adversarial games and multiagent problems (any problem which involves decisions, information, decisions, information). These problems have been addressed by over 15 different communities (stochastic programming and dynamic programming are just two). I will present a universal framework that covers all of these problems, and all fields of research that have emerged to address them. The point of departure with most communities is the focus on optimizing over policies. I will then describe four (meta)classes of policies that I claim cover all methods that have ever been addressed in the academic literature, or industry practice. This framework increases the visibility of all fields, since none of the four classes is a panacea. Furthermore, the idea of designing hybrids is especially powerful. I will highlight a particularly powerful hybrid for multistage stochastic programs using a parameterized deterministic lookahead which represents an important emerging area of research for the optimization community.

16:30-18:00 Session MC3: Recent Advances in IPCO I
Location: Card Room
16:30
Inverse Mixed-Integer Optimization with Trust-Region Methods
PRESENTER: Ian Yihang Zhu

ABSTRACT. Inverse optimization describes the use of an observed solution to a (forward) optimization problem to infer unknown model parameters of that problem. In this talk, we explore inverse mixed-integer optimization, in which the forward optimization problem includes discrete variables. We decompose the problem into a master problem and a subproblem, where cuts generated by the subproblem are iteratively added to the master problem. We present geometric insights into the inverse optimization problem, which we leverage to design a novel set of trust-region methods that lead to significantly faster cut generation and convergence.

16:52
Dynamic Resource Allocation in the Cloud with Near-Optimal Efficiency

ABSTRACT. Efficient online resource allocation among distinct users is a critical task in the thriving area of cloud computing. This has driven renewed interest in allocation models where users have different demand patterns and service requirements. To ensure service requirements, cloud offerings often come with a service level agreement (SLA) between the provider and the users that specifies the amount of resource a user is entitled to. In many settings, providers would like to operate resources at high utilization while simultaneously respecting individual SLAs. However, there is a trade-off between these two objectives: focusing solely on satisfying SLAs can lead to poor utilization, while some users may end up deprived of resources if we concentrate entirely on ensuring high utilization. We introduce a model that captures basic properties of shared systems, including SLAs, highly limited feedback, and unpredictable input demand. In this framework, we develop a simple and practical algorithm for a single divisible resource (e.g., CPU) that achieves near-optimal performance on the above two objectives. First, it guarantees nearly optimal utilization of the resource, even compared to the omniscient offline dynamic optimum. Second, it simultaneously satisfies a constant fraction of the SLAs for each user, up to a small error.

17:14
Neural network approximations in quadratic optimization
PRESENTER: Benjamin Beach

ABSTRACT. We present a technique for approximately solving nonconvex quadratic optimization problems, with provable guarantees on the approximation quality. The approach leverages an elegant approximation for univariate quadratic functions due to Yarotsky (2016), formulating this (simple) deep neural network using mixed-integer programming. Notably, the number of constraints, binary variables, and auxiliary continuous variables used in this formulation grows logarithmically in the approximation error. Combining this with a diagonal perturbation technique to convert a nonseparable quadratic function into a separable one, we present a mixed-integer convex quadratic relaxation for nonconvex quadratic optimization problems. We study the facial structure of our neural network approximation, and present computational results on box-constrained quadratic programming problems, showing that our technique can outperform existing approaches in terms of solve time and dual bounds.
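The univariate building block behind this construction can be sketched in plain Python (the talk encodes it via mixed-integer constraints; this sketch only shows why the error shrinks exponentially in the number of layers):

```python
def hat(x):
    """One sawtooth 'tooth': the piecewise-linear hat function on [0, 1]."""
    return 2 * x if x <= 0.5 else 2 * (1 - x)

def x_squared_approx(x, m):
    """Yarotsky-style approximation of x**2 on [0, 1]: the piecewise-linear
    interpolant on a grid of width 2**-m, built from m composed sawtooths.
    The maximum error is 2**(-2*m - 2), i.e. exponentially small in m."""
    g, out = x, x
    for s in range(1, m + 1):
        g = hat(g)           # g is now the s-fold composition of the hat
        out -= g / 4 ** s    # subtract the s-th sawtooth correction
    return out
```

Since m layers buy error 2**(-2*m - 2), an MIP encoding of this network needs only logarithmically many binaries in the target accuracy, which is the growth rate claimed in the abstract.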

17:36
Characterizing Linearizable QAPs by the Level-1 RLT
PRESENTER: Lucas Waddell

ABSTRACT. The quadratic assignment problem (QAP) is an extremely challenging NP-hard combinatorial optimization problem. Due to its difficulty, a research emphasis has been to identify special instances that are polynomially solvable. Included within this emphasis are instances which are linearizable; that is, which can be equivalently rewritten as a linear assignment problem so that the objective function value is preserved at all feasible solutions. Various sufficient conditions for identifying linearizable instances are known, and these conditions have been explained in terms of a relaxed version of a level-1 reformulation-linearization-technique (RLT) form. Other work provides a set of conditions that is both necessary and sufficient for identifying such instances. Our contribution is to show that a relaxed level-1 RLT form possesses a special constraint structure that promotes, from a polyhedral perspective, a different necessary and sufficient condition. Specifically, an instance of the QAP is shown to be linearizable if and only if this relaxed form is bounded. In addition to providing a different perspective on the notion of being linearizable, this study shows that every linearizable instance of the QAP has an optimal solution to the (polynomial-sized) level-1 RLT linear program that is binary. The converse, however, is not true.

16:30-18:00 Session MC4: Recent advances in nonlinear and non-convex optimization
Chair: Andres Gomez
16:30
An MINLP approach to bounding probability with applications to Network routing
PRESENTER: Ashish Chandra

ABSTRACT. We consider adjustable robust optimization problems (AROP), where a subset of the variables can adjust subsequent to revelation of uncertainties. We are interested in the probability of the scenarios where the objective does not meet a pre-specified threshold. The uncertainty set is specified as the vertices of a 0-1 polytope. We specialize our results to network routing problems, for which the corresponding AROP is NP-Hard.

The AROP consists of here-and-now decisions and adaptive decisions that are adjustable upon the revelation of uncertainties. Given the here-and-now decisions, the feasibility problem is to decide whether there exists an adaptive decision that satisfies the constraints of the AROP for each element of the uncertainty set. We focus on finding the probability of the elements of the uncertainty set for which no feasible adaptive decision exists. We observe that although our modified version of RLT performed well for the worst-case problem, it gave trivial overestimates of this probability when used together with the Markov inequality. We discuss various techniques for strengthening these bounds using concave envelope constructions and a polynomial-time counting algorithm for estimating the number of knapsack solutions.

16:52
On SDP formulations for quadratic optimization with indicator variables
PRESENTER: Shaoning Han

ABSTRACT. In this paper, we study quadratic optimization with indicator variables, focusing on the case where the quadratic function is convex and the continuous variables are nonnegative. We first prove that the standard SDP formulation is equivalent to the optimal perspective relaxation of the problem. We then study the convex hull of the associated mixed-integer set and, based on it, propose a novel extended formulation using SDP techniques. We show that the proposed extended formulation is stronger than many convex relaxations in the literature.

17:14
Computing the Projection onto the Degenerate Doubly Nonnegative Cone
PRESENTER: Ying Cui

ABSTRACT. The doubly nonnegative (DNN) cone, the set of all positive semidefinite matrices whose elements are nonnegative, is a popular approximation of the computationally intractable copositive cone. The major difficulty in implementing a Newton-type method to compute the projection of a given large-scale matrix onto the DNN cone lies in the possible failure of constraint nondegeneracy, a generalization of the linear independence constraint qualification for nonlinear programming. Such a failure results in the singularity of the nonsmooth equation representing the Karush-Kuhn-Tucker condition, which prevents the semismooth Newton-CG method from solving it with a desirable convergence rate. We overcome this difficulty by solving a sequence of well-conditioned nonsmooth equations generated by the augmented Lagrangian method (ALM) instead of the single singular equation above. By leveraging the metric subregularity of the normal cone associated with the positive semidefinite cone, we derive sufficient conditions that ensure the dual quadratic growth condition of the underlying problem, which further leads to the asymptotic superlinear convergence of the proposed ALM. Numerical results on instances from the semidefinite programming library are presented to demonstrate the efficiency of the algorithm for computing the DNN projection to very high accuracy.

16:30-18:00 Session MC5: Recent Developments in Optimization for Data Science
Location: Furman Room
16:30
Electrical Flows over Spanning Trees
PRESENTER: Swati Gupta

ABSTRACT. Electricity distribution and transmission networks have been a rich source of non-convex problems with combinatorial structure that have helped discover limitations of, as well as advance, the theory of discrete and continuous optimization. In this work, we consider the network reconfiguration problem that seeks a rooted tree T such that the energy of the (unique) feasible electrical flow over T is minimized. The tree requirement on the support of the flow is motivated by operational constraints for safety in electricity distribution networks. The bulk of existing results on the structure of electrical flows, Laplacian solvers, bicriteria tree approximations, etc., does not easily give guarantees for this problem, while many heuristic methods have been developed in the power systems community as early as 1989. Our main contribution is to give the first provable approximation guarantees for the network reconfiguration problem. We provide novel lower bounds and corresponding approximation factors for various settings, ranging from $\min\{O(m-n), O(n)\}$ for general graphs, to $O(\sqrt{n})$ over grids with uniform resistances on edges, and $O(1)$ for grids with uniform edge resistances and demands. To obtain the result for general graphs, we give a new method for (approximate) spectral graph sparsification, which may be of independent interest. We show compelling computational results using these ideas.
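The reason the tree support matters — the feasible flow over a tree is unique and cheap to evaluate — can be sketched as follows (an illustrative helper, not the authors' code; names and data layout are hypothetical):

```python
def tree_flow_energy(tree_adj, resistance, demand, root):
    """Energy of the unique feasible flow on a tree serving `demand` from
    `root`: each edge carries the net demand of the subtree below it, and
    the energy is the sum over edges of r_e * f_e**2."""
    energy = 0.0

    def subtree_demand(u, parent):
        nonlocal energy
        total = demand.get(u, 0.0)
        for v in tree_adj[u]:
            if v == parent:
                continue
            d = subtree_demand(v, u)            # flow on edge (u, v)
            edge = (min(u, v), max(u, v))       # undirected edge key
            energy += resistance[edge] * d * d  # r_e * f_e**2
            total += d
        return total

    subtree_demand(root, None)
    return energy
```

Evaluating any given tree is thus a single DFS; the hardness, and the subject of the approximation guarantees above, is in choosing which tree to use.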

16:52
Categorical Feature Compression via Submodular Optimization

ABSTRACT. We design a highly scalable vocabulary compression algorithm with a $1-1/e$ approximation guarantee for the problem of maximizing the mutual information between the compressed categorical feature and the target binary labels. It is based on an interesting re-parameterization that turns the objective into a submodular function. Besides an efficient $O(n \log n)$-time algorithm, we design a distributed implementation with the same approximation guarantee that runs in $O(\log n)$ rounds over $k$ machines, each consuming $O(n/k)$ space. Finally, we present results of our benchmark on the Criteo dataset.
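The $1-1/e$ factor is the guarantee of the classic greedy algorithm for monotone submodular maximization, sketched here on a toy coverage function rather than the paper's mutual-information objective (the paper's faster and distributed variants are not shown):

```python
def greedy_submodular(ground, f, k):
    """Greedy maximization of a monotone submodular set function f under a
    cardinality constraint |S| <= k; achieves f(S) >= (1 - 1/e) * OPT."""
    S = set()
    for _ in range(k):
        candidates = [x for x in ground if x not in S]
        if not candidates:
            break
        # pick the element with the largest marginal gain f(S + x) - f(S)
        best = max(candidates, key=lambda x: f(S | {x}) - f(S))
        if f(S | {best}) - f(S) <= 0:
            break
        S.add(best)
    return S
```

The naive greedy above costs $O(nk)$ function evaluations; the abstract's $O(n \log n)$ bound indicates the authors exploit the structure of the mutual-information objective rather than calling it as a black box.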

17:14
Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence

ABSTRACT. The task of recovering a low-rank matrix from its noisy linear measurements plays a central role in computational science. Smooth formulations of the problem often exhibit an undesirable phenomenon: the condition number, classically defined, scales poorly with the dimension of the ambient space. In contrast, we here show that in a variety of concrete circumstances, nonsmooth penalty formulations do not suffer from the same type of ill-conditioning. Consequently, standard algorithms for nonsmooth optimization, such as subgradient and prox-linear methods, converge at a rapid dimension-independent rate when initialized within constant relative error of the solution. Our framework subsumes such important computational tasks as phase retrieval, blind deconvolution, quadratic sensing, matrix completion, and robust PCA. Numerical experiments on these problems illustrate the benefits of the proposed approach. This is joint work with V. Charisopoulos, Y. Chen, D. Davis, L. Ding, D. Drusvyatskiy.

17:36
Sparse Regression with Interactions: A Scalable Integer Programming Approach
PRESENTER: Hussein Hazimeh

ABSTRACT. We consider the linear regression problem where the main features are augmented with pairwise interactions. This problem can be enhanced by imposing sparsity under the strong hierarchy (SH) constraint, which states that an interaction coefficient is non-zero only if its associated main coefficients are non-zero. Learning a sparse model under SH can be expressed as a Mixed Integer Quadratic Program (MIQP). However, this problem is very difficult to scale using current MIQP solvers. To address this issue, we develop a specialized non-linear branch-and-bound (BnB) algorithm, which employs proximal gradient methods to solve the node relaxations and obtain upper bounds. We introduce new screening methods, active-set updates, and reformulations that exploit sparsity to reduce the update complexity of the proximal methods. Our experiments indicate that the proposed BnB is over three orders of magnitude faster than Gurobi and MOSEK, solving instances with 50 million interactions to optimality in minutes. Moreover, the BnB shows considerable statistical gains when compared to the state-of-the-art convex relaxations in the literature. This is joint work with Rahul Mazumder.

16:30-18:00 Session MC6: Network Interdiction
Location: Palmetto Room
16:30
A Progressive approximation approach for the exact solution of sparse large-scale binary interdiction games
PRESENTER: Jorge A. Sefair

ABSTRACT. We present a progressive approximation algorithm for the exact solution of several classes of interdiction games in which two non-cooperative players (namely an attacker and a follower) interact sequentially. The follower must solve an optimization problem that has been previously perturbed by a series of attacking actions led by the attacker. These attacking actions aim at augmenting the cost of the decision variables of the follower's optimization problem. The objective, from the attacker's viewpoint, is to choose an attacking strategy that reduces as much as possible the quality of the optimal solution attainable by the follower. The progressive approximation mechanism consists of the iterative solution of a relaxed interdiction problem, in which the follower's actions are restricted to a subset of the full solution space, and of a pricing subproblem invoked to prove the optimality of the attacking strategy. This scheme is especially useful when the optimal solutions to the follower's subproblem intersect the decision space of the attacker in only a small number of decision variables. Under this assumption, the progressive approximation method can scale and solve interdiction games otherwise intractable for classical methods. We illustrate the efficiency of this framework on shortest path, 0-1 knapsack, and facility location interdiction games.

16:52
Estimating Wireless Mesh Network Vulnerability: A Search for the “Best” Interference Model

ABSTRACT. Analyzing the vulnerability of a wireless network is crucial to protect the network from attackers. This paper studies the problem of placing a set of jammers in 3-D space to minimize the throughput of a wireless communication network. The main goal of this paper is to study the effects of jamming under the following models of interference: physical, capture, protocol, and interference range. This paper presents a mixed-integer programming model and a branch-and-cut procedure for the jammer location problem under several models of interference. We implement multiple valid inequalities, including clique and odd-cycle inequalities, to speed up the branch-and-cut algorithm. Numerical experiments demonstrate the effect of different interference models on runtime and solution tractability. To obtain the most beneficial strategies for designing a network to be jamming-resistant, we investigate the effect of changing the density of nodes, the communication range, the interference range, and the number of communication channels. We also investigate the effect of changing the number of potential jammer locations, the attacker’s budget, and the jamming range to find the most beneficial strategies from the attacker’s perspective that maximize the impact of a jamming attack.

17:14
Network Interdiction with Asymmetric Cost Uncertainty
PRESENTER: Di Nguyen

ABSTRACT. We study a network game between a leader and a follower with opposing objectives. The follower aims to travel from source to sink incurring the least cost possible, while the leader's goal is to increase the expected value of the shortest-path cost. To this end, the leader inflicts attacks on some arcs of the network, incurring extra costs on those arcs. The leader knows only the cost distributions, not each arc's exact value. The follower, after observing the attacks made on the network, obtains perfect information and traverses a least-cost path.

16:30-18:00 Session MC7: Advances in Interval Linear Programming
Location: Alexander Room
16:30
Interval Linear Programming with an Additional Function of Interest: Computational Complexity, Reformulations, and Heuristics
PRESENTER: Mohsen Mohammadi

ABSTRACT. Interval linear programming is an effective tool to deal with linear programs under interval-valued uncertainty. Determining the set of all optimal solutions of a linear program with interval data is known to be one of the most challenging problems in the topic. We argue, in this context, that finding the range of an additional function of interest over the set of all optimal solutions of an interval linear program can be a problem of interest in practice. Here, after formally defining the problem, we investigate its computational complexity. We also study different reformulations of the problem and discuss pros and cons of each. We eventually offer some heuristics to solve the problem, and test them on a set of randomly generated instances.

16:52
The Outcome Range Problem in Interval Linear Programming
PRESENTER: Mohsen Mohammadi

ABSTRACT. We study linear programming problems where all or some of the input data vary within given real-valued intervals. Under this assumption, finding all optimal solutions obtained from all possible realizations of the interval data has been a problem of interest in the past. In this context, we introduce a new problem, the outcome range problem, which consists of computing the best and the worst values of a given (additional) linear function over the set of all optimal solutions of a linear program with interval data. The proposed problem is relevant in practice and challenging from a theoretical perspective. Specifically, the problem aims at quantifying unintended consequences of an optimal decision in an uncertain environment, providing additional important information about a system for decision makers who observe it from outside. We show this through a case study related to a health care problem. After formally defining the problem, we investigate its computational complexity and develop two heuristic methods to solve it. We finally test them on randomly generated instances and on a real case.
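A naive baseline for the outcome range — enumerate endpoint scenarios of the interval data, solve each realization, and track the range of the extra function over the optima found — can be sketched as below. This is only a heuristic (the true range need not be attained at interval endpoints) and is not the authors' method; the solver argument and names are placeholders:

```python
from itertools import product

def outcome_range_heuristic(scenario_solver, intervals, g):
    """Evaluate g at the optimum of every endpoint scenario of the interval
    data and return the (heuristic) min/max outcome values observed."""
    lo = hi = None
    for scen in product(*intervals):
        x_opt = scenario_solver(scen)  # optimal solution for this realization
        val = g(x_opt)
        lo = val if lo is None else min(lo, val)
        hi = val if hi is None else max(hi, val)
    return lo, hi
```

For instance, for min x subject to a*x >= 1, x >= 0 with a in [1, 2], the optimum is x* = 1/a, so the outcome of g(x) = 3x ranges over [1.5, 3]; with p interval coefficients the enumeration visits 2**p scenarios, which is why dedicated heuristics are needed at scale.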

17:14
Control and tolerance two-sided interval linear programming problems
PRESENTER: Artur Gorka

ABSTRACT. We investigate the control and tolerance solutions of a two-sided interval linear system. These two types of solutions can be represented in the form of inequalities on the interval boundaries, depending on the signs of the solutions. Therefore, control and tolerance two-sided interval linear programs can be transformed into linear programming problems. Moreover, the semantics of these two types of solutions are different, even though we can algebraically interchange the roles of the interval information. Hence, the feasible solution set of a two-sided interval linear system can be represented by either the control or the tolerance solution set, but the semantics clarify the real meaning of the problem.