IOS2022: INFORMS OPTIMIZATION SOCIETY CONFERENCE 2022
PROGRAM FOR TUESDAY, MARCH 15TH


08:30-10:00 Session TA1: Theory and Algorithms for Stochastic and Robust Optimization
08:30
Approximating Optimal Solutions to Biconvex Parametric Programs
PRESENTER: Andrew Pangia

ABSTRACT. Convex programming has been a research topic for a long time, both theoretically and algorithmically. Frequently, these programs lack complete data or contain rapidly shifting data. In response, we consider solving parametric programs, which allow for fast evaluation of the optimal solutions once the data is known. It has been established that, when the objective and constraint functions are convex in both variables and parameters, the optimal solutions can be estimated via linear interpolation. We extend the linear interpolation approach to more general parametric programs, which opens up broader applications and new research opportunities.
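As a generic illustration of the interpolation idea (a toy sketch, not the authors' method), the snippet below precomputes optimal solutions of a simple parametric program on a parameter grid and estimates intermediate optimal solutions by linear interpolation; the toy problem, grid, and query point are all invented for illustration.

```python
import numpy as np

# Toy parametric program: min_x (x - theta)^2  s.t.  0 <= x <= 1,
# whose optimal solution is x*(theta) = clip(theta, 0, 1).
x_star = lambda theta: np.clip(theta, 0.0, 1.0)

grid = np.linspace(-0.5, 1.5, 9)       # parameters solved "offline"
solutions = x_star(grid)               # in practice, these come from a solver

def approx_solution(theta):
    """Estimate x*(theta) by linear interpolation between stored solutions."""
    return float(np.interp(theta, grid, solutions))

print(approx_solution(0.37))           # matches the true optimum 0.37
```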

08:52
A Lagrangian Dual Method for Two-Stage Robust Optimization with Binary Uncertainties

ABSTRACT. This paper presents a new exact method to calculate worst-case parameter realizations in two-stage robust optimization problems with categorical or binary-valued uncertain data. Traditional exact algorithms for these problems, notably Benders decomposition and column-and-constraint generation, compute worst-case parameter realizations by solving mixed-integer bilinear optimization subproblems. However, their numerical solution can be computationally expensive, not only due to their large size after reformulating the bilinear terms, but also because decision-independent bounds on their variables are typically unknown. We propose an alternative Lagrangian dual method that circumvents these difficulties and is readily integrated into either algorithm. We specialize the method to problems where the binary parameters switch constraints on or off, as these are commonly encountered in applications, and discuss extensions to problems that lack relatively complete recourse and to those with integer recourse. Numerical experiments provide evidence of significant computational improvements over existing methods.

09:14
Distributionally Robust Optimization with Decision-Dependent Information Discovery
PRESENTER: Qing Jin

ABSTRACT. We study a two-stage distributionally robust optimization problem with decision-dependent information discovery (DRO-DDID), where the revelation of the uncertain parameters is not free and depends on the decision maker's actions. The setting finds many important applications in project selection, clinical trials, recruitment, and beyond. However, despite the problem's wide applicability, it has not been well studied. We propose a novel DRO-DDID framework that fills this gap in the literature. We adopt the popular K-adaptability scheme, which chooses K candidate policies here and now and implements the best policy after the selected uncertain parameters have been observed. We then present a decomposition algorithm that solves the K-adaptable formulation exactly. The algorithm iteratively finds the current optimal information discovery solution, evaluates the objective value at that solution, and adds cuts. For the evaluation problem, we develop a branch-and-cut algorithm that is provably convergent. Finally, we showcase the effectiveness of our model framework and algorithm on the R&D project portfolio optimization problem and the best box problem.

09:36
On the Power of Static Assignment Policies for Robust Facility Location Problems
PRESENTER: Omar El Housni

ABSTRACT. We consider a two-stage robust facility location problem on a metric under an uncertain demand. The decision-maker needs to decide on the (integral) units of supply for each facility in the first stage to satisfy an uncertain second-stage demand, such that the sum of first stage supply cost and the worst-case cost of satisfying the second-stage demand over all scenarios is minimized. The second-stage decisions are only assignment decisions without the possibility of adding recourse supply capacity. This makes our model different from existing work on two-stage robust facility location and set covering problems. We consider an implicit model of uncertainty with an exponential number of demand scenarios specified by an upper bound k on the number of second-stage clients. In an optimal solution, the second-stage assignment decisions depend on the scenario; surprisingly, we show that restricting to a fixed (static) fractional assignment for each potential client irrespective of the scenario gives us an O(log k/log log k)-approximation for the problem. Moreover, the best such static assignment can be computed efficiently giving us the desired guarantee.

08:30-10:00 Session TA2: Nonlinear Composite and Constrained Optimization
Location: Gold Ballroom
08:30
A New Randomized Primal-Dual Algorithm for Convex Optimization with Optimal Last Iterate Rates
PRESENTER: Deyi Liu

ABSTRACT. We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems, which covers different existing variants and model settings from the literature. We prove that our algorithm achieves optimal O(n/k) and O(n^2/k^2) convergence rates (up to a constant factor) in two cases: general convexity and strong convexity, respectively, where k is the iteration counter and n is the number of block-coordinates. Our convergence rates are obtained through three criteria: primal objective residual and primal feasibility violation, dual objective residual, and primal-dual expected gap. Moreover, our rates for the primal problem hold on the last iterate sequence. Our dual convergence guarantee additionally requires a Lipschitz continuity assumption. We specialize our algorithm to handle two important special cases, where our rates still apply. Finally, we verify our algorithm on two well-studied numerical examples and compare it with two existing methods. Our results show that the proposed method has encouraging performance across different experiments.

08:52
Decentralized Composite Optimization with Compression
PRESENTER: Yao Li

ABSTRACT. Decentralized optimization has extensive applications in large-scale machine learning problems. To tackle the communication bottleneck in practice, existing distributed algorithms have been adapted with compression techniques that exchange inexact, lower-bit information in the iterations. Focusing on problems with a potentially non-smooth regularizer, we propose a stochastic decentralized algorithm in which the communicated information is compressed by an unbiased stochastic operator of arbitrary precision, where strong convexity is imposed on the smooth component of the objective function. Convergence is guaranteed to be linear up to a neighborhood of the solution and is further accelerated to reach the exact solution by two well-known variance reduction schemes, Loopless-SVRG and SAGA, for problems with finite-sum structure. A detailed complexity analysis shows the effect of inexact communication on the compressed algorithm compared to the vanilla one.
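The compression operator here is only assumed to be unbiased with adjustable precision. A standard example of such an operator is the random-dithering quantizer sketched below; this is an illustrative choice, not necessarily the operator used in the talk.

```python
import numpy as np

def unbiased_quantize(x, levels=4, rng=None):
    """Random-dithering quantizer with E[Q(x)] = x; `levels` controls the
    precision (and hence the number of bits communicated per entry)."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return x.copy()
    scaled = np.abs(x) / norm * levels
    lower = np.floor(scaled)
    prob = scaled - lower                  # P(round up) makes Q unbiased
    q = lower + (rng.random(x.shape) < prob)
    return np.sign(x) * q * norm / levels
```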

09:14
Towards Quantized Model Parallelism for Graph-Augmented MLPs Based on Gradient-Free ADMM Framework
PRESENTER: Junxiang Wang

ABSTRACT. While Graph Neural Networks (GNNs) are popular in the deep learning community, they suffer from several challenges in training, including over-smoothing and exploding or vanishing gradients. Recently, a series of models have attempted to relieve these issues by first augmenting the node features and then imposing node-wise functions based on Multi-Layer Perceptrons (MLPs); these are widely referred to as GA-MLP models. However, while GA-MLP models enjoy deeper architectures for better accuracy, their efficiency largely deteriorates. Moreover, popular acceleration techniques such as stochastic versions or data parallelism cannot be effectively applied due to the dependency among samples (i.e., nodes) in graphs. To address these issues, instead of data parallelism, we propose a parallel graph deep learning Alternating Direction Method of Multipliers (pdADMM-G) framework to achieve model parallelism: parameters in each layer of GA-MLP models can be updated in parallel. The extended pdADMM-G-Q algorithm reduces communication costs by utilizing a quantization technique. Theoretical convergence to a stationary point is provided with a sublinear convergence rate o(1/k). Extensive experiments on six benchmark datasets demonstrate that the proposed algorithms lead to high speedup and outperform existing state-of-the-art comparison methods. Moreover, the pdADMM-G-Q algorithm reduces communication overheads by 25% without loss of performance.

09:36
Backtracking line search for nonlinear composite problems and applications

ABSTRACT. This paper exploits duality frameworks in convex optimization to find descent directions for nonlinear composite problems. For this purpose, it proposes a procedure based on Fenchel-Moreau-Rockafellar duality for finding a descent direction. Embedding this procedure in line search methods yields an efficient method for solving nonlinear composite problems. The resulting algorithm brings a powerful combination of the Prox-Linear method, the Dual Fast Iterative Shrinkage/Thresholding Algorithm (Dual FISTA), and the weak Wolfe line search, as illustrated by several representative examples. The paper then extends the use of backtracking by proposing an Augmented Lagrangian Method (ALM) for minimizing a smooth function subject to nonlinear equality constraints. Under a uniform regularity condition, this algorithm provides a proper method for updating the dual step size and penalty parameter without involving a dedicated solver to approximately solve an inner unconstrained subproblem.
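For reference, here is a minimal bracketing/bisection line search enforcing the weak Wolfe conditions mentioned above; the constants c1, c2 and the doubling/bisection scheme are standard textbook choices rather than the paper's exact procedure.

```python
import numpy as np

def weak_wolfe(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bracketing/bisection search for a step t satisfying the weak Wolfe
    conditions: Armijo sufficient decrease and a one-sided curvature bound.
    Assumes d is a descent direction, i.e. grad(x) @ d < 0."""
    t, lo, hi = 1.0, 0.0, np.inf
    f0, g0 = f(x), grad(x) @ d
    for _ in range(max_iter):
        if f(x + t * d) > f0 + c1 * t * g0:
            hi = t                         # Armijo fails: step too long
        elif grad(x + t * d) @ d < c2 * g0:
            lo = t                         # curvature fails: step too short
        else:
            return t                       # both conditions hold
        t = (lo + hi) / 2.0 if np.isfinite(hi) else 2.0 * lo
    return t
```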

08:30-10:00 Session TA3: Integer Programming for Machine Learning II
Location: Card Room
08:30
Sparse PCA: A New Scalable Estimator Based On Integer Programming
PRESENTER: Kayhan Behdin

ABSTRACT. We consider the Sparse Principal Component Analysis (SPCA) problem under the well-known spiked covariance model. Recent work has shown that the SPCA problem can be reformulated as a Mixed Integer Program (MIP) and can be solved to global optimality, leading to estimators that are known to enjoy optimal statistical properties. However, current MIP algorithms for SPCA are unable to scale beyond instances with a thousand features or so. In this work, we propose a new estimator for SPCA which can be formulated as a MIP. Different from earlier work, we make use of the underlying spiked covariance model and properties of the multivariate Gaussian distribution to arrive at our estimator. We establish statistical guarantees for our proposed estimator in terms of estimation error and support recovery. We propose a custom algorithm to solve the MIP which is significantly more scalable than off-the-shelf solvers; and demonstrate that our approach can be much more computationally attractive compared to earlier exact MIP-based approaches for the SPCA problem. Our numerical experiments on synthetic and real datasets show that our algorithms can address problems with up to $20,000$ features in minutes; and generally result in favorable statistical properties compared to existing popular approaches for SPCA.

08:52
Beyond Symmetry: Best Submatrix Selection for the Sparse Truncated SVD
PRESENTER: Yongchun Li

ABSTRACT. Truncated singular value decomposition (SVD), also known as the best low-rank matrix approximation, has been applied to many domains such as biology, healthcare, and others, where high-dimensional datasets are prevalent. To enhance the interpretability of the truncated SVD, sparse SVD (SSVD) is introduced to select a few rows and columns of the original matrix along with the low rank approximation. Different from literature, this paper presents a novel SSVD formulation that can select the best submatrix precisely up to a given size to maximize its truncated Ky-Fan norm. The fact that the SSVD problem is NP-hard motivates us to study effective algorithms with provable performance guarantees. We prove the approximation ratios for all the approximation algorithms and show that all the ratios are tight, i.e., we demonstrate that these approximation ratios are unimprovable. Finally, our numerical study demonstrates the high solution quality and computational efficiency of the proposed algorithms.

09:14
Integer Programming Approaches to Subspace Clustering with Missing Data
PRESENTER: Akhilesh Soni

ABSTRACT. In the Subspace Clustering with Missing Data (SCMD) problem, we are given a collection of points where each of the data points is observed only on a subset of its coordinates, and the points are assumed to lie in the union of a small number of low-dimensional subspaces. The goal is to identify clusters of vectors belonging to the same subspace and learn the corresponding basis. We propose a novel mixed integer nonlinear programming framework for SCMD. The formulation is based on dynamically determining a set of candidate subspaces and optimally assigning points to selected subspaces. The problem structure is similar to the classical facility-location problem. We propose a Benders' decomposition approach for solving the LP relaxation of the formulation and integrate it with a column-generation approach for generating new candidate subspaces. The pricing problem to identify a basis matrix is a highly nonconvex unconstrained nonlinear program that we solve approximately via a gradient descent method. The framework is also capable of learning the number and rank of underlying low dimensional subspaces. An empirical study shows that our framework achieves better clustering accuracy than state-of-the-art methods when the data is high rank or the percentage of missing data is high.

09:36
Interpretable Clustering via Multi-Polytope Machines
PRESENTER: Connor Lawless

ABSTRACT. Clustering is a popular unsupervised learning tool often used to discover groups within a larger population such as customer segments, or patient subtypes. However, despite its use as a tool for subgroup discovery and description few state-of-the-art algorithms provide any rationale or description behind the clusters found. We propose a novel approach for interpretable clustering that both clusters data points and constructs polytopes around the discovered clusters to explain them. Our framework allows for additional constraints on the polytopes including ensuring that the hyperplanes constructing the polytope are axis-parallel or sparse with integer coefficients. We formulate the problem of constructing clusters via polytopes as a Mixed-Integer Non-Linear Program (MINLP). To solve our formulation we propose a two phase approach where we first initialize clusters and polytopes using alternating minimization, and then use coordinate descent to boost clustering performance. We benchmark our approach on a suite of synthetic and real world clustering problems, where our algorithm outperforms state of the art interpretable and non-interpretable clustering algorithms.

08:30-10:00 Session TA4: Submodularity and Aggregation
08:30
Submodular Information Measures in Machine Learning

ABSTRACT. In this talk, we will introduce a new class of information measures, called the "Submodular Information Measures". This rich class of functions includes submodular mutual information, submodular conditional gain, submodular multi-set mutual information, directed submodular mutual information, and combinatorial independence. We will cover: (1) rich theoretical properties and instantiations of these submodular information measures; (2) optimization algorithms, approximation bounds, and hardness results for the associated optimization problems; and (3) applications of the submodular information measures in machine learning.

We will specifically study the applications of these information measures to data summarization (e.g., video, image collection, and text summarization), particularly in the presence of queries and privacy/irrelevance constraints; active learning in realistic scenarios involving imbalance, out-of-distribution data, and redundancy; clustering; diversified partitioning; and data subset selection under distribution shift.

This is based on work that appeared in Algorithmic Learning Theory (ALT) 2021, Transactions of Information Theory 2021, NeurIPS 2021, and AAAI 2022. This is also based on joint work with several colleagues and students including Suraj Kothawade, Vishal Kaushal, Jeff Bilmes, Ganesh Ramakrishnan, and Himanshu Asnani.
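The optimization problems attached to these information measures are typically (constrained) submodular maximization. As a generic illustration only, here is the canonical greedy algorithm with its classical (1 - 1/e) guarantee for monotone functions, applied to a toy facility-location objective; the specific algorithms and bounds in the talk may differ.

```python
def greedy_submodular_max(F, ground, k):
    """Nemhauser-Wolsey-Fisher greedy for maximizing a monotone submodular
    set function F (passed as F(frozenset) -> float) subject to |S| <= k;
    achieves the classical (1 - 1/e) approximation guarantee."""
    S = frozenset()
    for _ in range(k):
        gains = {e: F(S | {e}) - F(S) for e in ground if e not in S}
        if not gains:
            break
        S = S | {max(gains, key=gains.get)}
    return S

# Toy facility-location objective: F(S) = sum_j max_{i in S} w[i][j]
w = {"a": [3, 0, 1], "b": [1, 2, 0], "c": [0, 1, 2]}
F = lambda S: sum(max((w[i][j] for i in S), default=0) for j in range(3))
print(greedy_submodular_max(F, set(w), k=2))   # e.g. frozenset({'a', 'b'})
```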

08:52
On constrained mixed-integer DR-submodular minimization
PRESENTER: Qimeng Yu

ABSTRACT. Submodular set functions play an important role in integer programming and combinatorial optimization. Increasingly, applications in resource allocation and machine learning have led to generalized submodular functions defined over general integer or continuous domains. In our study, we focus on the notion of Diminishing Returns (DR) submodularity and the problem of minimizing DR-submodular functions over mixed-integer feasible sets defined by box constraints and monotonicity constraints. We propose multiple classes of valid inequalities and further show that these inequalities, along with trivial constraints, fully describe the convex hull of the epigraph of DR-submodular functions under the aforementioned constraints. Our results hold in the original decision space, which avoids pseudo-polynomial expansions that are used to handle general integer variables in the existing literature.

09:14
Exact Formulation and Partitioning of the General Rank Aggregation Problem
PRESENTER: Sina Akbari

ABSTRACT. Rank Aggregation has ubiquitous applications in operations research, artificial intelligence, computational social choice, and various other fields. Although most attention has focused on the variant of this problem induced by the Kemeny-Snell distance, other robust rank aggregation problems have been proposed. This work delves into the rank aggregation problem under the generalized Kendall-tau distance, which contains Kemeny Aggregation as a special case. First, it introduces a non-linear binary programming formulation for the generalized problem and derives an equivalent linear mixed-integer programming formulation. Second, it introduces a social choice property and algorithm for enabling the partitioning of certain large instances. This property is shown to enclose existing variations of the Condorcet Criterion as special cases, thereby expanding this seminal social choice concept beyond Kemeny Aggregation for the first time. Through this specialized partitioning approach, many instances of this NP-hard problem can be decomposed into smaller subproblems, while guaranteeing that solving them independently still produces the optimal solution to the original problem. Finally, this work derives new theoretical insights into the effects of the generalized Kendall-tau distance penalty parameter on the optimal ranking and on the proposed social choice property.
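For intuition: under a generalized Kendall-tau distance, a strictly reversed pair of items costs 1, while a pair tied in exactly one of the two rankings costs a penalty p. A direct sketch of that computation follows (the paper's precise penalty convention may differ).

```python
from itertools import combinations

def generalized_kendall_tau(r1, r2, p=0.5):
    """Generalized Kendall-tau distance between two rankings over the same
    items (dicts mapping item -> rank position, smaller = better). A strictly
    reversed pair costs 1; a pair tied in exactly one ranking costs p."""
    sign = lambda u, v: (u > v) - (u < v)
    d = 0.0
    for a, b in combinations(r1, 2):
        s1, s2 = sign(r1[a], r1[b]), sign(r2[a], r2[b])
        if s1 and s2 and s1 != s2:
            d += 1.0                       # strictly reversed pair
        elif (s1 == 0) != (s2 == 0):
            d += p                         # tied in exactly one ranking
    return d

print(generalized_kendall_tau({"x": 1, "y": 2, "z": 3},
                              {"x": 2, "y": 2, "z": 1}))   # 2.5
```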

09:36
Approximate Condorcet Partitioning: Solving Very Large Rank Aggregation Problems at Scale
PRESENTER: Sina Akbari

ABSTRACT. Rank aggregation has ubiquitous applications in computer science, operations research, and various other fields. Most attention on this problem has focused on an NP-hard variant known as Kemeny rank aggregation, for which solution approaches with provable guarantees that can handle difficult high-dimensional instances remain elusive. This work introduces exact and approximate methodologies inspired by the social choice foundations of the problem, namely the Condorcet Criterion. We formalize the concept of the finest-Condorcet Partition for rankings that may contain ties and specify its required conditions. We prove that this partition is unique and devise an efficient algorithm to obtain it. To deal with instances where it does not yield many subsets, we propose Approximate Condorcet Partitioning (ACP), with which larger subsets can be further broken down and more easily solved, while still providing provable guarantees. Although ACP approximation factors are instance-specific, their values were lower than those offered by all constant-factor approximation algorithms known to date, for all 112 benchmark instances tested herein. Solving the subproblems from each subset of the ACP partition with an exact method and concatenating their solutions yielded near-optimal solutions to the full problem, altogether taking less than 90 seconds for each of the tested instances with up to 2,800 items.

08:30-10:00 Session TA5: Location and Allocation Problems
Location: Furman Room
08:30
A Risk-Averse Multi-stage Stochastic Optimization Approach to the EAB Epidemic Problem in the US Forests
PRESENTER: Sabah Bushaj

ABSTRACT. Emerald Ash Borer (EAB), a wood-boring insect native to Asia, is a silent killer of ash trees with an alarming fatality rate of nearly 100%. We present a risk-averse multi-stage, stochastic, mixed-integer programming (MSS-MIP) formulation and a scenario dominance decomposition and cutting plane algorithm to tackle the difficulty of solving this NP-Hard problem. We first demonstrate computational results on the performance of our algorithm. We then present decision strategies on allocating resources for the surveillance of the ash population and subsequent treatment and removal of infested trees over space and time with an application to the case in the State of New Jersey. This is joint work with the USDA US Forest Service.

08:52
A bi-objective covering tour problem to locate ballot drop boxes
PRESENTER: Adam Schmidt

ABSTRACT. We introduce a bi-objective integer linear optimization model to determine locations for ballot drop boxes that addresses cost, equity, and risk through its constraints and objectives. The new model is an extension of the covering tour problem, which combines the set covering problem (SCP) and the traveling salesman problem (TSP). In the proposed model, selected ballot drop box locations must provide a minimal level of access to voter locations (SCP), while the cost of a tour over the selected locations (TSP) contributes to the cost of the drop box system. The first objective of our model is to minimize the cost of locating, securing, and managing the drop box system. The second objective is to maximize equity of access to the voting system. We introduce a function based on the discrete choice literature to model access to both the drop box system and other components of the voting infrastructure. We introduce a local-search heuristic to generate high-quality feasible solutions to the drop box location problem. A case study based on the voting system in Milwaukee, Wisconsin demonstrates the value of the optimization model.

09:14
Routing and resource allocation in non-profit settings with equity and efficiency measures under demand uncertainty
PRESENTER: Faisal Alkaabneh

ABSTRACT. Motivated by food distribution operations for non-profit organizations, we study a variant of the stochastic routing-allocation problem under demand uncertainty, in which one decides the assignment of trucks to demand nodes, the sequence of demand nodes to visit (i.e., the truck route), and the allocation of food supply to each demand node. We propose three stochastic mixed-integer programming (SMIP) models representing different performance measures important to food banks, namely maximizing efficiency, maximizing equity, and maximizing efficiency and equity simultaneously. To solve practical large-scale instances, we develop an original matheuristic based on adaptive large neighborhood search. Using instances based on real-world data, we conduct an extensive numerical experiment to assess the computational performance of our approach and derive insights relevant to food banks. The proposed matheuristic quickly produces high-quality solutions, with an optimality gap never exceeding 4.11% on tested instances. We also demonstrate the performance of the three models in terms of service levels, food waste, and equity.

09:36
Optimization of worker schedules to maintenance tasks
PRESENTER: Anlan Dong

ABSTRACT. Electric utility companies maintain equipment across a large number of substations throughout their service regions. Each piece of equipment requires regularly-scheduled, routine maintenance, and some equipment will require further as-needed corrective maintenance. Any given maintenance task will require specific skills of the workers assigned to complete it, and the utility company employs a staff with various levels of training; therefore, all workers are not equally capable of performing a given maintenance task. As a result, scheduling workers to maintenance tasks is a time-intensive job that is currently performed manually on a weekly basis. In the partner utility company, this requires the concurrent assignment of 30-55 workers to maintenance tasks from a backlog of 2,000-3,000 work orders.

This project develops and demonstrates an integer programming based formulation that optimally assigns specific workers to specific maintenance tasks at specific times. The optimization formulation considers the following factors via hard and soft constraints: worker availability, worker skill levels in 57 proficiency areas, number of concurrent workers required per task, number of hours needed per task, task priority level, task deadlines, and equipment outage schedules. The weekly optimal scheduling problem is successfully solved for 8-hour weekday shifts with 14 real-world data sets.

08:30-10:00 Session TA6: Logistics and Districting
Location: Palmetto Room
08:30
Edge-based Districting without Predetermined Centers: Model and Network Insights
PRESENTER: Zeyad Kassem

ABSTRACT. We introduce an edge-based districting optimization model with no predetermined centers to partition a road network into a given number of compact, contiguous, and balanced districts. The model, which is suited to logistics applications, is a binary program with cut set-based connectivity constraints. Fully specifying the connectivity constraints requires substantial computational resources and is impractical except for very small instances. We show that, paired with a branch-and-cut (B&C) strategy, the second model tends to outperform the first computationally. Since the optimization problem remains computationally demanding, we derive network insights from which cutting planes that reduce the solution space can be generated. The cuts are tested on road networks with up to 500 nodes and 687 edges, yielding computational speed-ups of up to almost 32x relative to solving the second optimization model exactly with B&C but without the cuts.

08:52
Bilevel models for location-based pricing problems in online retail
PRESENTER: Svetlana Riabova

ABSTRACT. Online retail provides customers with the convenience of home delivery. That freedom, however, often forces the retailer to implement complex and potentially expensive routing strategies for product distribution. A recent practice for reducing routing costs is to introduce common pick-up location alternatives and incentivize customers to choose them via segmented pricing. We propose a bilevel optimization framework to identify optimal pricing policies while considering the interaction between the retailer's and customers' decision spaces. Our models consider the pricing and routing policies at the upper level and the customers' discrete choice among buying options at the lower level. Under some mild assumptions, we show that the customer problems have a special structure that allows for single-level reformulations.

10:30-11:30 Session Plenary 4: Modeling and Duality in Domain Specific Languages for Mathematical Optimization
Location: Gold Ballroom
10:30
Modeling and Duality in Domain Specific Languages for Mathematical Optimization

ABSTRACT. Domain specific languages (DSL) for mathematical optimization allow users to write problems in a natural algebraic format. However, what is considered natural can vary from user to user. For instance, JuMP’s interface makes a distinction between conic formulations and nonlinear programming formulations whose constraints are level-sets of nonlinear functions. Tradeoffs between such alternative modeling formats are further amplified when dual solutions are considered. In this talk we describe work related to these tradeoffs in JuMP and OR-Tools. In particular, we consider modeling using a wide range of non-symmetric cones (and their solution with Hypatia.jl) and defining precise dual contracts for two-sided constraints (and other non-conic convex constraints).

13:00-14:30 Session TB1: Large-Scale Linear Programs and Applications
13:00
Column-Randomized Linear Programs: Performance Guarantees and Applications
PRESENTER: Yi-Chun Chen

ABSTRACT. We propose a randomized method for solving linear programs with numerous columns and relatively few constraints. Since enumerating all the columns is usually unrealistic, such linear programs are commonly solved by column generation, which is often still computationally challenging due to the intractability of the subproblem in many applications. Instead of iteratively introducing one column at a time as in column generation, our proposed method involves sampling a collection of columns according to a user-specified randomization scheme and solving the linear program consisting of the sampled columns. We derive an upper bound on the optimality gap that holds with high probability and converges with rate $1 / \sqrt{K}$, where $K$ is the number of sampled columns, to the value of a linear program related to the sampling distribution. We further apply the proposed method to various applications, such as linear programs with totally unimodular constraints, Markov decision processes, covering problems and packing problems, and derive problem-specific performance guarantees. We also generalize the method to the case that the sampled columns may not be statistically independent. Finally, we numerically demonstrate the effectiveness of the proposed method in two applications.
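A minimal sketch of the core idea, assuming uniform column sampling and an equality-form LP with nonnegative variables (the paper allows general user-specified sampling distributions):

```python
import numpy as np
from scipy.optimize import linprog

def column_randomized_lp(c, A, b, K, rng=None):
    """Sample K columns at random and solve the restricted LP
    min c_S^T x_S  s.t.  A[:, S] x_S = b,  x_S >= 0."""
    rng = np.random.default_rng() if rng is None else rng
    S = rng.choice(A.shape[1], size=K, replace=False)   # uniform sampling
    res = linprog(c[S], A_eq=A[:, S], b_eq=b,
                  bounds=(0, None), method="highs")
    return res.fun, S, res.x
```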

13:22
Self-guided Approximate Linear Programs
PRESENTER: Parshan Pakiman

ABSTRACT. Approximate linear programs (ALPs) are well-known models based on value function approximations (VFAs) to obtain policies and bounds on the optimal cost of Markov decision processes (MDPs). Formulating an ALP requires (i) basis functions, the linear combination of which defines the VFA, and (ii) a state-relevance distribution, which determines the relative importance of different states in the ALP objective to minimize VFA error. Both choices are typically heuristic: basis function selection relies on domain knowledge while the state-relevance distribution is specified using the frequency of states visited by a heuristic policy. We propose a self-guided sequence of ALPs that embeds random basis functions obtained via inexpensive sampling and uses the known VFA from the previous iteration to guide VFA computation in the current iteration. Self-guided ALPs mitigate basis function selection and the impact of state-relevance distribution, thus significantly reducing the ALP implementation burden. We establish high probability error bounds on the VFAs from this sequence and show that a worst-case measure of policy performance is improved. We numerically show that self-guided ALP policies improve upon problem-specific policies on perishable inventory control and options pricing applications. More broadly, our research takes a meaningful step toward application-agnostic policies and bounds for MDPs.

13:44
Algorithms for Optimal Genetic Testing of Families
PRESENTER: Kanix Wang

ABSTRACT. Through the laws of inheritance, knowing an individual's genetic status informs disease risk for family members, but current protocols for deciding whom to test genetically only consider one person at a time rather than designing an optimal testing plan for the entire family. We develop a Markov decision process framework for maximizing the net benefits of genetic testing, which integrates a Bayesian network of genetic statuses with a functional representation of cost-effectiveness. To solve this dynamic program at scale, e.g., in large families or for multigene panel testing, we approximate it with an approximate linear program (ALP). The key step is to approximate the value of a family member remaining untested versus being tested with a specific result. We show that testing is not optimal when the value differences are higher than the immediate net reward. To solve the ALP, we further develop a rollout algorithm for a price-directed policy and an adaptive algorithm to solve for the prices.

14:06
Locating Charging Stations for Battery Electric Buses: a Data-Driven Optimization Approach
PRESENTER: Haoming Shen

ABSTRACT. Many transit agencies are adopting battery electric buses (BEBs) due to their benefits in reducing emissions and improving fuel economy. Nonetheless, the limited driving range, long charging times, and stochastic traffic conditions and passenger demands pose challenges to designing a BEB transit system. This study proposes a data-driven optimization approach to locating BEB charging stations in a transit system, with the objective of protecting BEBs from running out of power when the random battery energy consumption and the available charging time follow an ambiguous probability distribution. In particular, we consider a distributionally robust chance constrained (DRCC) model that admits a tractable second-order conic representation. Additionally, we study two decomposition algorithms, delayed constraint generation and dual decomposition, which demonstrate significant speed-ups compared to commercial solvers. Finally, we conduct a real-world case study based on the Salt Lake City bus transit system to evaluate the effectiveness of our approach.

13:00-14:30 Session TB2: Nonlinear Constrained Optimization
Location: Gold Ballroom
13:00
A Globally Convergent Distributed Jacobi Scheme for Block-Structured Nonconvex Constrained Optimization Problems

ABSTRACT. Motivated by the increasing availability of high-performance parallel computing, we design a distributed parallel algorithm for linearly-coupled block-structured nonconvex constrained optimization problems. Our algorithm performs Jacobi-type proximal updates of the augmented Lagrangian function, requiring only local solutions of separable block nonlinear programming (NLP) problems. We provide a cheap and explicitly computable Lyapunov function that allows us to establish global and local sublinear convergence of our algorithm, its iteration complexity, as well as simple, practical, and theoretically convergent rules for automatically tuning its parameters. This is in contrast to existing algorithms for nonconvex constrained optimization based on the alternating direction method of multipliers, which rely on at least one of the following: Gauss-Seidel or sequential updates, global solutions of NLP problems, non-computable Lyapunov functions, and hand-tuning of parameters. Numerical experiments showcase its advantages for large-scale problems, including the multi-period optimization of a 9000-bus AC optimal power flow test case over 168 time periods, solved on the Summit supercomputer using an open-source Julia code.

13:22
Some Strongly Polynomially Solvable Convex Quadratic Programs with Bounded Variables
PRESENTER: Shaoning Han

ABSTRACT. This paper begins with a class of convex quadratic programs (QPs) with bounded variables solvable by the parametric principal pivoting algorithm with $O(n^3)$ strongly polynomial complexity, where n is the number of variables of the problem. An extension of the Hessian class is also discussed. Our research is motivated by a preprint wherein the efficient solution of a quadratic program with a tridiagonal Hessian matrix in the quadratic objective is needed for the construction of a polynomial-time algorithm for solving an associated sparse variable selection problem. With the tridiagonal structure, the complexity of the QP algorithm reduces to $O(n^2)$. Our strong polynomiality results extend previous work on strongly polynomially solvable linear complementarity problems with a P-matrix; special cases of the extended results include weakly quasi-diagonally dominant problems in addition to tridiagonal ones.

13:44
Optimal Algorithms for Differentially Private Monotone Stochastic Variational Inequalities
PRESENTER: Digvijay Boob

ABSTRACT. In this work, we conduct the first systematic study of monotone stochastic variational inequality (SVI) problems under the constraint of differential privacy (DP). We present two stochastic algorithms, NSEG and NISPP, whose sampling-with-replacement variants attain the optimal risk for DP-SVI. Key to our analysis is the investigation of algorithmic stability bounds, both of which are new even in the nonprivate case, together with a novel “stability implies generalization” result for the gap functions of SVIs. The running time of these algorithms scales as O(n^2) for NSEG and O(n^1.5) for NISPP with respect to the dataset size n.

14:06
On the analytic reparametrization of the central path of semidefinite optimization

ABSTRACT. It is well-known that the central path of semidefinite optimization, unlike linear optimization, may have no analytic extension to $\mu = 0$ in the absence of the strict complementarity condition. In this paper, we investigate the analyticity of the central path through the lens of real algebraic geometry and the theory of complex algebraic curves. We present an exact algorithm for the analytic reparametrization of the central path and then provide an upper bound on a so-called degree of analyticity. (Joint work with Saugata Basu)

13:00-14:30 Session TB3: Optimization for Learning and Decision Making I
Location: Card Room
13:00
Sparse Multivariate Mixture of Experts for analyzing Marine Microbial Systems
PRESENTER: Sangwon Hyun

ABSTRACT. Although microscopic, phytoplankton in the ocean are extremely important to all of life and are together responsible for as much photosynthesis as all plants on land combined. Today, oceanographers are able to collect flow cytometry data in real time while onboard a moving ship, providing them with fine-scale information about the distribution of phytoplankton across thousands of kilometers. We use a novel sparse multivariate mixture of experts model to estimate the time-varying phytoplankton subpopulations while simultaneously identifying the specific environmental covariates that are predictive of the observed changes to these subpopulations. In order to estimate this model efficiently, we devised a fast custom solver which utilizes three interesting components -- an Expectation-Maximization (EM) algorithm, an Alternating Direction Method of Multipliers (ADMM) algorithm, and a barebones Bartels-Stewart algorithm. Our carefully devised implementation enables fast model estimation and tuning that is necessary for large ocean flow cytometry datasets.

13:22
Optimal Data-Driven Optimization with Noisy Data

ABSTRACT. Classical Kullback-Leibler or entropic distances have recently been shown to enjoy certain optimal statistical properties in the context of decision-making with noiseless data. However, a major criticism of such distances is that they may result in fragile decisions in the presence of a distributional shift between the training and out-of-sample data. Instead, we study here data-driven prediction problems with data that is corrupted by noise. We derive optimal data-driven formulations in this noisy regime and show that they enjoy an entropic optimal transport interpretation. Finally, we show that these optimal robust formulations are tractable in several interesting settings.

13:44
Distributionally Robust End-to-End Portfolio Construction
PRESENTER: Giorgio Costa

ABSTRACT. We propose an end-to-end distributionally robust system for portfolio construction that integrates the asset return prediction model with a portfolio optimization model. Furthermore, we show how to learn the risk-tolerance parameter and the degree of robustness directly from data. End-to-end systems have an advantage in that information can be communicated between the prediction and decision layers during training, allowing the parameters to be trained for the final task rather than solely for predictive performance. However, existing end-to-end systems are not able to quantify and correct for the impact of model risk on the decision layer. Our distributionally robust end-to-end portfolio selection system explicitly accounts for the impact of model risk. The decision layer chooses portfolios by solving a minimax problem where the distribution of the asset returns is assumed to belong to an ambiguity set centered around a nominal distribution. Using convex duality, we recast the minimax problem in a form that allows for efficient training of the end-to-end system.

13:00-14:30 Session TB4: Heuristic and Iterative Approaches
13:00
Multi-Item Online Order Fulfillment in a Two-Layer Network
PRESENTER: Linwei Xin

ABSTRACT. In recent years, the boom of e-commerce around the globe has expedited the expansion of fulfillment infrastructures by e-retailers. While e-retailers are building more warehouses to offer faster delivery services than ever, the associated fulfillment costs have skyrocketed over the past decade. In this paper, we study the problem of minimizing fulfillment costs, in which an e-retailer must decide in real-time which warehouse(s) will fulfill each order, subject to warehouses' inventory constraints. The e-retailer can split an order at an additional cost and fulfill it from different warehouses.

We focus on an RDC-FDC distribution network that major e-retailers have implemented in practice. In such a network, the upper layer contains larger regional distribution centers and the lower layer contains smaller front distribution centers. We analyze the performance of a simple myopic policy and provide theoretical bounds on the performance ratio compared with an optimal clairvoyant policy. We empirically estimate our upper bound on the ratio by using FedEx shipping rates and demonstrate the bound can be as low as 1.13 for reasonable scenarios in practice. We also extend our study to the setting in which demand forecasting is available and prove the asymptotic optimality of a linear program rounding policy.
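For intuition, here is a toy version of a myopic fulfillment rule (the paper's policy, network structure, and cost model are richer): each unit is shipped from the cheapest warehouse with remaining stock, and every warehouse used beyond the first incurs a split cost.

```python
def myopic_fulfill(order, inventory, unit_cost, split_cost):
    """Ship each unit of the order from the cheapest warehouse that still
    has stock; charge split_cost for every warehouse used beyond the first.
    inventory: {warehouse: {item: units}}, unit_cost: {warehouse: {item: $}}."""
    used, total = set(), 0.0
    for item in order:
        candidates = [w for w in inventory if inventory[w].get(item, 0) > 0]
        if not candidates:
            return None                    # stockout: order cannot be filled
        w = min(candidates, key=lambda w: unit_cost[w][item])
        inventory[w][item] -= 1
        used.add(w)
        total += unit_cost[w][item]
    return total + split_cost * max(len(used) - 1, 0)
```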

13:22
A Heuristic Approach for a Districting Application on On-time Last-mile Express Delivery Services
PRESENTER: Roger Z. Ríos

ABSTRACT. In this talk, we present a heuristic approach for a real-world districting problem arising in express delivery. We address a last-mile logistics design problem faced by an express delivery firm in Chile. Aside from the particular planning requirements, what makes this problem different from existing districting models is that we aim to maximize the number of on-time deliveries. The proposed heuristic has a construction phase where individual districts are built, a routing phase that finds a customer routing for each district, and a solution improvement phase based on local search. The proposed approach is tested on real-world instances. The results indicate significant improvements in the percentage of on-time deliveries achieved by the proposed solution compared to the company's current districting design.

13:00-14:30 Session TB5: Energy and Agriculture
Location: Furman Room
13:00
Enhancing the Small-Signal Stability of Islanded Microgrids under Droop Control
PRESENTER: Mikhail Bragin

ABSTRACT. Microgrids have been recognized as a promising archetype to promote the applications of distributed energy resources (DERs), e.g., photovoltaic and wind power, for enhancing the resilient operation of low- or medium-voltage distribution networks. A microgrid consists of various DERs, power-electronic interfaces such as inverters, controllable loads, and monitoring and protection devices. One important feature of a microgrid is that it can operate in either grid-connected or islanded mode, where islanded operation greatly improves power supply reliability. Since the droop coefficients of DER inverter controllers play a primary role in the dynamic operation of islanded microgrids, the droop coefficients are optimized to improve the small-signal stability of the system subject to disturbances from DERs or power loads. A novel problem is formulated with an objective function that minimizes the real components of eigenvalues to stabilize the system, and the resulting problem is solved via stochastic optimization; meanwhile, the robustness of the optimized droop coefficients across various operational scenarios is improved. Numerical results on a typical microgrid system validate the effectiveness of the presented method in enhancing the small-signal stability of the islanded system. The method offers insight into improving the real-time operation of microgrids by predicting the system's dynamics and optimizing critical parameters online.

13:22
An Exact Method for Maintenance Planning Optimization in Nuclear Power Plants
PRESENTER: Ethan Deakins

ABSTRACT. Maintenance is an essential element for Nuclear Power Plants (NPPs) to retain system functionality, operational permits, and a safe working and surrounding environment. Operation and maintenance costs contribute 60-70% of the overall cost of NPP production. Therefore, optimal-cost maintenance planning is a very important component of NPP operations and production. Modeling maintenance planning in NPPs is difficult due to many reliability restrictions and system specifications. Much of the research in this area has focused on maintenance planning optimization in NPPs using non-exact methods such as genetic algorithms, simulated annealing, and heuristics. These methods produce solutions in a computationally efficient manner, typically at the cost of optimality. This research presents an Integer Linear Programming model and solution methodology that integrates probabilistic risk assessment and cutting plane generation.

13:44
An Open and Realistic Large-Scale Stochastic Unit Commitment Case: Data and Experimental Baseline

ABSTRACT. We describe a publicly available data set for stochastic unit commitment, centered on a synthetic-but-realistic version of the ERCOT (Texas) power system. Leveraging data produced under the DOE/ARPA-E PERFORM program, we have developed a year-long data set for stochastic unit commitment, considering uncertainty in load and renewables production. The system has approximately 500 thermal generators and a full network topology with approximately 7K buses and up to 1K scenarios. We discuss the case construction in detail, and overview computational results obtained using the open source mpi-sppy stochastic programming library and the Egret library for commitment/dispatch optimization models. The results are intended to serve as a benchmark for research on state-of-the-art stochastic unit commitment solvers.

14:06
Optimization of crop insurance policy selection and fertilizer management with water quality impacts
PRESENTER: Sarah Ryan

ABSTRACT. Crop yield, dependent on weather conditions, and the market-driven fluctuating crop prices are the major uncertainties affecting farm revenue. Because nitrogen (N) is needed to increase the yield output, farmers apply fertilizer to the soil to compensate for its nutrient deficiencies. However, N leaching and loss as nitrate through subsurface flow and agricultural drainage systems threatens water quality. In the US, federal crop insurance and income support policies defined in the 2018 Farm Bill are the primary ongoing financial instruments mitigating farmers’ financial risk. In addition to providing financial security for farmers, these programs also affect fertilizer management decisions and may have unintended consequences for water quality. We build a two-stage stochastic program, including CVaR as a risk measure, to find optimal crop insurance choices and fertilizer application rates under a range of risk preferences. In a case study of Midwest corn farming, we investigate how much financial safety the insurance programs provide to farmers and how these available programs impact the water quality. The results can inform the design of support policies and insurance products to improve both economic and environmental outcomes.

13:00-14:30 Session TB6: Recent Updates on the Adaptive-Partition Method for Stochastic Models
Location: Palmetto Room
13:00
Adaptive partition-based SDDP algorithms for multistage stochastic linear programming
PRESENTER: Yongjia Song

ABSTRACT. In this talk, we extend the adaptive partition-based approach for solving two-stage stochastic programs with fixed recourse matrix and fixed cost vector to multi-stage stochastic programming where the stochastic process is assumed to be stage-wise independent. The proposed algorithms integrate the adaptive partition-based strategy with a popular approach for solving multistage stochastic programs, the stochastic dual dynamic programming (SDDP) algorithm, according to two main strategies on refining the partitions during the solution process. In particular, we propose a refinement outside SDDP strategy whereby we iteratively solve a coarse scenario tree induced by the partitions, and refine the partitions in a separate step outside of SDDP, only when necessary. We also propose a refinement within SDDP strategy where the partitions are refined in conjunction with the machinery of the SDDP algorithm. We then use, within the two different refinement schemes, different tree-traversal strategies which allow us to have some control over the size of the partitions. We performed numerical experiments on a hydro-thermal power generation planning problem. Numerical results show the effectiveness of the proposed algorithms that use the refinement outside SDDP strategy in comparison to the standard SDDP algorithm and algorithms that use the refinement within SDDP strategy.
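The driving mechanism in partition-based methods is to keep scenarios aggregated while their second-stage dual solutions agree and to split clusters when they disagree. Below is a schematic refinement step, assuming a dual vector per scenario is already available; it is a simplification of the refinement strategies integrated with SDDP in the talk.

```python
import numpy as np

def refine_partition(partition, duals, tol=1e-6):
    """Split each scenario cluster into sub-clusters whose members share
    (near-)identical dual vectors; clusters in agreement stay aggregated.
    partition: list of lists of scenario indices; duals: {scenario: vector}."""
    refined = []
    for cluster in partition:
        groups = []                        # [(representative dual, members)]
        for s in cluster:
            for rep, members in groups:
                if np.linalg.norm(duals[s] - rep) <= tol:
                    members.append(s)
                    break
            else:
                groups.append((duals[s], [s]))
        refined.extend(members for _, members in groups)
    return refined
```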

13:22
Exact Benders Adaptive-cut Method for Two-Stage Stochastic Programs
PRESENTER: Eduardo Moreno

ABSTRACT. Decomposition methods, and in particular the Benders decomposition, have shown strong capabilities in the design and implementation of efficient solution algorithms for two-stage stochastic problems with a large number of scenarios. In recent years, improvements in Benders' methods have focused on the aggregation of scenarios, using different measures of distance between scenarios. On the other hand, the Adaptive Partition Method for two-stage stochastic problems has become a popular method for solving problems with a large number of scenarios, due to its ability to aggregate and disaggregate scenarios based on the information from the duals of each scenario. In this talk we discuss a generalization of the adaptive method and how to incorporate this generalization into the Benders decomposition method. The resulting algorithm can be seen as a middle ground between the so-called multi-cut and single-cut Benders methods. We present computational results on stochastic network flow problems, showing that the proposed method benefits from the advantages of both multi-cut and single-cut developments. Moreover, this technique follows the same structure of Benders decompositions, leaving open the possibility to apply the methodology to particular problems where acceleration techniques can increase the global efficiency of Benders decomposition.

13:44
Energy Storage Management Strategies under Uncertain Generation
PRESENTER: Helena García

ABSTRACT. Due to climate change concerns, many governments have pushed for higher penetration of intermittent renewable energy sources. Among these energy sources, photovoltaic (PV) generation is the most sought after, particularly by domestic users and small industries. The main drawback of this energy source is its variability and its unavailability during part of the day. One way of offsetting or diminishing this drawback is to use energy storage systems, such as batteries. Although storage systems are currently expensive, their price has dropped significantly in the last decade. Many studies indicate that they will become an economically viable alternative in the coming years.

This work presents a novel approach to schedule these storage units in a PV generation system based on stochastic optimization. Using real historical data, we analyze different methods for generating scenarios for our optimization model, showing that the resulting decision support system improves current scheduling policies significantly. We further validate our results by testing the resulting schedules in an actual prototype, highlighting that the prototype has a very limited computational capacity.

15:00-16:30 Session TC1: New Directions in Stochastic Optimization: Theory and Models II
15:00
Nonnegative Tensor Completion Via Integer Optimization
PRESENTER: Caleb Bugg

ABSTRACT. Unlike matrix completion, no algorithm for the tensor completion problem has so far been shown to achieve the information-theoretic sample complexity rate. This paper develops a new algorithm for the special case of completion for nonnegative tensors. We prove that our algorithm converges in a linear (in numerical tolerance) number of oracle steps, while achieving the information-theoretic rate. Our approach is to define a new norm for nonnegative tensors using the gauge of a specific 0-1 polytope that we construct. Because the norm is defined using a 0-1 polytope, this means we can use integer linear programming to solve linear separation problems over the polytope. We combine this insight with a variant of the Frank-Wolfe algorithm to construct our numerical algorithm, and we demonstrate its effectiveness and scalability through experiments.
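Schematically, the numerical method pairs a Frank-Wolfe loop with a linear minimization oracle that is itself an integer linear program over the constructed 0-1 polytope. A generic Frank-Wolfe skeleton follows, with the oracle left as a placeholder callback rather than the paper's specific integer program.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Generic Frank-Wolfe loop. lmo(g) returns argmin_{s in P} <g, s>;
    for the gauge-norm ball in the talk, this oracle is an integer program."""
    x = np.array(x0, dtype=float)
    for k in range(iters):
        s = lmo(grad(x))                   # linear minimization oracle call
        x += 2.0 / (k + 2.0) * (s - x)     # classical step size 2/(k+2)
    return x
```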

15:22
Stratifying Online Field Experiments Using The Pigeonhole Design
PRESENTER: Jinglong Zhao

ABSTRACT. Practitioners and academics have long appreciated the benefits that experimentation brings to firms. However, little is known about handling heterogeneity when experimental units arrive sequentially in online field experiments. In this paper, we study a novel online experimental design problem, which we refer to as the "Online Stratification Problem". In this problem, experimental units with heterogeneous covariate information arrive sequentially and must be immediately assigned to either the control or the treatment group, with the objective of minimizing the total discrepancy between the two groups. To solve this problem, we propose a novel experimental design approach, which we refer to as the "Pigeonhole Design". The pigeonhole design partitions the covariate space into smaller spaces, which we refer to as pigeonholes, and keeps the number of control and treatment units in each pigeonhole as balanced as possible. We analyze the theoretical performance of the pigeonhole design and show its effectiveness by comparing it against two well-known benchmark designs: the matched-pair design and the completely randomized design.
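In schematic form, the design grids the covariate space into pigeonholes and keeps per-hole counts of control and treatment units. Below is a minimal sketch, assuming box-bounded covariates, equal-width bins, and randomized tie-breaking (the paper's construction and tie-breaking rules may differ).

```python
import numpy as np

class PigeonholeDesign:
    """Grid the covariate space into pigeonholes; send each arriving unit to
    whichever arm (0 = control, 1 = treatment) is behind within its hole."""
    def __init__(self, bins_per_dim, low, high):
        self.bins = bins_per_dim
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)
        self.counts = {}                   # hole id -> [n_control, n_treat]

    def assign(self, x, rng):
        frac = (np.asarray(x, dtype=float) - self.low) / (self.high - self.low)
        hole = tuple(np.minimum((frac * self.bins).astype(int), self.bins - 1))
        n = self.counts.setdefault(hole, [0, 0])
        arm = int(rng.integers(2)) if n[0] == n[1] else int(n[0] > n[1])
        n[arm] += 1
        return arm

rng = np.random.default_rng(0)
design = PigeonholeDesign(bins_per_dim=4, low=[0.0, 0.0], high=[1.0, 1.0])
arms = [design.assign(rng.random(2), rng) for _ in range(8)]
```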

15:44
A General Approach to Solve Shape-Constrained Distributionally Robust Optimization via Importance-Weighted Sample Average Approximation
PRESENTER: Zhenyuan Liu

ABSTRACT. Shape-constrained distributionally robust optimization (DRO) arises in a range of problems in optimization under uncertainty and extreme event analysis. In the DRO literature, these problems are usually solved via reduction into moment-constrained problems using techniques such as the Choquet representation theory, whose tractability is determined in a case-by-case manner. In this work, we develop a general method to solve shape-constrained DRO by combining sample average approximation with importance sampling, which reduces these infinite-dimensional problems into finite-dimensional linear programs whose optimality gap vanishes and can be readily quantified. The theoretical underpinning of our approach builds on empirical process theory and reveals how shape constraints play an important role in guaranteeing desirable convergence rates.

16:06
Rockafellian Functions: Theory and Applications

ABSTRACT. Rockafellian functions are central to sensitivity analysis, optimality conditions, algorithmic developments, and duality. They encode an embedding of an actual problem of interest within a family of problems and lead to broad insight and computational possibilities. This presentation introduces Rockafellians and illustrates their use in stochastic optimization and in many applications areas such as inventory control.

15:00-16:30 Session TC2: Advances in Nonlinear Optimization
Location: Gold Ballroom
15:00
On the Convergence of Projected Alternating Maximization for Equitable and Optimal Transport
PRESENTER: Minhui Huang

ABSTRACT. This paper studies the equitable and optimal transport (EOT) problem, which has many applications, such as fair division problems and optimal transport with multiple agents. In the discrete-distribution case, the EOT problem can be formulated as a linear program (LP). Since this LP is prohibitively large for general LP solvers, Scetbon et al. (2021) suggest perturbing the problem by adding an entropy regularization, and they propose a projected alternating maximization algorithm (PAM) to solve the dual of the entropy-regularized EOT. In this paper, we provide the first convergence analysis of PAM. A novel rounding procedure is proposed to help construct a primal solution for the original EOT problem. We also propose a variant of PAM that incorporates an extrapolation technique, which can numerically improve the performance of PAM. The results in this paper may shed light on block coordinate (gradient) descent methods for general optimization problems.
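
A generic two-block sketch of projected alternating maximization, illustrating the pattern analyzed in the paper but not reproducing its exact dual updates:

    import numpy as np

    def pam(grad_u, grad_v, proj_u, proj_v, u, v, eta=0.1, iters=500):
        """Projected alternating maximization over two dual blocks (u, v).

        grad_u(u, v), grad_v(u, v): partial gradients of the concave dual objective
        proj_u, proj_v: Euclidean projections onto each block's feasible set
        """
        for _ in range(iters):
            u = proj_u(u + eta * grad_u(u, v))   # ascent step in block u
            v = proj_v(v + eta * grad_v(u, v))   # ascent step in block v, using new u
        return u, v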

15:22
Complexity Bounds of Iterative Linearization Algorithms for Discrete Time Nonlinear Control
PRESENTER: Vincent Roulet

ABSTRACT. We revisit the nonlinear optimization approach to discrete-time nonlinear control and the optimization algorithms based on iterative linearization. While widely popular in many domains, these algorithms have mainly been analyzed from an asymptotic viewpoint. We establish non-asymptotic complexity bounds and global convergence for a class of generalized Gauss-Newton algorithms that rely on iterative linearization of the nonlinear control problem and use iterative linear quadratic regulator (ILQR) or differential dynamic programming (DDP) algorithms as subroutines. Sufficient conditions for global convergence are examined for multi-rate sampling schemes, given the existence of a feedback linearization scheme. We illustrate the algorithms in synthetic experiments and provide a software library to reproduce the numerical results.
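
The core linearize-then-solve pattern these complexity bounds address can be sketched in a few lines; the following minimal damped Gauss-Newton loop for a nonlinear least-squares surrogate is our simplified stand-in, not the authors' library.

    import numpy as np

    def gauss_newton(residual, jacobian, x, damping=1e-3, iters=50):
        """Minimal damped Gauss-Newton for min_x ||residual(x)||^2.

        Each iteration linearizes the inner map and solves the resulting
        linear-quadratic subproblem, the same pattern ILQR/DDP follow
        for discrete-time control.
        """
        for _ in range(iters):
            r = residual(x)                         # residual at current iterate
            J = jacobian(x)                         # linearization of the inner map
            H = J.T @ J + damping * np.eye(x.size)  # Gauss-Newton model Hessian
            x = x - np.linalg.solve(H, J.T @ r)     # solve the subproblem, step
        return x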

15:44
Reformulation-Perspectification Technique: General Methodology
PRESENTER: Danique de Moor

ABSTRACT. We propose an extension of the Reformulation-Linearization Technique (RLT) to nonlinear optimization problems, which we call the Reformulation-Perspectification Technique (RPT). RPT extends RLT in two ways. First, pairwise multiplication of inequalities that are not necessarily linear or quadratic is also considered, so RPT obtains tighter approximations than RLT-based methods. Second, RPT can handle more types of nonconvexity: it is able to generate convex approximations whenever the nonconvex components are sums of products of convex functions, which includes nonconvex quadratic functions and differences of convex functions. Numerical experiments, including on a dike height optimization problem, demonstrate the effectiveness of the RPT approach.
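
For context, the base RLT step that RPT generalizes multiplies valid inequalities pairwise and linearizes the products; the textbook linear case (not the paper's new material) reads

    \[
      (b_1 - a_1^\top x) \ge 0,\ (b_2 - a_2^\top x) \ge 0
      \;\Longrightarrow\;
      b_1 b_2 - b_1\, a_2^\top x - b_2\, a_1^\top x + a_1^\top X a_2 \;\ge\; 0,
    \]

where $X$ stands in for $x x^\top$. RPT carries out the analogous product step for convex, not necessarily linear or quadratic, constraint functions and convexifies the result via perspective functions.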

16:06
Nonconvex-Nonconcave Min-Max Optimization with a Small Maximization Domain

ABSTRACT. We study the problem of finding approximate first-order stationary points in optimization problems of the form

$$\min_{x \in X} \max_{y \in Y} f(x,y),$$

where $X$, $Y$ are convex sets, $Y$ is compact, and $f$ is a smooth objective function assumed neither convex in $x$ nor concave in $y$. Our approach relies upon replacing the function $f(x,\cdot)$ with its $k$-th order Taylor approximation and finding a near-stationary point in the resulting surrogate problem. To guarantee its success, we establish the following result: let the Euclidean diameter of $Y$ be small in terms of the target accuracy $\varepsilon$, namely

$$O(\varepsilon^{p}) \quad \text{with } p = \frac{2}{\max\{k,1\}+1},$$

with the constant factor controlled by certain regularity parameters of $f$; then any $\varepsilon$-stationary point in the surrogate problem remains $O(\varepsilon)$-stationary for the initial problem. Moreover, we show that these upper bounds are nearly optimal: the aforementioned reduction provably fails when the diameter of $Y$ is larger. For $k \le 2$, the surrogate function can be efficiently maximized in $y$; our general approximation result then leads to efficient algorithms for finding a near-stationary point in nonconvex-nonconcave min-max problems, for which we also provide convergence guarantees.
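
For concreteness, the $k$-th order surrogate replaces the inner function by its Taylor expansion around an anchor $y_0 \in Y$; in our notation (details may differ from the paper's),

    \[
      f_k(x, y) \;=\; \sum_{i=0}^{k} \frac{1}{i!}\, D_y^i f(x, y_0)[\,y - y_0\,]^i,
      \qquad
      \min_{x \in X} \max_{y \in Y} f_k(x, y).
    \]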

15:00-16:30 Session TC3: Optimization for Learning and Decision Making II
Location: Card Room
15:00
Fast Rates for Contextual Linear Optimization
PRESENTER: Yichun Hu

ABSTRACT. Incorporating side observations in decision making can reduce uncertainty and boost performance, but it also requires that we tackle a potentially complex predictive relationship. While one may use off-the-shelf machine learning methods to separately learn a predictive model and plug it in, a variety of recent methods instead integrate estimation and optimization by fitting the model to directly optimize downstream decision performance. Surprisingly, in the case of contextual linear optimization, we show that the naïve plug-in approach actually achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance. We show this by leveraging the fact that specific problem instances do not have arbitrarily bad near-dual-degeneracy. While there are other pros and cons to consider, as we discuss and illustrate numerically, our results highlight a nuanced landscape for the enterprise of integrating estimation and optimization. Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
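
A minimal sketch of the naïve plug-in pipeline referred to above, with illustrative names and a toy feasible set of our own choosing: regress the cost vector on the context, then plug the prediction into the downstream linear program.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))                                 # contexts
    C = X @ rng.normal(size=(3, 4)) + rng.normal(size=(500, 4))   # cost vectors

    model = LinearRegression().fit(X, C)    # step 1: learn E[c | x]

    def decide(x_new):
        """Step 2: plug the predicted cost into min_z c_hat^T z over the simplex."""
        c_hat = model.predict(x_new.reshape(1, -1)).ravel()
        res = linprog(c_hat, A_eq=np.ones((1, 4)), b_eq=[1.0], bounds=[(0, 1)] * 4)
        return res.x

    z_star = decide(rng.normal(size=3))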

15:22
Learning Treatment Effects in Panels with General Intervention Patterns
PRESENTER: Tianyi Peng

ABSTRACT. The problem of causal inference with panel data is a central econometric question. The following is a fundamental version of this problem: Let $M^*$ be a low-rank matrix and $E$ be a zero-mean noise matrix. For a 'treatment' matrix $Z$ with entries in $\{0,1\}$ we observe the matrix $O$ with entries $O_{ij} := M^*_{ij} + E_{ij} + \mathcal{T}_{ij} Z_{ij}$, where $\mathcal{T}_{ij}$ are unknown, heterogeneous treatment effects. The problem requires us to estimate the average treatment effect $\tau^* := \sum_{ij} \mathcal{T}_{ij} Z_{ij} / \sum_{ij} Z_{ij}$. The synthetic control paradigm provides an approach to estimating $\tau^*$ when $Z$ places support on a single row. This paper extends that framework to allow rate-optimal recovery of $\tau^*$ for general $Z$, thus broadly expanding its applicability. Our guarantees are the first of their type in this general setting. Computational experiments on synthetic and real-world data show a substantial advantage over competing estimators.

15:44
bsnsing: A decision tree induction method based on recursive optimal boolean rule composition

ABSTRACT. I present a mixed-integer programming (MIP) formulation for optimizing split rule selection in the decision tree induction process, as well as an implicit enumeration algorithm that is able to solve practical instances of the MIP model faster than commercial solvers. The formulation is novel in that it directly maximizes the Gini reduction, an effective split selection criterion that, owing to its nonconvexity, had never before been modeled in a mathematical program. The proposed approach differs from other optimal classification tree models in that it does not attempt to optimize the whole tree; therefore, the flexibility of the recursive partitioning scheme is retained and the optimization model is more amenable to solution. The approach is implemented in an open-source R package named bsnsing. Benchmarking experiments on 75 open data sets suggest that bsnsing trees discriminate new cases better than trees trained by other decision tree codes, including the rpart, C50, party, and tree packages in R. Compared to other optimal decision tree packages, including DL8.5, OSDT, and, indirectly, others, bsnsing stands out in its training speed, ease of use, and broader applicability, without losing in prediction accuracy.
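
For reference, the criterion the MIP maximizes is the Gini reduction of a candidate split; a plain brute-force evaluation of that quantity (our sketch, not the package's MIP machinery) looks as follows.

    def gini(labels):
        """Gini impurity of a list of 0/1 labels."""
        n = len(labels)
        if n == 0:
            return 0.0
        p = sum(labels) / n
        return 2.0 * p * (1.0 - p)

    def gini_reduction(labels, split_mask):
        """Impurity decrease from partitioning by a boolean rule."""
        n = len(labels)
        left = [y for y, s in zip(labels, split_mask) if s]
        right = [y for y, s in zip(labels, split_mask) if not s]
        return (gini(labels)
                - (len(left) / n) * gini(left)
                - (len(right) / n) * gini(right))

    # A rule that perfectly separates the classes attains the maximal reduction.
    print(gini_reduction([0, 0, 1, 1], [True, True, False, False]))  # 0.5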

15:00-16:30 Session TC4: Mixed Integer Optimization and Applications
15:00
An Algorithm for Biobjective Mixed Integer Quadratic Programs

ABSTRACT. We develop a branch and bound (BB) algorithm to solve the biobjective mixed integer quadratic program (BOMIQP), a global optimization problem that has not been solved before. The algorithm computes the efficient set and the complete Pareto set in parametric form. It consists of five components: (i) a new practical fathoming module to fathom nodes of the BB tree; (ii) a new set domination procedure to filter dominated points and update the current nondominated set of the BOMIQP; (iii) a node problem solver to solve the linear programming relaxation of the BOMIQP at each node of the BB tree; (iv) an extended branching module, integrated with the node problem solver, to perform branching; and (v) initial efficient and Pareto sets to support the fathoming and set domination modules. Computational results on synthetic instances are provided.

15:22
Characterizing and Benchmarking QUBO Reformulations of the Knapsack Problem

ABSTRACT. It has been shown that quantum computers can outperform classical computers in solving some instances of NP-hard problems, for instance, the graph partitioning problem. To do this, a Quadratic Unconstrained Binary Optimization (QUBO) formulation is needed, but most of the literature regarding QUBO reformulations of constrained optimization problems is centered on equality-constrained problems. Here, we focus on the "simplest" inequality-constrained problem: the knapsack problem (KP). Specifically, we derive different QUBO formulations for the KP, characterize the ranges of their associated penalty constants, and computationally benchmark them through experiments using the quantum approximate optimization algorithm (QAOA) on a gate-based quantum computer. As a byproduct, we correct some erroneous results regarding QUBO reformulations of the KP reported in the literature.
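
One standard QUBO reformulation in this family, the binary-slack penalty encoding, can be sketched as follows; the exact variants and penalty ranges studied in the talk may differ, and the plain binary slack used here can represent values slightly above the capacity.

    import numpy as np

    def knapsack_qubo(values, weights, capacity, penalty):
        """QUBO for max v.x s.t. w.x <= C via binary slack bits s:
           minimize  -sum_i v_i x_i + P * (w.x + sum_j 2^j s_j - C)^2
        """
        n = len(values)
        m = int(np.floor(np.log2(capacity))) + 1   # number of slack bits
        coeff = np.concatenate([np.asarray(weights, float), 2.0 ** np.arange(m)])
        Q = penalty * np.outer(coeff, coeff)               # quadratic penalty
        Q += np.diag(-2.0 * penalty * capacity * coeff)    # linear penalty terms
        Q[np.arange(n), np.arange(n)] -= np.asarray(values, float)  # objective
        return Q   # constant offset P*C^2 omitted

    Q = knapsack_qubo(values=[6, 10, 12], weights=[1, 2, 3], capacity=5, penalty=20)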

15:44
An LP-based characterization of solvable QAP instances with chess-board and graded structures
PRESENTER: Lucas Waddell

ABSTRACT. The quadratic assignment problem (QAP) is perhaps the most widely studied nonlinear combinatorial optimization problem. It has a variety of applications, yet has proven to be extremely challenging to solve. This difficulty has motivated researchers to identify special objective function structures that permit an optimal solution to be found efficiently. Recent work has shown that certain such structures can be explained in terms of the continuous relaxation of a mixed 0-1 linear reformulation of the problem known as the level-1 reformulation-linearization-technique (RLT) form. Specifically, the objective function structures were shown to guarantee that a binary optimal solution exists to the continuous relaxation. We extend that work by considering other known solvable cases where the objective function coefficients have special chess-board and graded structures, and also characterizing them in terms of the level-1 RLT form. As part of this characterization, we develop a new relaxed version of the level-1 RLT form, the structure of which can be readily exploited to study the special cases under consideration.

16:06
Integer Programming Methods to Identify Nash Equilibrium Solutions for Platform Based Scheduling Games
PRESENTER: Lucky Cho

ABSTRACT. This paper proposes an integer programming approach for examining Nash equilibria arising in a game that models freelancer platforms. This Platform-Based Scheduling Game is played by clients, who each choose a single freelancer, and freelancers, who create a schedule of work based on the clients who choose them. To identify a Nash equilibrium schedule of the game, integer programming is utilized: the constraints form a one-to-one correspondence between Nash equilibria of the game and feasible solutions of the integer program. This correspondence allows the integer program to find optimal Nash equilibrium schedules for objectives that consider the platform, the freelancers, and the clients. It also allows us to calculate the price of anarchy and the price of stability, and to evaluate the loss in objective value for one stakeholder of the game when the game is optimized for another. We show that decentralized matching performs well for the clients and the freelancers compared to the centralized optimal matching, but the platform can suffer from allowing clients autonomy. The schedules that are optimal for the clients and freelancers can also hurt the platform's objective, and the enforcement of Nash equilibrium constraints does not guarantee equity in participants' satisfaction.

15:00-16:30 Session TC5: Application and Computation
Location: Furman Room
15:00
Using Multicriteria Optimization and Simulated Annealing to Construct Political Districts
PRESENTER: Blake Splitter

ABSTRACT. Political redistricting is a complicated, multi-faceted process that requires consideration of many different metrics, including district population, compactness, conservation of political boundaries, and voting fairness. To account for these competing metrics, we utilize multicriteria optimization and simulated annealing to automatically build maps with better political districts. In our efforts to improve maps with respect to multiple objectives, we apply district recombination, where a neighbor of the current map state is any map that redistributes the geographic units of any two adjacent districts into two new viable districts. By using multicriteria techniques to gradually improve many maps simultaneously, we can produce many high-quality maps, which can then be candidates for implementation.
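
A generic sketch of the simulated annealing loop over recombination neighbors; the scalarized score and the move generator are placeholders for the multicriteria machinery described above, not the authors' implementation.

    import math, random

    def anneal(initial_map, neighbor, score, T0=1.0, cooling=0.995, iters=10000):
        """Minimize a (scalarized) district-quality score by simulated annealing.

        neighbor(m): a recombination move re-splitting two adjacent districts
        score(m):    lower is better, e.g. a weighted sum of the competing metrics
        """
        current, T = initial_map, T0
        best = current
        for _ in range(iters):
            candidate = neighbor(current)
            delta = score(candidate) - score(current)
            if delta <= 0 or random.random() < math.exp(-delta / T):
                current = candidate            # accept improving or lucky moves
                if score(current) < score(best):
                    best = current
            T *= cooling                       # cool the temperature
        return best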

15:22
Application of Decomposition and Coordination Techniques on Multiobjective Engineering Design Problems
PRESENTER: Philip de Castro

ABSTRACT. We present a discussion of the implementation of a decomposition method on a multiobjective optimization problem in the context of engineering design. Decomposing a large All-in-One (AiO) problem into a set of multiobjective subproblems, each with a subset of the original objective functions, not only makes the calculation of Pareto efficient solutions computationally easier, but also aids in the necessary tradespace analysis. However, a coordination process is necessary to ensure that the efficient solutions of the subproblems can be assembled into (at least weakly) efficient solutions of the AiO problem, and that the efficient solutions of one subproblem perform "well enough" in the other subproblems to satisfy decision makers' preferences. The utility of a decomposition and coordination method is demonstrated on a real-world example: the design of an autonomous military vehicle. We develop an optimization model describing the design decisions, as well as a decomposition and a coordination method that utilize the structure of the proposed model. We then work through an example of the decision-making process to observe the benefits of this technique to the decision makers. (DISTRIBUTION A. Approved for public release; distribution unlimited. OPSEC#6024)

15:44
MOSDEX: Proposal for A New Standard for Data Exchange with Optimization Solvers
PRESENTER: Matthew Saltzman

ABSTRACT. This paper proposes a new standard called MOSDEX (Mathematical Optimization Solver Data EXchange) to significantly enhance the venerable MPS standard.

MOSDEX is based on several principles: independence from and support for multiple optimization solvers and their APIs and for multiple algebraic modeling languages, model-data separation, relational data modeling, and incorporation of standard optimization modeling objects. MOSDEX uses the widely adopted JSON data format standard to take advantage of JSON support in various programming languages, including Java, C++, Python, and Julia. The presentation demonstrates the principles of MOSDEX through examples taken from well-known optimization problems.

16:06
Getting to the Scenes Faster: Facilitating the Movement of Multiple Emergency Response Vehicles in Two-way Roadways

ABSTRACT. Emergency response vehicles (ERVs) should arrive at incident locations and transport patients to hospitals quickly. While non-ERV drivers attempt to move out of an ERV's way, they may be unsure about the ERV's path and cause conflicts. An emergency response system is proposed to facilitate the movement of multiple ERVs in a two-way transportation link using connected vehicle technology. The model addresses two goals: (1) minimizing ERVs' travel time and (2) maximizing safety. The system identifies optimal locations at which non-ERVs can stop safely to facilitate the passage of the ERV. The model permits ERVs to use a travel lane from the opposite direction (contraflow) based on the response importance. Due to the complexity of the Integer Linear Programming (ILP) model, a hybrid Non-Dominated Sorting Genetic Algorithm II-Particle Swarm Optimization (NSGAII-PSO) approach is used to solve the model faster. Different scenarios, including numbers and types of ERVs, road types, levels of congestion, and direction priority, have been tested. Results indicate that the proposed system outperforms the current practice (moving to the nearest edges) in travel time in some scenarios, thanks to contraflow movement, and in safety in all scenarios, since passing and weaving among non-ERVs are prohibited.

15:00-16:30 Session TC6: Emerging Applications in Optimization under Uncertainty
Location: Palmetto Room
15:00
Non-Adaptive Stochastic Score Classification and Explainable Halfspace Evaluation
PRESENTER: Rohan Ghuge

ABSTRACT. We consider the stochastic score classification problem. There are several binary tests, where each test $i$ is associated with a probability $p_i$ of being positive and a cost $c_i$. The score of an outcome is a weighted sum of all positive tests, and the range of possible scores is partitioned into intervals corresponding to different classes. The goal is to perform tests sequentially (and possibly adaptively) so as to identify the class at the minimum expected cost. We provide the first constant-factor approximation algorithm for this problem, which improves over the previously known logarithmic approximation ratio. Moreover, our algorithm is non-adaptive: it simply performs tests in a fixed order until the class is identified.

Our approach also extends to the $d$-dimensional score classification problem and the "explainable" stochastic halfspace evaluation problem (where we want to evaluate some function on $d$ halfspaces). We obtain an $O(d^2\log d)$-approximation algorithm for both these extensions.

Finally, we perform computational experiments that demonstrate the practical performance of our algorithm for score classification. We observe that, for most instances, the cost of our algorithm is within $50\%$ of an information-theoretic lower bound on the optimal value.
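
To illustrate the non-adaptive setting, the following toy sketch with unit weights (ours, not the paper's algorithm) performs tests in a fixed order and stops as soon as every still-achievable score lies in a single class.

    def classify(order, outcomes, breakpoints):
        """Run tests in the fixed `order` until the class is determined.

        outcomes[i]: the 0/1 result of test i, revealed only when performed
        breakpoints: sorted right endpoints of the score intervals; with unit
                     weights, the class of score s is the first breakpoint >= s
        """
        def cls(s):
            return next(j for j, b in enumerate(breakpoints) if s <= b)

        pos, remaining = 0, len(order)
        for i in order:
            if cls(pos) == cls(pos + remaining):   # class already pinned down
                break
            pos += outcomes[i]
            remaining -= 1
        return cls(pos)

    # Three tests, classes for scores {0,1} and {2,3}: two positives suffice.
    print(classify(order=[0, 1, 2], outcomes=[1, 1, 0], breakpoints=[1, 3]))  # 1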

15:22
Evaluation of Waste Disposal Alternatives in Healthcare Systems

ABSTRACT. To improve healthcare waste management, it is critical to determine proper treatment technologies for waste disposal. Four potential treatment technologies (alternatives) utilized to dispose of healthcare waste are a) incineration, b) steam sterilization, c) microwaving, and d) landfilling. This paper focuses on developing a new multi-criteria decision-making technique, based on integrating intuitionistic fuzzy set theory with the VIKOR method, to assess and rank healthcare waste disposal technologies more efficiently. The computational procedure of the proposed methodology is demonstrated through a case study of a hospital located in Ohio, U.S. The proposed approach estimates the GHG (greenhouse gas) reductions and potential economic benefits derived from increased recycling for the case study. In addition, analysis of the results shows that steam sterilization and microwaving are the best alternatives for disposing of healthcare waste, as they emit fewer pollutants and generate non-hazardous residues.
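
For orientation, the crisp VIKOR core that the proposed intuitionistic-fuzzy extension builds on can be sketched in a few lines; the scores and weights below are illustrative placeholders, not the case-study data.

    import numpy as np

    def vikor(F, w, v=0.5):
        """Classical VIKOR ranking; smaller Q means a better alternative.

        F: (alternatives x criteria) scores, larger = better; w: criteria weights.
        """
        f_best, f_worst = F.max(axis=0), F.min(axis=0)
        span = np.where(f_best > f_worst, f_best - f_worst, 1.0)
        D = (f_best - F) / span
        S = (w * D).sum(axis=1)    # group utility
        R = (w * D).max(axis=1)    # individual regret
        QS = (S - S.min()) / max(S.max() - S.min(), 1e-12)
        QR = (R - R.min()) / max(R.max() - R.min(), 1e-12)
        return v * QS + (1 - v) * QR

    # Toy: four disposal alternatives scored on emissions, cost, residue safety.
    F = np.array([[0.6, 0.4, 0.9], [0.9, 0.5, 0.8],
                  [0.8, 0.7, 0.8], [0.3, 0.9, 0.2]])
    ranking = vikor(F, w=np.array([0.5, 0.2, 0.3]))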

15:44
Stochastic Job-Shop Scheduling: Implications on Supply-Chain Operations under Uncertainty
PRESENTER: Mikhail Bragin

ABSTRACT. Motivated by the presence of uncertainties as well as combinatorial complexity within the links of supply chains, this talk addresses the outstanding and timely challenge of stochastic job-shop scheduling (JSS) problems arising within low-volume high-variety manufacturing. These problems have been classically formulated as integer linear programs, which are known to be NP-hard. Yet, optimal or near-optimal solutions must be obtained within strict computational time requirements. While the resulting deterministic cases have been solved efficiently by state-of-the-art methods such as branch-and-cut (B&C), uncertainties may compromise the entire schedule, thereby potentially affecting the entire supply chain downstream. Thus, uncertainties must be explicitly captured to ensure the feasibility of schedules at the operating stage. The stochastic nature of the resulting problem adds a layer of computational difficulty on top of an already intractable problem, as evidenced by the presented case studies, in which some cases run for days or even weeks without finding a “near-optimal” schedule. To efficiently solve the resulting stochastic JSS problem, Surrogate Lagrangian Relaxation is used to reduce computational effort while guaranteeing convergence. Computational results based on 20-job and 127-job instances demonstrate that the new method is two orders of magnitude faster than B&C.

16:06
Capacity Expansion for Deferred Surgeries via Robust Optimization
PRESENTER: Eojin Han

ABSTRACT. The COVID-19 pandemic has necessitated widespread deferrals of elective surgeries. These delays not only increase costs from the potential deterioration of patients' conditions, but also decrease revenue as patients depart. Current policies are ad hoc in nature and either defer all surgeries or expand capacities by pre-determined factors. However, the presence of uncertainty in surgery demand and patient departure, and its endogeneity to hospitals' operations, render existing capacity management approaches inapplicable. To this end, we develop a robust optimization framework for this surgical capacity management problem, in which the uncertain parameters and their endogenous nature are modeled via multilinear functions of uncertainties. To address this nonlinear structure, we propose the concept of a tree of uncertainty products and leverage it to provide tractable approximations of the overall problem, which significantly extends the current literature. Implementation of our approach on multi-year claims data from more than 15,000 hernia patients in the United States, a population that has experienced significant delays due to the pandemic, demonstrates sizable improvements over alternative methods and offers multiple practical insights.