MOTOR 2022: MATHEMATICAL OPTIMIZATION THEORY AND OPERATIONS RESEARCH
PROGRAM FOR SUNDAY, JULY 3RD

10:00-12:00 Session 2A: Game Theory
10:00
Stability of Persons' Pareto Behavior with respect to Different Mechanisms in Resource Allocation Game Experiments

ABSTRACT. In the present paper, we analyze data obtained in experiments with resource allocation games under two mechanisms: the Yang–Hajek mechanism (YH) and a mechanism based on the alternating direction method of multipliers (ADMM). Previous research has shown some correlation between consensus achievement and the share of certain types of behavior. This paper considers behavior models based on best response and Pareto improvement. The aim of the research is to investigate how individuals' behavior varies when the mechanism changes. Behavior based on Pareto improvement is demonstrated by players under both mechanisms slightly more often than under only one of them. At the same time, more players demonstrate best-response behavior under either YH or ADMM alone than under both mechanisms. Players had an opportunity to rank the mechanisms subjectively; the dependence between players' behavior and this subjective ranking is also studied. Players may give a higher rank to the mechanism under which they behave according to the best-response and Pareto improvement models more often. There may be an inverse relationship between the ranking of the YH mechanism and the share of Pareto improvement behavior.

10:15
The gap between cooperative and non-cooperative solutions for the public-private partnership model

ABSTRACT. We study a new variant of the bi-level linear integer programming model for the strategic planning of public-private partnerships. In this model, we need to select and schedule some ecological, infrastructure, and production projects within a finite planning horizon. Two players, the leader (government) and the follower (private companies), make their decisions sequentially under weak budget constraints. We assume that an unused budget can be spent later. The players try to maximize their own total profits. We show that this new bi-level problem is $\Sigma^p_2$-hard in both the cooperative and non-cooperative cases. Moreover, we claim that the absolute and relative gaps between the optimal cooperative and non-cooperative solutions can be arbitrarily large. Computational results for real-world instances for the Transbaikalian polymetallic fields and the gap between the cooperative and non-cooperative solutions are discussed.

10:35
A cooperation scheme in multistage game of renewable resource extraction with asymmetric players

ABSTRACT. We derive non-cooperative and cooperative strategies and state trajectories for a finite-horizon multistage game of renewable resource extraction with asymmetric players. Assuming transferable utility, we extend the subgame-perfect core concept introduced for extensive-form games to the class of $n$-person multistage games and specify an algorithm for choosing a unique payoff distribution procedure from the core in a two-player game. This quasi-proportional payment schedule satisfies several desirable properties and can be applied to implement a cooperative solution based on the maximization of the relative benefit from cooperation (or the value of cooperation). We provide a numerical example to demonstrate the properties of the obtained solutions and the algorithm implementation.

10:50
Optimal strategies of consumers with energy storages in the electricity market

ABSTRACT. In the present work, we consider the problem of optimal control of energy storage by consumers buying energy on the wholesale market or under fixed tariff rates. Since maximal prices usually exceed minimal prices several times over, efficient control of the storage may provide a valuable profit. We determine the optimal consumption schedule and the optimal storage control, taking into account hour-specific consumption as well as a movable load that can be redistributed throughout the day. The model accounts for random factors, including energy production by renewable sources and weather-dependent demand. First, we study the case where the storage control is based on a reliable forecast of the random factors over the planning interval, and then examine the corresponding stochastic optimization problem, where future values of the random factors are characterized by a probability distribution. We determine the optimal strategy for markets where a consumer can sell excess energy back to the market at the same price, and also consider the problem where such a sale is impossible.
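The deterministic case described above (storage control under a reliable price forecast) can be illustrated with a small dynamic program over discretized storage levels. This is a toy sketch, not the authors' model: the integer units, the no-selling assumption, and the data are ours.

```python
# Illustrative DP for the deterministic case: known hourly prices and demand,
# a storage of integer capacity charged from the grid, no selling back.
# All names and the toy data below are assumptions, not from the paper.

def min_cost(prices, demand, capacity, max_rate):
    """Minimal purchase cost meeting demand each hour with a storage.

    State: storage level s (integer units) at the start of each hour.
    Decision: amount bought b; the storage level changes by b - demand[t].
    """
    INF = float("inf")
    T = len(prices)
    # cost[s] = minimal cost so far, ending the hour with storage level s
    cost = [INF] * (capacity + 1)
    cost[0] = 0.0  # the storage starts empty
    for t in range(T):
        new = [INF] * (capacity + 1)
        for s in range(capacity + 1):
            if cost[s] == INF:
                continue
            for b in range(0, demand[t] + max_rate + 1):
                s2 = s + b - demand[t]
                if 0 <= s2 <= capacity and abs(s2 - s) <= max_rate:
                    c = cost[s] + prices[t] * b
                    if c < new[s2]:
                        new[s2] = c
        cost = new
    return min(cost)

# A cheap night hour lets the storage pre-buy for the expensive hour.
print(min_cost(prices=[1.0, 5.0], demand=[1, 1], capacity=2, max_rate=2))  # → 2.0
```

Without the storage (capacity 0) the same instance costs 6.0, which is the kind of gap that motivates the optimal-control formulation.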

11:10
Gaussian One-Armed Bandit with Both Unknown Parameters

ABSTRACT. We consider the one-armed bandit problem as applied to data processing. We assume that there are two alternative processing methods, and the efficiency of the second method is a priori unknown. During the control process, one has to determine whether the second method is more efficient than the first one and to provide a predominant application of the more efficient method. The essential feature of the considered approach is that the data is processed in batches, and cumulative incomes within batches are used for the control. If the batch sizes are large enough, then according to the central limit theorem, the incomes within batches are approximately Gaussian. Also, if the batch sizes are large, one can estimate the variances of incomes while processing the initial batches and then use these estimates for the control. However, for batches of moderate size it is reasonable to estimate the unknown variances throughout the entire control process. This optimization problem is described by a Gaussian one-armed bandit with both parameters unknown. Given a prior distribution of the unknown parameters of the second action, we derive a recursive Bellman-type equation for determining the corresponding Bayesian strategy and Bayesian risk. The minimax strategy and minimax risk are sought, according to the main theorem of game theory, as the Bayesian ones corresponding to the worst-case prior distribution.
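The Bellman-type recursion mentioned above can be illustrated on a simpler relative: a Bayesian one-armed bandit with Bernoulli rewards and a Beta prior (the paper's setting is Gaussian with both parameters unknown, which is harder). All names and numbers below are illustrative assumptions.

```python
from functools import lru_cache

# Simplified Bernoulli analogue of the Bayesian one-armed bandit recursion.
# Arm 1 has a known mean reward MU0; arm 2 is Bernoulli(p) with p ~ Beta(a, b).
# This is an illustration, not the Gaussian both-parameters-unknown model.

MU0 = 0.5  # known mean of the first action (an assumption for illustration)

@lru_cache(maxsize=None)
def value(a, b, n):
    """Maximal expected total reward over n remaining steps."""
    if n == 0:
        return 0.0
    p = a / (a + b)                   # posterior mean of the unknown arm
    known = MU0 + value(a, b, n - 1)  # pull the known arm: posterior unchanged
    # Pull the unknown arm: observe success (posterior Beta(a+1, b))
    # or failure (posterior Beta(a, b+1)).
    unknown = p * (1.0 + value(a + 1, b, n - 1)) \
        + (1 - p) * value(a, b + 1, n - 1)
    return max(known, unknown)

# With a uniform prior Beta(1, 1), exploring the unknown arm cannot hurt:
print(value(1, 1, 10) >= 10 * MU0)  # → True
```

The recursion mirrors the structure described in the abstract: the state is the posterior plus the remaining horizon, and the Bayesian strategy takes the maximizing action at each state.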

11:30
Hyper-heuristic for the parallel machine scheduling with transport robots

ABSTRACT. We consider a scheduling problem originating from order picking at an automated warehouse. We have a finite set of orders, each consisting of a set of items. All orders should pass through the picking stage, where items are extracted from pallets and packed in boxes for customers. We have several identical parallel picking machines and several parking slots for pallets. Each machine can use the pallets from all parking slots. Each pallet should be brought from the automated warehouse to a parking slot and returned by a transport robot. The fleet of robots is limited. Our goal is to process all orders with minimal makespan. We use a hyper-heuristic approach to tackle the problem. A hyper-heuristic is a high-level algorithm that selects and combines several low-level heuristics to adapt to a particular problem. We design a population-based hyper-heuristic for this problem. Computational experiments on semi-synthetic test instances with up to 345 orders, 121 types of items, 30 transport robots, and 16 identical parallel machines are discussed.
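As a rough illustration of the selection idea behind hyper-heuristics (not the authors' population-based algorithm), the sketch below keeps a score per low-level operator and favors operators that have improved the incumbent. The toy objective, sorting a permutation by inversion count, is our assumption.

```python
import random

# Minimal selection hyper-heuristic sketch: low-level move operators compete,
# and the one that improves the incumbent more often is chosen more often.
# Toy problem standing in for the scheduling objective: sort a permutation.

def swap(p):
    i, j = random.sample(range(len(p)), 2)
    q = p[:]
    q[i], q[j] = q[j], q[i]
    return q

def reverse(p):
    i, j = sorted(random.sample(range(len(p)), 2))
    return p[:i] + p[i:j + 1][::-1] + p[j + 1:]

def objective(p):  # total number of inversions (0 when sorted)
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] > p[j])

def hyper_heuristic(p, iters=2000, seed=0):
    random.seed(seed)
    heuristics = [swap, reverse]
    score = [1.0] * len(heuristics)
    best, best_val = p, objective(p)
    for _ in range(iters):
        k = random.choices(range(len(heuristics)), weights=score)[0]
        cand = heuristics[k](best)
        cand_val = objective(cand)
        if cand_val <= best_val:          # accept non-worsening moves
            if cand_val < best_val:
                score[k] += 1.0           # reward the successful operator
            best, best_val = cand, cand_val
    return best, best_val

print(hyper_heuristic(list(range(10))[::-1]))
```

A population-based hyper-heuristic, as in the paper, would evolve several such incumbents in parallel; this single-point loop only shows the operator-selection mechanism.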

11:45
Consistent prize allocation in competitions

ABSTRACT. Given the final ranking of a competition, how should the total prize endowment be allocated among the competitors? We study consistent prize allocation rules satisfying elementary solidarity and fairness principles. In particular, we axiomatically characterize several families of rules satisfying anonymity, order preservation, and endowment monotonicity, all of which fall between the Equal Division rule and the Winner-Takes-All rule. Specific characterizations of rules and subfamilies are obtained directly. To our knowledge, this paper is the first to develop and motivate prize allocation rules for rank-order competitions directly from axioms.
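The two boundary rules named in the abstract are easy to state in code; the convex mix between them is our illustration of a rule lying between the two extremes, not one of the paper's characterized families.

```python
# Two boundary rules from the abstract, plus a convex mix between them.
# The mix is our illustrative assumption, not a family from the paper.
# Competitors are indexed by final rank: index 0 is the winner.

def equal_division(endowment, n):
    return [endowment / n] * n

def winner_takes_all(endowment, n):
    return [endowment] + [0.0] * (n - 1)

def mixed_rule(endowment, n, alpha):
    """alpha = 0 gives Equal Division, alpha = 1 gives Winner-Takes-All."""
    ed, wta = equal_division(endowment, n), winner_takes_all(endowment, n)
    return [(1 - alpha) * e + alpha * w for e, w in zip(ed, wta)]

prizes = mixed_rule(100.0, 4, 0.5)
print(prizes)  # → [62.5, 12.5, 12.5, 12.5]
# Order preservation: a better-ranked competitor never receives less.
assert all(a >= b for a, b in zip(prizes, prizes[1:]))
```

Any such mix also inherits anonymity and endowment monotonicity from its two endpoints, which is the sense in which the characterized families "fall between" the two rules.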

10:00-12:00 Session 2B: Integer Programming and Combinatorial Optimization
10:00
A knapsack problem for rectangles under a gravity center constraint

ABSTRACT. We consider a two-dimensional knapsack problem for rectangular items with a gravity center constraint. We have a set of rectangles with predefined widths, lengths, and masses, and a knapsack with known width and length. Our goal is to select a subset of the items and find a non-overlapping packing of them into the knapsack that minimizes the empty space in the knapsack, while the gravity center of the packed items deviates from the geometric center of the knapsack by at most a given constant along both axes. We use item permutations to represent solutions and the skyline heuristic as a decoding procedure. The gravity center constraint is relaxed and incorporated into the objective function with a penalty. To find the best permutation, we apply a simulated annealing algorithm with a swap neighborhood and a random starting permutation. Computational results for different classes of test instances are discussed.
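The solution scheme sketched in the abstract — permutations decoded by a packing heuristic, the gravity-center constraint penalized in the objective, simulated annealing with a swap neighborhood — can be outlined as follows. The decoder below is a simple shelf heuristic standing in for the skyline procedure, and all parameters and data are toy assumptions.

```python
import math
import random

# Sketch of the abstract's scheme with a shelf decoder in place of the skyline
# heuristic. Knapsack size, tolerance, penalty weight, and items are made up.

W, H, TOL, PEN = 10.0, 10.0, 1.0, 100.0

def decode(perm, items):
    """Place items (w, h, mass) shelf by shelf; return the penalized objective."""
    x = y = shelf = 0.0
    placed = []
    for k in perm:
        w, h, m = items[k]
        if x + w > W:                       # open a new shelf
            x, y, shelf = 0.0, y + shelf, 0.0
        if x + w <= W and y + h <= H:
            placed.append((x + w / 2, y + h / 2, w, h, m))
            x += w
            shelf = max(shelf, h)
        # otherwise the item is skipped (not selected)
    area = sum(w * h for _, _, w, h, _ in placed)
    mass = sum(m for *_, m in placed)
    gx = sum(cx * m for cx, _, _, _, m in placed) / mass
    gy = sum(cy * m for _, cy, _, _, m in placed) / mass
    dev = max(0.0, abs(gx - W / 2) - TOL) + max(0.0, abs(gy - H / 2) - TOL)
    return (W * H - area) + PEN * dev       # empty space + penalized deviation

def anneal(items, iters=3000, t0=50.0, cooling=0.999, seed=1):
    random.seed(seed)
    perm = list(range(len(items)))
    random.shuffle(perm)                    # random starting permutation
    cur = best = decode(perm, items)
    best_perm, t = perm[:], t0
    for _ in range(iters):
        i, j = random.sample(range(len(perm)), 2)   # swap neighborhood
        perm[i], perm[j] = perm[j], perm[i]
        cand = decode(perm, items)
        if cand <= cur or random.random() < math.exp((cur - cand) / t):
            cur = cand
            if cur < best:
                best, best_perm = cur, perm[:]
        else:
            perm[i], perm[j] = perm[j], perm[i]     # undo the swap
        t *= cooling
    return best_perm, best

items = [(4, 3, 2.0), (6, 3, 1.0), (5, 4, 3.0), (5, 4, 3.0), (3, 2, 1.0)]
print(anneal(items))
```

Swapping the shelf decoder for a true skyline decoder changes only `decode`; the annealing loop and the penalty treatment stay the same.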

10:20
Multiple project scheduling for a network roll-out problem: MIP formulation and heuristic

ABSTRACT. The paper addresses the Network Roll-Out (NRO) problem aimed at scheduling the construction of mobile stations. In this case, the NRO problem can be considered as a generalization of the Resource-Constrained Project Scheduling Problem, where we need to find a schedule for the activities related to the construction of a set of base stations, taking into account the precedence constraints and the availability of resources, while minimizing some measure of performance. The generalization involves multiple projects, multiple modes, and a discrete time-cost tradeoff, together with particular business requirements such as precedence redundancy and workload stability constraints.

To solve this problem, we propose a MIP formulation based on a generalization of the Disaggregated Discrete-Time formulation with pulse start variables. Using this formulation, we propose a three-stage heuristic based on a relax-and-fix strategy. The effectiveness and efficiency of the proposed approach are illustrated in a series of computational experiments on real-life problem instances.

10:40
On 1-skeleton of the polytope of pyramidal tours with step-backs

ABSTRACT. Pyramidal tours with step-backs are Hamiltonian tours of a special kind: the salesperson starts in city 1, then visits some cities in ascending order, reaches city $n$, and returns to city 1 visiting the remaining cities in descending order. However, in both the ascending and descending directions, the order of neighboring cities can be inverted (a step-back). It is known that the traveling salesperson problem over pyramidal tours with step-backs can be solved by dynamic programming in polynomial time.

We define the polytope of pyramidal tours with step-backs $\operatorname{PSB}(n)$ as the convex hull of the characteristic vectors of all possible pyramidal tours with step-backs in a complete directed graph. The 1-skeleton of $\operatorname{PSB}(n)$ is the graph whose vertex set is the vertex set of the polytope and whose edge set is the set of geometric edges, or one-dimensional faces, of the polytope. We present a linear-time algorithm to verify vertex adjacency in the 1-skeleton of the polytope $\operatorname{PSB}(n)$ and estimate the diameter and the clique number of the 1-skeleton: the diameter is bounded above by 4, and the clique number grows quadratically in the parameter $n$.

10:55
Cascade Merge of Aircraft Flows as Mixed Integer Linear Programming Problem

ABSTRACT. The problem of creating a conflict-free aircraft queue from several incoming flows is considered in the situation of a cascade merge. In this case, groups of flows merge at their joining points, and the resulting flows then merge at some other points. Such a consideration covers, for example, the entire scheme of incoming aircraft flows in an airport zone, as well as joint accounting for arriving and departing aircraft. This problem is formalized in the framework of mixed integer linear programming and is solved using the Gurobi optimization library. An approach is proposed to control the permutations of aircraft of the same type within one flow. A series of numerical simulations is carried out, including both model examples illustrating the ideas of the algorithm and statistical modeling to estimate the performance of the proposed procedure.

11:10
Control generation for aircraft to reach checkpoint at prescribed time instant

ABSTRACT. Nowadays, due to the significant growth of air traffic density, problems of automating the work of air traffic controllers (ATC) are very important. One of these problems is generating a control for an aircraft that allows it to reach a checkpoint at a prescribed time instant. Such a problem arises, for example, at a merge point of two or more aircraft flows, including merge points on landing routes in an airport zone. The arrival instant can be changed by altering the aircraft's trajectory (acceleration at straightening segments or delay at holding areas) or by altering its velocity. The authors suggest a formalization of the problem in the framework of mixed integer quadratic programming (MIQP). The criterion to be minimized is the number of interactions between the ATC and the pilot. The velocity-regime subproblem is solved numerically by means of the MIQP solver provided by the Gurobi library. Each subproblem is solved quite fast, within 0.01 to 0.1 seconds, so the solution of the entire problem (which includes the enumeration of trajectories and the choice of the optimal velocity regime for each of them) takes one to two seconds. Thus, the problem can be solved within a time acceptable for real-time applications.

11:25
On a Nonconvex Distance-Based Clustering Problem

ABSTRACT. Clustering is one of the basic data analysis tools and an important subroutine in many machine learning tasks. Probably the most well-known and popular clustering model is the Euclidean minimum-sum-of-squares clustering problem, also known as the $k$-means problem. Clustering with Bregman divergences is a generalization of the $k$-means problem in which the distances between data items and the closest cluster centers are computed according to an arbitrary Bregman divergence rather than the squared Euclidean distance. In this paper, we consider a mathematical programming problem of clustering with Bregman divergences. We propose several representations of the problem in the form of a DC (difference of convex) program and develop a DC programming approach to solve it. We provide particular DC representations and particular DC solution algorithms for several widely known Bregman divergences.
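A minimal sketch of Bregman clustering: for any Bregman divergence, the cluster representative minimizing the within-cluster sum is the arithmetic mean (a result of Banerjee et al.), so only the assignment step differs from ordinary k-means. The divergence choice (generalized I-divergence on positive scalars) and the data are our assumptions; the paper's DC programming approach is not reproduced here.

```python
import math
import random

# Bregman k-means sketch with the generalized I-divergence
# d(x, y) = x*log(x/y) - x + y on positive scalars (our choice for
# illustration). The centroid update is the plain mean for any Bregman
# divergence, so only the assignment step changes relative to k-means.

def i_div(x, y):
    return x * math.log(x / y) - x + y

def bregman_kmeans(data, k, iters=50, seed=0):
    random.seed(seed)
    centers = random.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda c: i_div(x, centers[c]))
            clusters[j].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.9, 10.0, 11.0, 9.5]
print(sorted(bregman_kmeans(data, 2)))
```

Like k-means itself, this alternating scheme only finds a local optimum; the DC decomposition in the paper is a way of attacking the same nonconvex objective globally.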

11:40
Three-Bar Charts Packing Problem

ABSTRACT. The Three-Bar Charts Packing Problem (3-BCPP) is to pack bar charts of three bars each (3-BCs) into a horizontal unit-height strip of minimal length. The bars of each 3-BC may move vertically within the strip, but it is forbidden to reorder or separate the bars. For this novel problem, which generalizes the NP-hard Two-Bar Charts Packing Problem (2-BCPP) considered earlier, we propose several approximation algorithms with guaranteed performance bounds.

12:00-12:20 Coffee Break
12:20-13:40 Session 3A: Operational Research Applications I
12:20
On complexity of two-machine routing open shop problem

ABSTRACT. In the routing open shop problem, a set of mobile machines has to traverse a given transportation network, process immovable jobs located at its nodes, and return to the initial location in the shortest time possible. The problem is known to be NP-hard even for the simplest case with two machines and two nodes. We consider a special case of this problem in which the processing time of any job does not depend on the machine. We prove that the problem in this simplified setting is still NP-hard for the same simplest case. To that end, we introduce a new problem we call 2-Summing and reduce it to the problem under consideration. On top of that, we suggest a 7/6-approximation algorithm for the two-machine problem with at most three nodes.

12:35
On the routing open shop on a tree with preemptions allowed

ABSTRACT. The routing open shop problem is a natural generalization of the metric TSP and the classical open shop scheduling problem. Jobs are located at the nodes of a given transportation network, and mobile machines have to perform operations on those jobs while traveling over the edges. Machines are obliged to return to the initial location after completing all operations. The goal is to minimize the makespan. We consider the routing open shop on a tree-like transportation network with preemption in a general setting, where travel times are machine- and direction-dependent. For this problem, we describe a broad polynomially solvable special case, for which the optimal makespan is guaranteed to coincide with the standard lower bound. To that end, we introduce a new problem setting with restricted preemption.

12:50
On Scheduler Problem in the Multi-User MIMO Framework

ABSTRACT. One of the key technologies in 5G wireless networks is Massive MIMO with multi-user pairing. We consider full-buffer and bursty traffic models in this framework. Possible generalizations of the proportional fair and shortest-remaining-processing-time-first schedulers to MIMO are discussed. Under some assumptions, we prove useful properties of optimal scheduler policies in both traffic models. We also propose heuristic algorithms that show good performance in realistic scenarios.

13:05
On Flow Shop Scheduling with Job-Dependent Storage Requirements

ABSTRACT. We consider the two-machine flow shop with limited storage (a buffer), where each job needs a certain storage space from the start till the end of its processing. The storage requirement varies from job to job and is proportional to the length of the first operation. We consider two options for loading the buffer. In the first variant, the job occupies the entire required space in the buffer from the start of the first operation until the completion of the second operation. In the second variant, the job occupies the buffer as the second operation is performed. The objective is to minimize the makespan. We establish an exact borderline between the NP-hard and polynomial-time solvable cases.

13:25
On the restricted parking permit problem

ABSTRACT. We consider the online parking permit problem, a natural generalization of the well-studied ski rental problem. Suppose that on certain days we drive to work by car. On any driving day, we must hold a valid parking permit. Whether we go by car or on foot depends on many factors, such as the weather, and is not known in advance. Permits have different costs and are valid for different numbers of days, with longer-duration permits tending to cost less per day. If we drive every day, it is advantageous to buy a permit for the longest period; if we drive only occasionally, it is more profitable to buy short-duration permits.

In our paper, we consider the parking permit problem with three types of permits. First, we provide a lower bound on the competitive ratio for this problem that matches the competitive ratio of Meyerson's algorithm. Next, we consider the problem in the learning-augmented setting and propose algorithms that use predictions when making their decisions. We analyze how the consistency and robustness of the algorithms depend on the accuracy and reliability of the predictions.
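For intuition, here is the classic break-even strategy for the two-permit special case (the ski rental problem), which is deterministically 2-competitive; Meyerson's multi-permit algorithm generalizes this threshold idea. The permit prices and driving sequence below are made up.

```python
# Break-even strategy for the two-permit special case (ski rental): pay the
# daily rate until buying again would bring the total paid to the season-pass
# price, then buy the pass. The instance data here is an illustrative
# assumption, not from the paper.

def ski_rental(drive_days, daily, season):
    """Online cost of the break-even strategy; drive_days is a 0/1 sequence."""
    paid, bought = 0.0, False
    for d in drive_days:
        if not d or bought:
            continue
        if paid + daily >= season:   # break-even point: buy the season pass
            paid += season
            bought = True
        else:
            paid += daily
    return paid

days = [1, 0, 1, 1, 1, 1, 0, 1]          # 6 driving days, not known in advance
alg = ski_rental(days, daily=10, season=40)
opt = min(10 * sum(days), 40)            # offline optimum knows the whole sequence
print(alg, opt, alg / opt)               # → 70.0 40 1.75
assert alg <= 2 * opt
```

With three permit types, as in the paper, the analogous threshold rule must be applied at two nested levels, which is where the harder lower-bound analysis comes in.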

12:20-13:35 Session 3B: Operational Research Applications II
12:20
An Approximate Algorithm for a Two-Stage Transportation Problem

ABSTRACT. The paper considers a transportation problem with uncertain demand, where storages must be supplied with goods so that the demand is satisfied. It is assumed that we know the total demand distributed among the storages, but not the distribution itself. At the first stage, we must choose the quantity of goods to bring from production facilities to the storages. After the initial quantities of goods are set based on preliminary knowledge of the demand distribution, the actual demand values are revealed, and a deficit forms in some of the storages. At the second stage, knowing the demand distribution, we must redistribute goods between the storages to remove the deficits. Thus, we make a decision under incomplete information and then adjust it. The cost of transporting a unit of goods at the second stage is much higher than at the first. It is required to decide how many goods to bring to the storages at the first stage so that the total cost of the two stages under the worst-case demand distribution is minimal.

The problem can be formulated as a three-level linear programming problem. A heuristic has been developed to solve the second-level subproblem of finding the worst-case redistribution. Its efficiency is compared with the Gurobi solver, which solves a one-level reformulation of the problem. Testing has demonstrated the ability of the heuristic to consistently find a solution close to the optimal one, while being significantly faster than the exact method.

To solve the main problem, a local search algorithm has been developed, at each step of which the subproblem of finding the worst-case redistribution is solved. The results obtained with the model are compared with the common approach based on optimizing the solutions of the two stages independently.

12:35
Local search for the problem of routing a rented freight transport

ABSTRACT. The paper considers a routing problem that arises when planning a campaign for the separate collection of recyclables. It is assumed that the time intervals within which the recyclable materials must be collected are given, as well as a finite set of scenarios that determine the expected volumes of recyclables collected at each of the collection points. Rented trucks of various capacities are used to transport the recyclables for processing. The cost of renting a truck depends on its type and the rental duration. It is required to construct routes visiting all the collection points that are common to all the scenarios and to determine the type of truck in each of the scenarios so as to minimize the annual rental cost.

To solve the problem, various modifications of local search were developed, such as simulated annealing, iterated local search, and others, as well as various heuristics for finding an initial solution. Their efficiency is compared with the Gurobi solver. The comparison was carried out on real data and on artificial data exhibiting various patterns of collection point locations. Testing has shown that the approximate methods are able to find a solution close to the optimal one much faster.

12:50
Approximation Non-List Scheduling Algorithms for Multiprocessor Systems

ABSTRACT. We consider the scheduling problem in which tasks have to be executed on several parallel identical processors. The execution time of each task is known. Precedence constraints between tasks are represented by a directed acyclic task graph. Any task can run on any processor, and each processor can perform at most one task at a time. Task preemption is not allowed. The number of processors $m$ is known, and the goal is to minimize the total execution time (makespan). The multiprocessor scheduling problem is one of the classic NP-hard optimization problems. The paper proposes an approximation inserted-idle-time algorithm for multiprocessor scheduling on parallel identical processors. We prove that the deviation of this algorithm from the optimum is smaller than twice the makespan of the optimal schedule. To illustrate the efficiency of our approach, we tested it on randomly generated task graphs.
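For contrast with the inserted-idle-time idea, here is a baseline greedy list-scheduling sketch (Graham's rule, with its classical (2 − 1/m) performance bound): it never inserts idle time, which is exactly what can make it suboptimal on instances like the toy one below. The DAG and durations are our assumptions.

```python
import heapq

# Baseline list scheduling on m identical processors with precedence
# constraints: whenever a processor is free, start any available task
# (one whose predecessors have all finished). This is NOT the paper's
# inserted-idle-time algorithm; it is the classical greedy it improves on.

def list_schedule(duration, preds, m):
    """duration: dict task -> time; preds: dict task -> set of predecessors."""
    indeg = {t: len(preds[t]) for t in duration}
    ready = [t for t in duration if indeg[t] == 0]
    succs = {t: set() for t in duration}
    for t, ps in preds.items():
        for p in ps:
            succs[p].add(t)
    running = []                 # heap of (finish_time, task)
    free, time, done = m, 0.0, 0
    while done < len(duration):
        while ready and free > 0:            # greedily start available tasks
            t = ready.pop()
            heapq.heappush(running, (time + duration[t], t))
            free -= 1
        finish, t = heapq.heappop(running)   # advance to the next completion
        time, free, done = finish, free + 1, done + 1
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return time

# Chain a -> b plus independent tasks c, d on 2 processors.
durations = {"a": 3, "b": 2, "c": 2, "d": 2}
preds = {"a": set(), "b": {"a"}, "c": set(), "d": set()}
print(list_schedule(durations, preds, m=2))  # → 7.0
```

On this instance the optimum is 5 (run the chain a, b on one processor and c, d on the other), so the greedy's 7.0 sits within Graham's (2 − 1/2)·5 = 7.5 bound; delaying c or d — i.e., inserting idle time — is what closes the gap.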

13:05
A POPMUSIC Approach for the Drilling Rig Routing Problem with Uncertainties

ABSTRACT. In this research, we continue our work on threshold robustness in the Drilling Rig Routing Problem (DRRP) that we investigated earlier. There is a set of customers, which are drilling sites requiring work on drilling wells. For each customer, we know the number of wells that need to be drilled within a given time interval, and it is allowed to partition the set of wells so that several drilling rigs perform the required work. The work on a well is unsplittable and must be performed by only one rig. A rig does not have to return to the depot it started from after it completes its work plan. This problem is close to an open uncapacitated multi-depot split delivery vehicle routing problem with time windows. The main difference from the traditional SDVRP is that it is the service time that is split, not the demand, and there are only a limited number of studies on such problems.

Yet, in the real world, unforeseen circumstances can affect drilling time and, if disregarded, can lead to a disruption of the work plan. Thus, in this problem, we maximize the possible deviations of drilling times from their expected values under which there still exists a feasible solution performing all well-drilling requests in time. At the same time, the total traveling costs must not exceed a given threshold. This is the so-called threshold robustness approach.

We expand the Adaptive Large Neighborhood Search (ALNS) scheme developed earlier by applying a POPMUSIC approach. The underlying idea of this method is the decomposition of the original problem into smaller subproblems that can be solved (optimally or sub-optimally) through MILP models. The selection of a part of the solution is adapted to the specifics of the problem. A MILP model for the subproblem is described. Extensive computational experiments are provided for the algorithm and the Gurobi solver, which show the ability of the POPMUSIC method to significantly improve the results obtained by ALNS.

13:20
Analysis of ordinal data using the method of decision roots on the case of bankruptcy of companies

ABSTRACT. Based on evaluation data for Russian real estate companies, the possibilities of data analysis using the decision-roots mechanism are demonstrated. This is a novel approach to ordinal data analysis, similar to the verbal decision analysis (VDA) approach in the field of unstructured problems with qualitative criteria. The foundations of the proposed method are set out in the works devoted to the identification of integrated rating mechanisms (IRM). The work uses a methodology whose main concepts are: the convolution tree, the convolution matrix, the optimization polynomial, and the analysis of equivalence groups. A convolution tree is a binary tree whose leaves are the indicators used for the assessment of objects and whose root is the integrated assessment used for ranking. The considered case is from the area of assessing the viability of companies. Data from the valuation of real estate companies are used, depending on five parameters: accounts receivable, accounts payable, retained earnings, inventories, and fixed assets. Initially, two hundred Russian construction companies were considered, of which one hundred were liquidated or are in the process of liquidation due to bankruptcy, and one hundred are economically prosperous companies for which no bankruptcy cases were opened. The balance sheets of the enterprises were used to assess their financial position. After duplicate and contradictory examples were removed, forty-eight examples remained, even though for this case, with a ternary scale on five leaves, the complete set consists of two hundred forty-three examples. Checking all one hundred and five possible structures of complete trees with named leaves revealed that the best approximation result for this problem implements forty-five out of forty-eight examples, and no structure implements all forty-eight examples.
The best approximation of the data is achieved on the structure M1l3M2l4M3l5M4l1l2. It was found that there are no groups on scale three, as one could expect for a uniform-scaled IRM, but from the allowed groups on scale five we can assemble some structures. Using this methodology, we also studied the design project evaluation case and the case of the Zenith football team. In all cases, the proposed approach makes it possible to find important dependencies in the data. The example under consideration shows that the proposed approach to the analysis of discrete data sets with qualitative or categorical assessments makes it possible to obtain a description of the initial data in the form of a mechanism. Using equivalence group analysis and upscaling, we built several mechanisms for the data being described. This functionality is applicable to areas where DCE is traditionally used, such as marketing, health care, and environmental valuation.
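The convolution-tree idea can be made concrete in a few lines: leaves carry criterion grades, and each internal node aggregates its two children through a convolution matrix. The matrix and tree below are illustrative assumptions, not the structure M1l3M2l4M3l5M4l1l2 identified in the study.

```python
# Sketch of evaluating an integrated rating mechanism: a binary convolution
# tree whose leaves are criterion grades (1..3) and whose internal nodes apply
# a convolution matrix M[a][b] -> aggregated grade. The matrix and the tree
# below are made-up illustrations, not the mechanisms identified in the paper.

M = [[1, 1, 2],
     [1, 2, 3],
     [2, 3, 3]]          # a monotone 3x3 convolution matrix over grades 1..3

def evaluate(node, grades):
    """node is either a leaf index into grades or a pair (left, right)."""
    if isinstance(node, int):
        return grades[node]
    a, b = (evaluate(child, grades) for child in node)
    return M[a - 1][b - 1]

tree = ((0, 1), (2, (3, 4)))     # five criteria aggregated pairwise
print(evaluate(tree, [3, 2, 1, 2, 3]))  # → 3
```

Identifying an IRM then amounts to searching over tree structures and matrix entries for the combination that reproduces the largest number of training examples, which is the enumeration described in the abstract.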

13:45-14:45 Lunch
15:00-16:45 Session 4A: Location Problems
15:00
Facility Location Problems as Sphere Covering or Packing Problems

ABSTRACT. The paper considers two dual problems: covering a closed convex set with equal spheres and packing spheres into the set. To solve them, we propose numerical algorithms based on the optical-geometric approach. The algorithms' distinctive feature is the replacement of the distance between objects with the travel time between them, which allows us to apply an approach based on the physical principles of light wave propagation. The proposed mathematical apparatus and software are useful for modeling since they allow us to take some additional assumptions and restrictions into account. For the sensor location problem, we can describe the situation when the sensor operating area does not have the shape of a ball for some reason. This can help improve the accuracy of the modeling and avoid the occurrence of blind spots. For logistics facilities, the proposed approach allows one to assess their transport accessibility and the attractiveness of their location more accurately. The results of the computational experiments are presented and discussed.

15:20
Ground Stations Location and Design for Software Defined Satellite Networks

ABSTRACT. In this paper, we formulate and suggest a solution method for the problem of ground station location and design for Software Defined Satellite Networks (SDSN). At each potential site, a ground station can be created with one of the possible designs or not created at all. A ground station design is described by the bandwidth of connections to the accessible satellites, the bandwidth of connections to other ground stations, and the cost of utilizing this design per unit of time. The system operates in discrete time frames, which are sufficiently short to assume that the quality of the communication channels and the client demands do not change within a time frame. The data packet routes for each origin-to-destination pair are computed at the Network Operations Control Center (NOCC).

The solution of the ground station location and design problem can be divided into two stages: 1) construction of ground stations; 2) routing of information packets between clients over a certain time horizon. The decision variables of the first stage define the set of constructed ground stations and their design variants. At the second stage, in each time frame of the horizon, the packet routes are computed, given the session requests and the current state of the connection graph. When the packet routing solutions are known for all time frames over the horizon, the SDSN telecommunication system can be simulated as a communication network with packet switching. The network states are modelled step by step. The quality of service over the time horizon is measured as the average number of lost or non-routed packets per unit of time; this serves as the optimization criterion. The overall budget for ground station utilization over the given time horizon is imposed as a constraint.

We consider the issues of efficient modelling of packet transition as a part of the objective function evaluation. A metaheuristic algorithm is proposed and experimentally evaluated on input data modelling one of the prospective SDSNs.

15:40
Computing an upper bound in the two-stage competitive location model

ABSTRACT. We consider a competitive facility location problem in the presence of uncertainty represented by a finite number of possible demand scenarios. The problem is formulated as a bi-level model built on the basis of a Stackelberg game and a classic facility location model formalizing the players' decision process. In the bi-level model, the first player (Leader) has two options to open a facility. We assume that a Leader's facility can be opened either before the actual demand scenario is revealed or after the revelation. The fixed cost associated with opening a facility is lower in the first case. Thus, the fixed costs can be reduced by making a preliminary location decision at the first stage and adjusting it at the second one.

We suggest a procedure to compute an upper bound for the Leader's profit value. The approach is based on using a family of auxiliary bi-level subproblems. Optimal solutions of the subproblems form a feasible solution of the initial problem. The upper bound is computed by applying a cut generation procedure to strengthen high-point relaxations of the subproblems.

15:55
Variable Neighborhood Search Approach for Two-Stage Location Problems

ABSTRACT. The report is devoted to the development of a Variable Neighborhood Search approach for two two-stage location problems. The first of them is the well-known two-stage uncapacitated facility location problem. The second problem arose when considering the planning of radio communication systems in the conditions of the Far North. It turned out that the problem of placing radio transmitters can also be described by a variant of the two-stage location model. Variable Neighborhood Search algorithms for the indicated problems were constructed. Features of the presented developments, such as the structure of the neighborhoods, the rules for exploring them, and other ideas are described. Experimental studies of the proposed algorithms and their results are discussed.

16:10
Local search for the problem of locating temporary recyclables collection points

ABSTRACT. The paper considers the location problem that arises when planning an action to collect recyclables. It is assumed that we are given a set of possible places for deploying collection points, the time intervals within which recyclables must be removed from the points, the removal costs, as well as a finite set of scenarios that determine the expected volumes of recyclables collected at each of the points and are realized with known probabilities. Rented trucks of various capacities are used to transport the collected recyclables to recycling facilities. The cost of renting a truck depends on its type and the duration of the rental.

It is required to find a set of deployed points that maximizes the amount of recyclables collected from all points while ensuring that the action pays for itself. To find quality solutions to the problem, modifications of a local search algorithm have been developed that use a surrogate model to quickly estimate the transport costs for a given location. To control the quality of the solutions found for the routing problem, a mixed-integer programming model was developed.

Testing was carried out on real data provided by a collection action held in the city of Novosibirsk, as well as on artificially generated data. All modifications have demonstrated the ability to find, in reasonable time, quality solutions for instances of a size unreachable for the exact method.

16:25
A genetic algorithm for the temporal bin packing problem

ABSTRACT. In this paper, we consider the temporal bin packing problem that originated from cloud computing (Ratushnyi, Kochetov, 2021). Suppose we are given a finite set of items, and for each item an arrival time, a processing time, and two weights (dimensions) are known. A certain subset of all items is called the set of large items. Each bin has two capacities (dimensions) and is divided into two identical parts. A large item should be divided into two identical parts and placed in both parts of a bin. Other items can be placed in one part of a bin if there is enough capacity there in both dimensions. The goal is to pack all items into the minimum number of bins of identical dimensions. We propose a genetic algorithm for this problem and describe the results of computational experiments.
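To make the bin structure concrete, the sketch below is a hypothetical first-fit baseline for this setting (it is not the authors' genetic algorithm); the convention that a large item contributes half of each weight to both halves of a bin is our assumption.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    arrival: int
    processing: int  # the item occupies its bin during [arrival, arrival + processing)
    w1: float
    w2: float
    large: bool = False

@dataclass
class BinPart:
    cap1: float
    cap2: float
    placed: list = field(default_factory=list)

    def fits(self, item):
        # Both capacities must hold at every time unit the item is active.
        for t in range(item.arrival, item.arrival + item.processing):
            active = [i for i in self.placed
                      if i.arrival <= t < i.arrival + i.processing]
            if (sum(i.w1 for i in active) + item.w1 > self.cap1
                    or sum(i.w2 for i in active) + item.w2 > self.cap2):
                return False
        return True

def first_fit(items, cap1, cap2):
    bins = []  # each bin is a pair of identical halves
    for it in sorted(items, key=lambda i: i.arrival):
        if it.large:  # assumed: a large item puts half of each weight in both halves
            it = Item(it.arrival, it.processing, it.w1 / 2, it.w2 / 2, True)
        target = None
        for left, right in bins:
            if it.large:
                if left.fits(it) and right.fits(it):
                    target = (left, right)
                    break
            elif left.fits(it):
                target = (left,)
                break
            elif right.fits(it):
                target = (right,)
                break
        if target is None:  # open a new bin
            left, right = BinPart(cap1, cap2), BinPart(cap1, cap2)
            bins.append((left, right))
            target = (left, right) if it.large else (left,)
        for part in target:
            part.placed.append(it)
    return bins
```

Such a greedy packing gives an upper bound on the number of bins and can serve as a starting point for population-based methods.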

15:00-16:50 Session 4B: Continuous Optimization
15:00
Decentralized convex optimization under affine constraints for power systems control

ABSTRACT. Modern power systems are undergoing massive changes. The increased penetration of distributed generation and the use of energy storage and controllable demand require the introduction of a new control paradigm that does not rely on the massive information exchange required by centralized approaches. Distributed algorithms can rely only on limited information from neighbours to obtain an optimal solution for various optimization problems, such as optimal power flow, unit commitment, etc.

As a generalization of these problems, we consider decentralized minimization of a smooth and convex partially separable function $f = \sum_{k=1}^l f^k(x^k,\tilde x)$ under the coupled affine constraints $\sum_{k=1}^l (A^k x^k - b^k) \leq 0$ and the shared affine constraints $\tilde{A} \tilde{x} - \tilde{b} \leq 0$, where the information about $A^k$ and $b^k$ is available only to the $k$-th node of the computational network.

One way to handle the coupled constraints in a distributed manner is to rewrite them in a distributed-friendly form using the Laplace matrix of the communication graph and auxiliary variables (Khamisov, CDC, 2017). Instead of using this method we reformulate the constrained optimization problem as a saddle point problem (SPP) and utilize the consensus constraint technique to make it distributed-friendly. Then we provide a complexity analysis for state-of-the-art SPP solving algorithms applied to this SPP.
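For illustration, one standard route to such a saddle-point reformulation (a sketch in our notation: the multipliers $\lambda^k$, $\tilde{\lambda}$ and the Laplacian $W$ are illustrative symbols, not necessarily those of the paper) is via the Lagrangian of the affine constraints, with local copies of the coupled-constraint multiplier forced to agree:

\[
\min_{x,\tilde{x}}\;\max_{\lambda^k \ge 0,\; \tilde{\lambda} \ge 0}\;
\sum_{k=1}^{l} f^k(x^k,\tilde{x})
+ \sum_{k=1}^{l} \bigl\langle \lambda^k,\, A^k x^k - b^k \bigr\rangle
+ \bigl\langle \tilde{\lambda},\, \tilde{A}\tilde{x} - \tilde{b} \bigr\rangle,
\qquad
\lambda^1 = \lambda^2 = \dots = \lambda^l,
\]

where the consensus constraint on the local dual copies can be written in a distributed-friendly form, e.g. as $W\lambda = 0$ for the Laplacian $W$ of the communication graph, so that each node only needs $A^k$, $b^k$ and its neighbours' dual variables.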

15:15
Sufficient Conditions for the Joined Set of Solutions of an Overdetermined Interval System of Linear Algebraic Equations to Belong to Only One Orthant

ABSTRACT. Interval systems of linear algebraic equations (ISLAE) are considered in the context of constructing linear models from data with interval uncertainty. Sufficient conditions are proposed for the boundedness and convexity of the admissible domain (AD) of an ISLAE and for its belonging to only one orthant of an $n$-dimensional space; these conditions can be verified in polynomial time by methods of computational linear algebra. In this case, the AD of the ISLAE turns out to be a convex bounded polyhedron lying entirely in the corresponding orthant. These properties of the AD allow us, firstly, to find solutions to the corresponding ISLAE in polynomial time by linear programming methods (while finding a solution to an ISLAE of general form is an NP-hard problem). Secondly, the coefficients of the linear model obtained by solving the corresponding ISLAE possess an analogue of the significance property, since they do not change their signs within the AD. The formulation and proof of the corresponding theorem are presented. The error estimation and the convergence of an arbitrary solution of the ISLAE to the normal solution of a hypothetical exact system of linear algebraic equations are also investigated. An illustrative numerical example is given.
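For reference, the joined (united) solution set of an ISLAE with interval matrix $[\underline{A},\overline{A}]$ and interval right-hand side $[\underline{b},\overline{b}]$ is standardly defined as

\[
\Xi \;=\; \bigl\{\, x \in \mathbb{R}^n \;:\; \exists\, A \in [\underline{A},\overline{A}],\; \exists\, b \in [\underline{b},\overline{b}] \;:\; Ax = b \,\bigr\};
\]

the notation here is ours, and the sufficient conditions of the talk assert that this admissible domain is a convex bounded polyhedron contained in a single orthant.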

15:30
On the Convergence Analysis of Aggregated Heavy-Ball Method

ABSTRACT. Momentum first-order optimization methods are the workhorses in various optimization tasks, e.g., in the training of deep neural networks. Recently, Lucas et al. (2019) proposed a method called Aggregated Heavy-Ball (AggHB) that uses multiple momentum vectors corresponding to different momentum parameters and averages these vectors to compute the update direction at each iteration. The authors show that AggHB is more stable than the classical Heavy-Ball method even with large momentum parameters and performs well in practice. However, the method was analyzed only for quadratic objectives and for online optimization tasks under the uniformly bounded gradients assumption, which is not satisfied for many practically important problems. In this work, we address this issue and propose the first analysis of AggHB for smooth objective functions in the non-convex, convex, and strongly convex cases without additional restrictive assumptions. Our complexity results match the best known ones for the Heavy-Ball method. We also illustrate the efficiency of AggHB numerically on several non-convex and convex problems.
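As a rough illustration of the update rule, here is a minimal sketch based on our reading of the aggregated heavy-ball idea: several heavy-ball velocities with different momentum parameters are updated in parallel and their average forms the step. The damping constants, step size, and test problem are illustrative assumptions, not the analyzed setting of the paper.

```python
import numpy as np

def agg_heavy_ball(grad, x0, lr=0.1, betas=(0.0, 0.9, 0.99), n_iters=2000):
    """Average several heavy-ball velocities with different momenta."""
    x = np.asarray(x0, dtype=float)
    v = [np.zeros_like(x) for _ in betas]  # one velocity buffer per beta
    for _ in range(n_iters):
        g = grad(x)
        for i, beta in enumerate(betas):
            v[i] = beta * v[i] - g          # heavy-ball velocity update
        x = x + lr * sum(v) / len(betas)    # step along the averaged velocity
    return x

# sanity check on a quadratic: minimize f(x) = ||x||^2 / 2, grad f(x) = x
x_star = agg_heavy_ball(lambda x: x, np.array([5.0, -3.0]))
```

The averaging passively damps the oscillations that a single large momentum parameter would cause, which is the stability effect the abstract refers to.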

15:45
An optimization approach to the robust control law design for a magnetic levitation system

ABSTRACT. This paper is devoted to the problem of robust control law design taking into account constraints imposed on the performance of the closed-loop nominal system. The magnetic levitation system is chosen as the control object, since its mathematical model is not accurate due to significant difficulties in describing the magnetic field, especially near the surface of the magnet. The proposed approach to control design is aimed at providing the best robust properties of the closed-loop system while maintaining acceptable performance indices of the nominal model. This aim is achieved by solving the optimization problem of expanding the frequency boundaries of robust stability over the admissible set, which includes regulators that place the eigenvalues of the closed-loop nominal system in a given region of the left half-plane. A specific feature of the developed controller is its multi-purpose structure, which allows achieving the desired system performance in different regimes. It is shown that the solution of the stated optimization problem is reduced to unconstrained nonlinear programming. The obtained results are compared with a linear quadratic regulator. Simulation examples in the MATLAB/Simulink environment are presented and discussed.

16:00
Distributed Methods with Absolute Compression and Error Compensation

ABSTRACT. Distributed optimization methods are often applied to solving huge-scale problems like training neural networks with millions and even billions of parameters. In such applications, communicating full vectors, e.g., (stochastic) gradients or iterates, is prohibitively expensive, especially when the number of workers is large. Communication compression is a powerful approach to alleviating this issue, and, in particular, methods with biased compression and error compensation are extremely popular due to their practical efficiency. Sahu et al. (2021) proposed a new analysis of Error Compensated SGD (EC-SGD) for the class of absolute compression operators, showing that, in a certain sense, this class contains optimal compressors for EC-SGD. However, the analysis was conducted only under the so-called $(M,\sigma^2)$-bounded noise assumption. In this paper, we generalize the analysis of EC-SGD with absolute compression to an arbitrary sampling strategy and propose the first analysis of EC-LSVRG with absolute compression for (strongly) convex problems. Our rates improve upon the previously known ones in this setting. Our theoretical findings are corroborated by several numerical experiments.
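To make the ingredients concrete, here is a hedged single-worker sketch of error compensation combined with an absolute (hard-threshold) compressor. The threshold, step size, and test problem are illustrative assumptions, not the paper's setting; the point is only the mechanics: untransmitted mass is kept in an error buffer and re-proposed at the next step.

```python
import numpy as np

def hard_threshold(v, lam):
    """Absolute compression: keep only coordinates with |v_i| >= lam."""
    return np.where(np.abs(v) >= lam, v, 0.0)

def ec_sgd(grad, x0, lr=0.1, lam=0.05, n_iters=500):
    x = np.asarray(x0, dtype=float)
    e = np.zeros_like(x)              # error-feedback buffer
    for _ in range(n_iters):
        p = e + lr * grad(x)          # proposed (uncompressed) step
        step = hard_threshold(p, lam) # transmitted, compressed step
        e = p - step                  # keep what was not transmitted
        x = x - step
    return x

# sanity check on a quadratic: minimize f(x) = ||x||^2 / 2, grad f(x) = x
x_star = ec_sgd(lambda x: x, np.array([4.0, -2.0]))
```

With a hard threshold, the error buffer is bounded by the threshold by construction, which is one intuition behind the favorable behavior of absolute compressors in error-compensated methods.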

16:15
Application of the Subdifferential Descent Method to a Classical Nonsmooth Variational Problem

ABSTRACT. The paper is devoted to the classical variational problem with a nonsmooth integrand of the functional to be minimized. The integrand is supposed to be subdifferentiable. Under some natural conditions, the subdifferentiability of the considered functional is proved. The problem of finding the subdifferential descent direction is solved, and then the subdifferential descent method is applied to the original problem. The developed algorithm is demonstrated on examples.

16:30
On precoding algorithm for highly correlated users in Massive MIMO systems

ABSTRACT. In this talk, a downlink Massive MIMO scenario with a pair of highly correlated multiple-antenna users is considered. We look for the precoding (beamforming) that is optimal in the sense of the sum of spectral efficiencies. In short, the main idea is as follows. First, we fix several support vectors for the first UE and consider their span V. Then we select the support vectors for the second UE from the orthogonal complement of V so that the beamforming gain is maximized. The proposed algorithm shows better results than the known baselines and the single-user (SU) case. This result is supported by theoretical proofs and confirmed by simulations.
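The construction can be sketched in a few lines of linear algebra. Below is a hedged illustration (the function name, dimensions, and the SVD-based gain maximization are our assumptions; the actual algorithm in the talk may differ): project the second user's channel onto the orthogonal complement of span V and take the dominant right singular vectors as candidate beams.

```python
import numpy as np

def complement_beams(V, H2, n_beams=2):
    """Pick beams for UE2 in the orthogonal complement of span(V) (UE1's span),
    maximizing the gain ||H2 w|| over that subspace via the SVD."""
    Q, _ = np.linalg.qr(V)                     # orthonormal basis of span(V)
    P = np.eye(V.shape[0]) - Q @ Q.conj().T    # projector onto the complement
    _, _, Wh = np.linalg.svd(H2 @ P)           # SVD of the restricted channel
    W = P @ Wh.conj().T[:, :n_beams]           # dominant complement directions
    W /= np.linalg.norm(W, axis=0, keepdims=True)
    return W
```

By construction the returned beams are orthogonal to span V, so they cause no interference to the first user while capturing as much of the second user's channel energy as the complement allows.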

16:45-17:00 Coffee Break
17:00-18:35 Session 5A: Operational Research Applications
17:00
Optimizing DSO Requests Management Flexibility for Home Appliances using CBCC-RDG3

ABSTRACT. In this article, we consider the statement of a problem described in the IEEE PES GM, CEC \& GECCO 2021 Competition "Evolutionary Computation in the Energy Domain: Smart Grid Applications" \cite{web_gecco}, dedicated to combining and testing advanced Computational Intelligence (CI) techniques on energy domain problems, namely the flexibility management of home appliances to support DSO requests. The main task is to provide aggregators' flexibility in distribution networks by building an optimal schedule that takes advantage of load flexibility resources. This, in turn, allows re-scheduling shifting/real-time home appliances to provision a request from a distribution system operator (DSO). To solve this problem, we use several optimization methods to find the best possible schedule that is efficient and consistent with all work-related constraints. Many heuristic, metaheuristic, and exact algorithms can solve this optimization problem; yet it is quite likely that the currently available state-of-the-art methods may benefit from improvement. For these reasons, taking into account the task at hand, we decided to use the CBCC-RDG3 algorithm \cite{CBCC_RDG3_original}. Comparisons with other popular algorithms demonstrate the efficiency and effectiveness of this particular approach.

17:15
A matheuristic for the trailers waiting time minimization with uncertain arrival times

ABSTRACT. We present a new loading/unloading trailer scheduling problem for a logistics company. There is a building with several warehouses. Each warehouse stores pallets of different types of products in rooms for loading into trailers. Each warehouse has two gates: one for the trailers and another for two forklifts from the central zone. At most one forklift can work in each warehouse at a time. The central zone is a production line. It produces products which must be placed in the warehouses under the no-wait rule. We know how the central zone works during each day, i.e., when each pallet becomes available for put-away. Outbound trailers come to pick up pallets of a product; inbound trailers come to put away pallets of a product. We assume that the arrival time of each trailer is uncertain. Our goal is to assign all trailers to warehouses and find a schedule serving all the trailers with the maximum stability radius, taking into account that the total waiting time of the trailers cannot exceed a given threshold. For this NP-complete problem, we design a two-stage matheuristic. First, we solve a simplified model using the Gurobi solver. Then, a VNS algorithm is used to return the solution to the feasible region, taking into account detailed information about the pallets in each warehouse. Computational results for 6 warehouses, 18 types of products, and 90 trailers are discussed.

17:35
Variational Autoencoders for Precoding Matrices with High Spectral Efficiency

ABSTRACT. Neural networks are used for channel decoding, channel detection, channel estimation, and resource management in multiple-input multiple-output (MIMO) wireless communication systems. In this paper, we consider the problem of finding precoding matrices with high spectral efficiency (SE) using a variational autoencoder (VAE). We propose a computationally efficient algorithm for sampling precoding matrices with minimal loss of quality compared to optimal precoding. In addition to the VAE, we use a conditional variational autoencoder (CVAE) to build a unified generative model. Both methods are able to reconstruct the distribution of high-SE precoding matrices by sampling latent variables. This distribution, obtained using the VAE and CVAE methods, is described in the literature for the first time.

17:55
Global Search in Optimization Problems with DC Inequality Constraints

ABSTRACT. We consider a nonconvex optimization problem with DC inequality constraints. Using exact penalization theory, we reduce the original problem to a penalized one without constraints. The latter turns out to be a DC minimization problem and can be solved by the Global Search algorithm, which consists of two basic stages: a special local search method and procedures for escaping critical points based on Global Optimality Conditions. The local search method (LSM) employs a consecutive solution of linearized problems combined with a change of the penalty parameter. Computational testing of the developed methods has been carried out on test problems. The effectiveness of the developed method lies not only in escaping the critical points provided by the LSM but also in obtaining global solutions.

18:10
Minimizing Makespan for Parallelizable Jobs with Energy Constraint

ABSTRACT. We investigate the problem of scheduling parallelizable jobs to minimize the makespan under a given energy budget. A parallelizable job can be run on an arbitrary number of processors, with an execution time that depends on the number of processors assigned to it. We consider malleable and moldable jobs. Processors can vary their speed to conserve energy using dynamic speed scaling. Polynomial-time algorithms with approximation guarantees are proposed. In our algorithms, a lower bound on the makespan and the processing times of jobs are calculated. Then the numbers of utilized processors are assigned to jobs, and a feasible solution is constructed using a list-type scheduling rule.

18:25
Branch-and-Cut Algorithm for the Precedence Constrained Generalized Traveling Salesman Problem

ABSTRACT. The Precedence Constrained Generalized Traveling Salesman Problem (PCGTSP) is an extension of the well-known Generalized Traveling Salesman Problem (GTSP), where feasible tours are restricted to visit all the clusters with respect to some given partial order. Unlike the GTSP, to the best of our knowledge, the PCGTSP is studied rather weakly, both in terms of polyhedral theory and in terms of algorithm design and implementation. In this paper, by extending Fischetti's seminal inductive approach, we establish the dimension of the PCGTSP polytope and prove sufficient conditions that allow us to lift the facet-inducing inequalities proposed by E. Balas for the Precedence Constrained Asymmetric TSP polytope to the case of the PCGTSP. Relying on these theoretical results, we design the first branch-and-cut algorithm for the PCGTSP and implement it within the Gurobi user-callbacks framework. Results of the numerical evaluation on the public PCGTSPLIB benchmark library show that the proposed algorithm outperforms both the state-of-the-art MIP solver Gurobi with its default cutting-plane settings and the known branch-and-bound and dynamic programming algorithms for the PCGTSP, even when all competing algorithms are equipped with the same MIP-start solution.

17:00-18:00 Session 5B: Optimal Control (Online Session)
17:00
Optimal control of output variables within a given range based on a predictive model

ABSTRACT. This paper is devoted to the problem of digital control design to keep the output variables of a controlled process within a given range. Such a problem is of particular importance in control practice when it is only necessary to maintain some variables of a dynamic process in a stated range rather than to track a reference signal. The control design approach is developed for a nonlinear digital model. It is based on a predictive model and a specially introduced cost functional minimized over the prediction horizon. This functional includes two terms: the first represents the intensity of the control actions, and the second is a penalty for violating the specified range. It is shown that the optimal control design at each instant of discrete time is reduced to a nonlinear programming problem. This problem always has a solution due to the admissible set, which includes additional variables guaranteeing the existence of a solution even when the given range is violated. An application of the proposed approach to oil refining in a distillation column is given. Examples of process simulation are presented and discussed.
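One illustrative form of such a functional (a sketch in our notation: the prediction horizon $N$, control increments $\Delta u$, slack variables $s$, and penalty weight $\rho$ are assumptions, not the paper's exact formulation):

\[
J \;=\; \sum_{j=0}^{N-1} \bigl\| \Delta u_{t+j} \bigr\|^2
\;+\; \rho \sum_{j=1}^{N} \bigl\| s_{t+j} \bigr\|^2
\quad \text{subject to} \quad
y_{\min} - s_{t+j} \;\le\; \hat{y}_{t+j} \;\le\; y_{\max} + s_{t+j},
\qquad s_{t+j} \ge 0 .
\]

The first sum measures the intensity of the control actions, the second penalizes range violations, and the slacks $s$ keep the admissible set nonempty even when the predicted outputs leave the range, matching the existence argument in the abstract.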

17:15
Synthesis of motion control of rod heating sources with optimization of measurement points of the process state

ABSTRACT. The paper proposes an approach to the problem of synthesizing control of the heating of a rod by point sources moving along the rod. The current values of the control actions on the movement of the sources are determined depending on the temperature at the measurement points, whose locations are optimized. It is proposed to use a linear dependence of the control actions on the measured temperature values. The constant coefficients involved in these dependencies are the desired feedback parameters. Thus, the problem of control synthesis for moving heat sources is reduced to a problem of parametric optimal control. Necessary optimality conditions for the feedback parameters are obtained. The formulas for the components of the functional's gradient involved in these conditions make it possible to solve the problem numerically using first-order optimization methods. The results of computer experiments are given.

17:30
Multi-agent systems with central control

ABSTRACT. A problem of terminal control on a fixed time interval is considered. On this interval, linear controlled dynamics evolve. By choosing a control, the system of differential equations generates a phase trajectory that describes some dynamic process. Discretization of the time interval is carried out: the initial interval is divided into n subintervals. Sections of the phase trajectory at the discretization points are considered. In the finite-dimensional spaces corresponding to these points, linear programming problems are additionally formulated, and the values of the phase trajectory at the discretization points must be the solutions of these problems. In the problem, it is required to choose a control from some convex closed set so that the phase trajectory corresponding to this control passes through the optimal solutions of all the linear programming problems. This problem is interpreted as a mathematical model for controlling a multi-agent system from a single center. In the model, connections between the agents represented by linear programming problems (for example, matrix connections) are not yet available, but they can easily be taken into account in this rather flexible setting. The model belongs to the class of complex optimal control problems. In the paper, gradient saddle-point methods are developed for computing the solution. The methods are guaranteed to converge to the exact solution of the original problem. The substantiation of convergence was carried out within the framework of an evidence-based methodology, meaning that convergence takes place in all components of the solution: weak convergence in control, and strong convergence in the phase and conjugate trajectories, as well as in the terminal variables of the intermediate problems.

17:45
Optimal control in a cluster-based hierarchical epidemic model

ABSTRACT. In the current study, we formulate a modification of the classical Susceptible-Infected-Recovered (SIR) spread model that describes the epidemic process in a clustered human population. The assessment of the global spread of the Novel Coronavirus (COVID-19) leads to the notion of a hierarchical structure for the whole population. In general, the population can be considered to be divided into three possible clusters: a city, a country, and the world.

At each level of the hierarchical system, the spread of the pathogen follows the Susceptible-Quarantined-Exposed-Infected-Recovered-Dead (SQEIRD) model. It is assumed that an epidemic starts in the first cluster. The migration process between levels then triggers the epidemic in the second and third clusters. Furthermore, each level of the hierarchy is defined by its network.

Controlling the spread of the pathogen can be considered as an optimal control problem. We construct the structure of the optimal control problem to minimize the cost incurred by the epidemic in the hierarchical epidemic model and analyze the properties of the key parameters such as the basic reproduction number and the steady states. A series of numerical experiments confirms the obtained results.