Discrete Math and Optimization Seminar, TU Delft

Welcome!

Group seminars occur regularly on Fridays. There is also a Google calendar with the schedule.

Seminars of 2018

Nearly ETH-Tight Algorithms for Planar Steiner Tree with Terminals on Few Faces
Speaker: Erik Jan van Leeuwen (U. Utrecht)
When and Where: Friday, 7/12/2018, at 15:00, Hilbert room

Abstract:

In this talk, we survey some of the recent techniques to solve parameterized optimization problems on planar graphs, with a focus on the Steiner Tree problem. Many problems on planar graphs, such as deciding whether an \(n\)-vertex graph has a vertex cover of size at most \(k\), are known to be decidable in \(2^{O(\sqrt{k})} n^{O(1)}\) time, while such subexponential-time algorithms do not exist for general graphs unless the Exponential Time Hypothesis fails. The standard techniques to obtain subexponential-time algorithms on planar graphs, however, are not applicable to the Steiner Tree problem. In the talk, I will describe the recent progress on this problem for various relevant parameters before focusing on the case that all terminals can be covered by the boundary of \(k\) faces. Erickson et al. [Math. Oper. Res., 1987] showed that this problem can be solved in \(n^{O(k)}\) time and \(n^{O(k)}\) space. In the past 30 years there has been no significant improvement of this algorithm, despite several efforts. In recent joint work with Sandor Kisfaludi-Bak and Jesper Nederlof, accepted to SODA 2019, we give a subexponential-time algorithm for this problem with running time \(2^{O(k)} n^{O(\sqrt{k})}\) that uses only polynomial space. Furthermore, we show that the running time of the algorithm is almost tight: we prove that there is no \(f(k) n^{o(\sqrt{k})}\) time algorithm for any computable function \(f\), unless the Exponential Time Hypothesis fails.

Non-Urgent Patient Transfer Scheduling
Speaker: Travis Foster (Dalhousie University, Canada)
When and Where: Friday, 16/11/2018, at 15:00, Hilbert room

Abstract:

In Canadian and Dutch healthcare, Emergency Medical Services (EMS) are responsible for responding to urgent and non-urgent requests for transportation. As such, both countries utilize a system where ambulances are prioritized for urgent calls and secondary vehicles are used for non-urgent calls. Ambulances will help with non-urgent requests when urgent call volumes are low. However, with urgent and non-urgent call volumes increasing, ambulances are less likely to be available for non-urgent calls. EMS departments are looking to increase utilization of the secondary vehicles to deliver excellent service to their patients. Efficient scheduling of the secondary vehicles is one way to help maximize their utilization.

This seminar will examine non-urgent patient transfer operations and scheduling, and the Dial-A-Ride vehicle routing problem, with a focus on minimizing total travel time and on its implementation in Python. In addition, potential approaches to dealing with new calls that arrive during day-of operations will be reviewed.
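
To make the setting concrete, below is a minimal sketch (not the speaker's implementation) of a cheapest-insertion heuristic for a single-vehicle Dial-A-Ride-style instance that minimizes total travel time; the coordinates and requests are made up, and real instances would add time windows, vehicle capacities and maximum ride times.

```python
# Minimal cheapest-insertion sketch for a single-vehicle Dial-A-Ride-style problem.
# Illustrative only: real DARP instances add time windows, capacities and ride-time limits.

def travel(a, b):
    # Euclidean travel time between two points (x, y).
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def route_cost(route, depot):
    stops = [depot] + route + [depot]
    return sum(travel(stops[i], stops[i + 1]) for i in range(len(stops) - 1))

def insert_request(route, pickup, dropoff, depot):
    # Try every pickup/dropoff insertion position (pickup must precede dropoff)
    # and keep the cheapest resulting route.
    best = None
    for i in range(len(route) + 1):
        for j in range(i, len(route) + 1):
            cand = route[:i] + [pickup] + route[i:j] + [dropoff] + route[j:]
            cost = route_cost(cand, depot)
            if best is None or cost < best[0]:
                best = (cost, cand)
    return best[1]

depot = (0.0, 0.0)
requests = [((1, 2), (4, 2)), ((3, 0), (3, 5)), ((5, 5), (1, 1))]  # (pickup, dropoff) pairs

route = []
for pickup, dropoff in sorted(requests, key=lambda r: travel(depot, r[0])):
    route = insert_request(route, pickup, dropoff, depot)

print("route:", route)
print("total travel time:", round(route_cost(route, depot), 2))
```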

Maximum Likelihood Decoding for Noise Channels with Gain and/or Offset Mismatch
Speaker: Renfei Bu (TU Delft)
When and Where: Friday, 16/11/2018, at 14:00, Hilbert room

Abstract:

We consider communication and storage channels that are not only noisy, but that also distort the message by a gain and offset mismatch unknown to both sender and receiver. Achieving optimal decoding over these channels is a fundamental and challenging problem.

Given a received vector \(r\), the maximum likelihood (ML) decoding picks the most likely codeword \(\hat{x}\) which maximizes the probability that \(r\) is received, given that \(\hat{x}\) was sent. This scheme is the optimal decoding if all codewords are equally likely to be sent.

In this talk, I will present a general ML decoding framework for the Gaussian noise channel with gain and offset mismatch, which encompasses some existing conventional decoding criteria. ML decoding for the bounded noise and offset case will also be considered. In particular, for several decoding criteria including ML decoding, bounds are determined on the magnitudes of the noise and offset intervals that lead to a zero word error rate. Finally, I will introduce the channel model with varying mismatch, and discuss some recent progress in this line of research.
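
As a point of reference for the criterion described above, here is a minimal sketch of ML decoding over a small, made-up codebook for an additive Gaussian noise channel without mismatch, where ML decoding reduces to picking the codeword at minimum Euclidean distance; the mismatch-aware criteria from the talk generalize this.

```python
# Minimal sketch of ML decoding over a small codebook for an additive Gaussian
# noise channel WITHOUT mismatch: ML then reduces to minimum Euclidean distance.
# The criteria in the talk additionally account for unknown gain/offset.
import numpy as np

codebook = np.array([[0, 0, 1, 1],      # illustrative codewords
                     [0, 1, 0, 1],
                     [1, 0, 0, 1],
                     [1, 1, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
sent = codebook[2]
r = sent + 0.3 * rng.standard_normal(sent.shape)   # received vector

# argmax_x P(r | x) for Gaussian noise == argmin_x ||r - x||^2
dists = np.sum((codebook - r) ** 2, axis=1)
decoded = codebook[np.argmin(dists)]
print("received:", np.round(r, 2))
print("decoded :", decoded)
```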

Detection and Coding for Noisy Channels with Gain and/or Offset Mismatch
Speaker: Jos Weber (TU Delft)
When and Where: Friday, 9/11/2018, at 15:00, Hilbert room

Abstract:

Besides the omnipresent noise, other important inconveniences in communication and storage systems are formed by gain and/or offset mismatches. In flash memories, for instance, these are caused by charge leakage. The recently introduced Pearson distance based detection method is immune to unknown offset or gain, but this virtue comes at the cost of a higher noise sensitivity than the traditional Euclidean distance based detector.

In this lecture, first an extensive introduction to gain/offset mismatch issues is given. Then, maximum likelihood (ML) decision criteria are presented for (Gaussian) noise channels suffering from unknown gain and/or offset mismatches. These are compared to Euclidean and Pearson distance based detectors. Furthermore, constructions and properties are analysed for codes which have been designed to cope with the requirements imposed by the Pearson distance. Several performance analysis results are discussed as well.
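
To illustrate the trade-off between the two detectors, the sketch below (with a made-up codebook, gain, offset and noise level) compares a Euclidean-distance decision with a Pearson-distance decision, where the Pearson distance is taken here as one minus the Pearson correlation coefficient and is therefore invariant under a positive gain and an offset applied to the received vector.

```python
# Sketch comparing Euclidean- and Pearson-distance detection on a channel with
# unknown gain (and offset). The Pearson distance, taken here as 1 - corr(r, x),
# is invariant under r -> a*r + b with a > 0, while the Euclidean distance is not.
# The codebook, gain, offset and noise level are all made up.
import numpy as np

def euclidean(r, x):
    return np.sum((r - x) ** 2)

def pearson(r, x):
    rc, xc = r - r.mean(), x - x.mean()
    return 1.0 - np.dot(rc, xc) / (np.linalg.norm(rc) * np.linalg.norm(xc))

codebook = np.array([[0, 0, 0, 1], [0, 1, 1, 1], [1, 0, 1, 0]], dtype=float)
sent = codebook[1]

rng = np.random.default_rng(1)
received = 0.2 * (sent + 0.02 * rng.standard_normal(4)) + 0.1   # gain 0.2, offset 0.1

for name, dist in [("Euclidean", euclidean), ("Pearson  ", pearson)]:
    scores = [dist(received, x) for x in codebook]
    print(name, "decision:", codebook[int(np.argmin(scores))])
# Typically only the Pearson detector recovers the sent word [0, 1, 1, 1] here.
```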

Predictor-corrector interior-point algorithms for sufficient linear complementarity problems
Speaker: Tibor Illés (Budapest University of Technology)
When and Where: Friday, 26/10/2018, at 15:00, Hilbert room

Abstract:

The algebraic equivalent transformation (AET) of the system defining the central path was introduced by Darvay (2003) for the linear programming problem, resulting in new search directions for interior point algorithms (IPAs). Generalizing the AET to some types of IPAs for sufficient linear complementarity problems (SU-LCPs) can cause difficulties, especially for predictor-corrector (PC) ones. After overcoming these difficulties, we introduce new PC IPAs for SU-LCPs. Moreover, we present a unified discussion of the effect of different AETs on proposing and analysing new IPAs for SU-LCPs.

From a complexity analysis point of view, these new PC IPAs possess properties similar to those of the IPAs with the best known complexity: their iteration complexity is polynomial in κ, the dimension n, and the bit length L of the problem.

There are several known results (Fukuda and Terlaky, 1992; Hertog et al. 1993; Valiaho, 1996; Fukuda et al. 1998; P. Tseng 2000; de Klerk and Eisenberg-Nagy 2011) that discuss properties of sufficient matrices and explain why it is difficult to generate, recognize (decide) and use these matrices. Furthermore, some algorithms (Csizmadia and Illés, 2006; Illés et al. 2000, 2007, 2009, 2010; Csizmadia et al. 2013, etc.) have been developed that need no a priori information about the matrix properties. These algorithms either solve the LCP or its dual efficiently, or give a polynomial-size certificate that the matrix is not sufficient.

Recently, Illés and Morapitiye (2018) introduced some new ways of generating sufficient matrices, making it possible, for the first time, to test the computational performance of IPAs on sufficient LCPs with a positive κ parameter. Our new PC IPAs have been tested on a well-known triangular P-matrix designed by Zs. Csizmadia (2011), whose κ is at least exponentially large in the dimension n of the problem. The computational performance of one variant of our new PC IPAs was not affected by this: the number of iterations of this particular algorithm is not exponential in the dimension n. Unfortunately, two other variants of the new PC IPAs do show exponential iteration behaviour.
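
As a small aside on why recognizing such matrices is delicate: the brute-force check below tests the P-matrix property (all principal minors positive), which already takes exponential time in n; P-matrices form a subclass of the sufficient matrices, and deciding sufficiency itself is even harder, as the references above discuss. The test matrix is only an illustration, not Csizmadia's matrix.

```python
# Brute-force check of the P-matrix property (all principal minors positive),
# feasible only for small n; P-matrices form a subclass of the sufficient matrices.
# The matrix below is illustrative.
import itertools
import numpy as np

def is_P_matrix(M, tol=1e-12):
    n = M.shape[0]
    for size in range(1, n + 1):
        for idx in itertools.combinations(range(n), size):
            if np.linalg.det(M[np.ix_(idx, idx)]) <= tol:
                return False
    return True

M = np.array([[1.0, 0.0, 0.0],
              [-2.0, 1.0, 0.0],
              [-2.0, -2.0, 1.0]])   # lower-triangular with unit diagonal: a P-matrix
print(is_P_matrix(M))               # True: every principal minor equals 1
```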

Classifying C. albicans species by using mixed integer optimization based optimal classification trees
Speaker: Mick van Dijk (CWI)
When and Where: Friday, 12/10/2018, at 15:00, Hilbert room

Abstract:

Worldwide medical use of azole antifungals has led to an enormous increase in azole-resistant isolates of C. albicans, the species most commonly associated with fungal infections. A possible cause of resistance is point mutations in the ERG11 gene. To provide patient-specific medication, it would be beneficial to be able to classify, in silico, fungal isolates as resistant or susceptible and to identify the locations of the responsible point mutations. The aim of this thesis is to apply and compare several classification algorithms, in particular decision tree algorithms. Bertsimas and Dunn recently introduced a novel formulation based on Mixed Integer Optimization to generate optimal classification trees. We have implemented this method and applied it to the C. albicans data set to construct univariate and multivariate classification trees. Moreover, by adding extra constraints and variables to the original formulation, we are able to model extensions such as minimizing false negative misclassifications and putting bounds on hyperplane supports in multivariate classification.
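
The Bertsimas-Dunn formulation itself is too long to reproduce here, but the toy sketch below conveys the underlying idea of searching for a globally optimal tree rather than growing one greedily: it enumerates all univariate splits of a depth-1 tree (a stump) and keeps the one with the fewest misclassifications on made-up data.

```python
# Toy illustration of "optimal" tree construction: exhaustively search all
# univariate splits of a depth-1 tree for the minimum number of misclassifications.
# This is NOT the Bertsimas-Dunn MIO formulation, which handles deeper trees
# (and multivariate splits) via mixed integer optimization; the data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))                                  # 40 samples, 3 features
y = (X[:, 1] + 0.2 * rng.normal(size=40) > 0).astype(int)     # labels driven by feature 1

best = None
for j in range(X.shape[1]):
    for t in np.unique(X[:, j]):                              # candidate thresholds
        left, right = y[X[:, j] <= t], y[X[:, j] > t]
        # label each leaf with its majority class
        errors = min(left.sum(), len(left) - left.sum()) + \
                 min(right.sum(), len(right) - right.sum())
        if best is None or errors < best[0]:
            best = (errors, j, t)

print("optimal stump: feature %d, threshold %.3f, %d errors" % (best[1], best[2], best[0]))
```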

Parallel Machine Scheduling with a Single Resource per Job
Speaker: Celine Swennenhuis (TU Delft)
When and Where: Friday, 5/10/2018, at 15:00, Hilbert room

Abstract:

The presentation will address the problem of scheduling jobs on parallel machines to minimize the total completion time, where each job uses exactly one resource. We look at a variant of the shortest processing time rule as an approximation algorithm for the problem and show that it gives at least a \((2-\frac{1}{m})\)-approximation. Subsequently, we derive fundamental properties of optimal solutions of the problem. Finally, we show that, although the complexity of the problem remains open, a related problem is NP-hard. In this related problem, every resource also has a subset of machines on which it can be used. As a consequence, the problem as it is found in practice in the semiconductor industry is also NP-hard.
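
For intuition, here is a sketch of the plain shortest-processing-time list-scheduling rule on \(m\) identical machines; it ignores the single-resource constraint studied in the talk (without that constraint, SPT is known to be optimal for total completion time), and the job data are made up.

```python
# Sketch of the shortest-processing-time (SPT) list-scheduling rule on m identical
# machines for the total completion time objective. The per-job resource constraint
# studied in the talk is not modelled here; the job data is illustrative.
import heapq

def spt_schedule(processing_times, m):
    machines = [0.0] * m                     # heap of machine finish times
    heapq.heapify(machines)
    total_completion = 0.0
    for p in sorted(processing_times):       # jobs in SPT order
        start = heapq.heappop(machines)      # earliest-available machine
        finish = start + p
        total_completion += finish
        heapq.heappush(machines, finish)
    return total_completion

jobs = [3, 1, 4, 1, 5, 9, 2, 6]
print("total completion time:", spt_schedule(jobs, m=3))
```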

Display graphs and treewidth applied to problems in phylogenetics
Speaker: Mark Jones (TU Delft)
When and Where: Friday, 28/9/2018, at 15:00, Hilbert room

Abstract:

A core problem in phylogenetics consists of reconciling several incompatible phylogenetic trees into a simple phylogenetic network representing the full evolutionary history of a set of species. Any set of trees can be explained by a sufficiently complex network; the challenge is to find a network that is as close to being "tree-like" as possible.

A common approach for solving problems on tree-like networks is dynamic programming over a tree decomposition, for networks of small treewidth. This is a powerful technique that we would like to apply in phylogenetics. The problem is that this approach assumes that the network is known, whereas in our case the network is exactly what we are trying to find.

A possible stepping stone towards using dynamic programming techniques is the concept of display graphs, which represent the structure of multiple phylogenetic trees at once. It is known that if two trees are compatible with each other, then their display graph has bounded treewidth. In this talk I will discuss some recent progress in this area, and the potential for future dynamic programming approaches to problems in phylogenetics.

(This talk is partly based on joint work with Remie Janssen, Steven Kelk, Georgios Stamoulis, and Taoyang Wu.)
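
As a small illustration of the display graph concept (not taken from the work above), the sketch below glues two leaf-labelled trees together at their common leaves using networkx; the trees are made up.

```python
# Sketch of building the display graph of two leaf-labelled trees: take the
# disjoint union and identify leaves carrying the same label. Trees are illustrative.
import networkx as nx

def display_graph(tree1, tree2, leaves):
    # Prefix internal node names so the union is disjoint, then merge equal leaf labels.
    t1 = nx.relabel_nodes(tree1, {v: ("A", v) for v in tree1 if v not in leaves})
    t2 = nx.relabel_nodes(tree2, {v: ("B", v) for v in tree2 if v not in leaves})
    return nx.compose(t1, t2)   # leaves keep their labels, so they are glued together

leaves = {"a", "b", "c", "d"}
# Two binary trees on leaf set {a, b, c, d}, given by their edge lists.
T1 = nx.Graph([("r", "u"), ("r", "v"), ("u", "a"), ("u", "b"), ("v", "c"), ("v", "d")])
T2 = nx.Graph([("r", "u"), ("r", "v"), ("u", "a"), ("u", "c"), ("v", "b"), ("v", "d")])

D = display_graph(T1, T2, leaves)
print(D.number_of_nodes(), "nodes,", D.number_of_edges(), "edges")
```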

On the integrality gap of the maximum-cut SDP relaxation in fixed dimension
Speaker: Fernando M. de Oliveira Filho (TU Delft)
When and Where: Friday, 21/9/2018, at 15:00, Hilbert room

Abstract:

For fixed \(k \geq 4\) it is an open problem whether a rank-\(k\) solution of the maximum-cut semidefinite programming relaxation can be rounded to a rank-1 solution with a better approximation ratio than that achieved by the Goemans-Williamson algorithm. I will discuss this problem and present a factor-revealing optimization problem whose optimal value is exactly the best possible approximation ratio for rounding a rank-\(k\) solution. I will then show how this problem can be used to compute upper bounds on the best approximation ratio.

(Joint work with Frank Vallentin.)
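
For context, the sketch below shows the classical Goemans-Williamson random-hyperplane rounding applied to a given set of unit vectors, i.e., to an (here artificial) rank-\(k\) solution; computing an actual SDP solution or the factor-revealing bound from the talk is not attempted.

```python
# Sketch of random-hyperplane rounding of a vector (rank-k) solution of the
# max-cut SDP relaxation: the sign of <g, v_i> for a random Gaussian g assigns
# each vertex to a side of the cut. The graph and the vectors are illustrative.
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small example graph

# Pretend these unit vectors come from a rank-k SDP solution (here random, k = 3).
V = rng.normal(size=(4, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)

def round_once(V, rng):
    g = rng.normal(size=V.shape[1])                 # random hyperplane normal
    side = V @ g > 0
    return sum(side[i] != side[j] for i, j in edges)

best_cut = max(round_once(V, rng) for _ in range(100))
print("best cut value over 100 roundings:", best_cut)
```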

Integral crew scheduling approach to calculate the impact of night flights on crew productivity.
Speaker: Maarten Vonk (TU Delft)
When and Where: Friday, 14/9/2018, at 15:00, Hilbert zaal, 2.W.510

Abstract:

Traditionally, airline planning happens in several stages. First the flight schedule is created, in which the flight destinations and exact flight times are determined. Then the scheduled flights are merged with other flights to form pairings, which are potential work packages for crew members that conform to labor regulations and legal requirements. Finally these pairings are allocated to individual crew members to create roster-lines: complete sequences of pairings and individual tasks satisfying all labor requirements. This project aims to tackle these problems integrally, by creating a model that takes the flight schedule as input and calculates the minimum number of crew members necessary to realize the schedule. Because these problems are computationally prohibitive, several methods are used to reduce computation time, such as a layered network structure and column generation.

Finally, the model will be tested on various flight schedules that differ in their number of night flights. Because night flights come with stricter rest requirements, more crew members might be necessary to operate a schedule containing many night flights.

Optimization of battery usage in smart grids
Speaker: Eveline de Swart (TU Delft)
When and Where: Friday, 7/9/2018, at 15:00, Colloquiumzaal, ground floor 0.E420

Abstract:

Over a century from now, most of the earth's fossil fuel supply will have been burned up and humankind will have become reliant on renewable sources for its energy generation. The amount of energy that is generated by burning fossil fuels is directly dependent on the amount of fuel that is being burned, which is a factor that can be influenced. However, with renewable energy sources, the amount that is being generated depends on factors that cannot be influenced, such as the weather. Thus, if the demand for energy suddenly increases, it will no longer be possible to increase the generation of energy to meet this demand. This is where smart grids come into play: with the use of smart appliances, private renewable energy sources, and battery storage, the usage of energy can be steered in a manageable direction.

The number of private renewable energy sources has increased considerably in recent years. The excess energy that is generated by these sources can currently be sent back into the grid, and this amount is settled against the amount of energy that is bought. Because of this, the acquisition of a private battery to store excess energy is not profitable. However, within a few years a feed-in tariff will be in place that determines the reward for selling energy back into the grid. Once this tariff is in place, the acquisition of a battery will become profitable, provided that its usage is optimized.

This optimized usage of a private battery in a smart grid is the focus of my master thesis. I am researching different methods to determine the most profitable strategy, given the expected demand and the expected generation by private renewable energy sources.
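
As a rough sketch of the kind of optimization involved (not the thesis model), the linear program below schedules charging and discharging of a battery against a known hourly price signal, assuming perfect efficiency and ignoring household demand and feed-in details; all numbers are made up.

```python
# Minimal LP sketch for battery scheduling against a known hourly price signal:
# buy (charge) when cheap, sell (discharge) when expensive, subject to capacity
# and rate limits. Perfect efficiency and a known price forecast are assumed;
# all numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

price = np.array([0.10, 0.08, 0.25, 0.30, 0.12, 0.28])   # EUR/kWh per hour
T, cap, rate = len(price), 5.0, 2.0                       # kWh capacity, kWh/h rate limit

# Variables x = [charge_1..charge_T, discharge_1..discharge_T]; profit = sum p_t*(d_t - c_t).
c = np.concatenate([price, -price])                       # linprog minimizes, so cost = -profit

L = np.tril(np.ones((T, T)))                              # cumulative sums give state of charge
A_ub = np.block([[L, -L],                                 #  SOC_t <= cap
                 [-L, L]])                                # -SOC_t <= 0
b_ub = np.concatenate([cap * np.ones(T), np.zeros(T)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, rate)] * (2 * T))
charge, discharge = res.x[:T], res.x[T:]
print("profit (EUR):", round(-res.fun, 2))
print("charge   :", np.round(charge, 1))
print("discharge:", np.round(discharge, 1))
```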

A Fast Polynomial-Time Primal-Dual Projection Algorithm for Linear Feasibility Problems
Speaker: Zhang Wei (National University of Singapore)
When and Where: Friday, 29/6/2018, at 15:00, Turing room

Abstract:

Traditionally, the interior point method has been the most widely used polynomial-time algorithm for linear programming. Since the algorithm derived by Chubanov, projection-and-rescaling algorithms have become a potentially practical class of polynomial-time algorithms for linear feasibility problems and the more general linear programming problem. In practice, this kind of algorithm is found to perform better on infeasible instances than on feasible ones.

To address this issue, we explicitly develop the dual algorithm and then combine it with the primal algorithm, resulting in a primal-dual projection algorithm. Numerical experiments show that this primal-dual scheme can be competitive on linear feasibility problems.
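
For background, the sketch below implements only the elementary projection step that such methods build on, in the spirit of the classical relaxation method for \(Ax \leq b\); it is not Chubanov's projection-and-rescaling algorithm, which adds a rescaling phase to obtain polynomial-time guarantees. The instance is made up.

```python
# Sketch of a classical projection (relaxation) method for the feasibility system
# A x <= b: repeatedly project onto the most violated half-space. This illustrates
# the basic projection step only; it is NOT Chubanov's projection-and-rescaling
# algorithm. The instance below is illustrative.
import numpy as np

def project_feasible(A, b, max_iter=1000, tol=1e-9):
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        violation = A @ x - b
        i = int(np.argmax(violation))
        if violation[i] <= tol:
            return x                                   # feasible point found
        # Orthogonal projection onto the half-space {y : A[i] @ y <= b[i]}.
        x = x - (violation[i] / np.dot(A[i], A[i])) * A[i]
    return None                                        # no feasible point found in time

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, -1.0]])
b = np.array([4.0, -1.0, -2.0, 3.0])                   # encodes x1+x2<=4, x1>=1, x2>=2, x1-x2<=3
print(project_feasible(A, b))
```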

Scheduling surgery groups considering multiple downstream resources
Speaker: Theresia van Essen (TU Delft)
When and Where: Tuesday, 26/6/2018, at 16:00, Hilbert room

Abstract:

When scheduling surgeries, it is important to take into account not only the operating room (OR) restrictions, but also the length of stay (LoS) of patients in the intensive care (IC) unit and on the wards. Recent research has proposed methods to evaluate OR schedules with respect to the number of required beds, and, building on this, these methods have been used in an optimization procedure. However, the common approach in the literature is to schedule OR blocks, which leaves many options open for operational surgery scheduling, and this increases the variation in usage of both the ORs and the downstream beds. Therefore, we improve on this approach by scheduling surgery groups, which reduces the options for operational scheduling.

We propose both a single-step mixed integer linear programming (MILP) approach, which approximates the objective function, and a simulated annealing approach, which uses the original objective function. Both approaches are compared on a real-life data set, and the results show that the MILP performs best in terms of solution quality and computation time. Furthermore, the results show that our model may improve the OR utilization by 14% and decrease the bed usage variation by 42 beds compared to historical data.

User Equilibria in Multimodal Space-Time Networks
Speaker: Lianne Bruijns (TU Delft)
When and Where: Friday, 22/6/2018, at 15:45, Turing room

Abstract:

When solving transport problems, one can minimize the total cost of the network; in other words, a System Optimum (SO) is found. But instead of minimizing the total costs, it can be more desirable to keep the costs for each individual customer in the network as low as possible. With this goal in mind, we actually want to achieve a User Equilibrium (UE): a solution in which all customers in the network are satisfied with the solution (the costs they need to pay / the travel time they incur).

However, congestion can occur in the network, for example when the best route for multiple customers is to ship their containers via a certain barge with limited capacity. In that case it is not possible for all customers to take their cheapest route, and some customers are forced to take an alternative route.

In an SO solution, however, it can be the case that one customer has to pay a large amount of money to transport his containers (due to a long route or expensive modes) in order to offer other customers cheap routes, such that the total costs are minimized.

In a UE this is not a desirable solution, because that first customer will certainly not be satisfied with the cost he has to pay. A UE solution often has higher total costs than an SO solution.

For transport companies working as intermediaries between customers and transporters (such as barges, trains and trucks), it is desirable to keep the total costs as low as possible (= SO), but they also need to satisfy their customers (= UE).

In my research, I am investigating options for adding 'fair' tolls to a Space-Time Network, in other words redistributing the costs over the customers such that, when assigning containers to routes, an SO solution is obtained with respect to the total costs while the customers are also satisfied, so that we achieve an SO as well as a UE. I will compare adding tolls to paths with adding tolls to customers (orders).
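
The classical Pigou-style example below (congestion-based rather than capacity-based, so only an analogy for the container setting) illustrates the SO/UE gap and how a toll can reconcile the two; the numbers are the textbook ones.

```python
# Classical Pigou-style two-route example of the SO/UE gap. One unit of demand
# splits over route 1 (constant cost 1) and route 2 (cost equal to its flow x).
# Congestion-based, unlike the capacitated container setting, but the same phenomenon.
import numpy as np

x = np.linspace(0.0, 1.0, 10001)            # flow on route 2
total_cost = (1.0 - x) * 1.0 + x * x        # SO: minimize total cost over the split

x_so = x[np.argmin(total_cost)]
print("SO: route-2 flow %.2f, total cost %.3f" % (x_so, total_cost.min()))       # 0.50 / 0.750

# UE: no user can improve unilaterally, so everyone takes route 2 (its cost x <= 1 always).
x_ue = 1.0
print("UE: route-2 flow %.2f, total cost %.3f" % (x_ue, (1 - x_ue) + x_ue ** 2)) # 1.00 / 1.000

# A toll of 0.5 on route 2 makes the SO split an equilibrium: 0.5 + 0.5 = 1 = route-1 cost.
```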

Polynomial-Time Algorithms for Phylogenetic Inference Problems
Speaker: Yuki Murakami (TU Delft)
When and Where: Friday, 8/6/2018, at 15:45, Turing room

Abstract:

A common problem in phylogenetics is to try to infer a species phylogeny from gene trees. We consider different variants of this problem. The first variant, called Unrestricted Minimal Episodes Inference, aims at inferring a species tree based on a model of speciation and duplication where duplications are clustered in duplication episodes. The goal is to minimize the number of such episodes. The second variant, Parental Hybridization, aims at inferring a species network based on a model of speciation and reticulation. The goal is to minimize the number of reticulation events. It is a variant of the well-studied Hybridization Number problem with a more generous view on which gene trees are consistent with a given species network. We show that these seemingly different problems are in fact closely related and can, surprisingly, both be solved in polynomial time, using a structure we call beaded trees. However, we also show that methods based on these problems have to be used with care, because the optimal species phylogenies always have some restricted form. To overcome this problem, we introduce a new variant of Unrestricted Minimal Episodes Inference that minimizes the duplication episode depth. We prove that this new variant of the problem can also be solved in polynomial time.

Profit optimization in express networks
Speaker: Casper van Dijk (TU Delft)
When and Where: Friday, 25/5/2018, at 15:45, Hilbert room

Abstract:

Parcel delivery companies offer time-guaranteed transportation of parcels, letters and packages, picked up at one customer and delivered at another. The time in which this has to be done depends on the service level that the customer pays for. To transport the parcels, a network is used consisting of facilities and links, where the facilities have either a regional collection and distribution function, or an inter-regional processing function. The market share that a company has depends on the price and service time the company offers the customers in comparison with the offer of competitors in the market. Optimizing the total profit, the total revenue minus the total cost, is an important objective for these companies. The classical approach is to first determine the prices for the different services, which in turn determine the demand, and then to minimize the costs in the network. This project is aimed at finding a solution approach that integrates this process, and in this way finds better-quality solutions in a shorter amount of time. The chosen approach uses efficient formulations and a general purpose MIP solver for relatively easy problems, and a local search algorithm based on local branching for more difficult problems.

Advance/Long-term multi-priority surgery scheduling, while minimizing cancellations and rescheduling to fulfill desired access times.
Speaker: Saskia Vertregt (TU Delft)
When and Where: Friday, 18/5/2018, at 15:45, Dijkstrazaal, EWI (N.B.: this is another building!)

Abstract:

The gynecology department at the Leiden University Medical Center (LUMC) distinguishes three groups of patients with different priorities. Priority group 1 has the highest urgency and should be treated within two weeks. The department wants to treat priority 2 patients within four weeks and priority 3 patients within three months. This large difference in access times is the reason for this project. Priority 3 patients are not urgent, but want to know their surgery date well in advance, while priority 1 and 2 patients want to be treated as soon as possible. Currently, the scheduling of patients is done one or two weeks in advance and the set access times are not always fulfilled. The goal of this research is to make the scheduling process better for both the hospital and the patients. By increasing the scheduling horizon, patients are informed earlier about when their surgical procedure will be performed, and by maximizing the utilization, the hospital makes the most of its available operating room time.

We have looked at a few different methods to solve the multi-priority elective patient scheduling problem. We first used a Markov decision process that makes a scheduling decision based on the priority of the newly arrived patient and on the previously scheduled patients per priority and week. Unfortunately, the available computing resources were not powerful enough to compute the solution for a problem of our size. We have therefore defined a dynamic MIP method that schedules each of the three patient groups in a different manner.

Patients of priority 2 and 3 are scheduled with the use of a MIP, and priority 1 patients with a first-in-first-out policy on the first day with enough available capacity. The MIP is formulated so that the resulting schedule contains reserved time slots for patients with higher or equal urgency. A scheduling policy (first/best fit) decides which time slot from the schedule is assigned to an arriving patient of priority 2 or 3.

Multivariate positive definite functions on a sphere with application to the kissing number problem
Speaker: Olga Kuryatnikova (Tilburg University)
When and Where: Friday, 20/4/2018, at 15:45, Hilbert room

Abstract:

By the seminal theorem of I.J. Schoenberg, a bivariate continuous function on the unit sphere is invariant under the automorphisms of the sphere and positive definite if and only if it can be represented as a positive linear combination of Gegenbauer polynomials. We generalize the notion of positive definiteness to the notion of 2-PSD for multivariate functions and provide an extension of Schoenberg's theorem for such functions. A multivariate function is called 2-PSD if it is positive definite when all but the first two variables are fixed to any possible value. We show that a multivariate continuous function on the unit sphere is invariant under the automorphisms of the sphere and 2-PSD if and only if it can be represented as a convergent series produced by a sequence of products of positive definite functions and Gegenbauer polynomials. We use this result to formulate an SDP for a hierarchy of upper bounds on the kissing number, i.e., the maximum number of non-overlapping unit spheres that can simultaneously touch another unit sphere. The presentation will be self-contained.
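
The bivariate case of Schoenberg's theorem can be checked numerically: applying a nonnegative combination of Gegenbauer polynomials entrywise to the Gram matrix of unit vectors yields a positive semidefinite matrix. The sketch below does this for an arbitrary choice of dimension, degrees and coefficients.

```python
# Numerical illustration of (the bivariate case of) Schoenberg's theorem: a
# nonnegative combination of Gegenbauer polynomials C_k^alpha, applied entrywise
# to the Gram matrix of unit vectors on S^{n-1} (with alpha = (n-2)/2), gives a
# PSD matrix. The dimension, degrees and coefficients below are illustrative.
import numpy as np
from scipy.special import gegenbauer

n = 4                                   # unit sphere in R^4, so alpha = 1
alpha = (n - 2) / 2
coeffs = {0: 0.5, 1: 1.0, 3: 0.2}       # nonnegative coefficients c_k

rng = np.random.default_rng(0)
U = rng.normal(size=(30, n))
U /= np.linalg.norm(U, axis=1, keepdims=True)      # 30 random unit vectors
G = U @ U.T                                        # Gram matrix of inner products

F = sum(c * gegenbauer(k, alpha)(G) for k, c in coeffs.items())
print("smallest eigenvalue:", np.linalg.eigvalsh(F).min())   # >= 0 up to rounding
```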

The surprising links between Simulated Annealing and Interior Point Methods
Speaker: Etienne de Klerk (TU Delft)
When and Where: Friday, 9/3/2018, at 15:45, Hilbert room

Abstract:

In this talk I will describe a surprising link between interior point methods and simulated annealing, that was first described by Abernethy and Hazan [Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier, arXiv: 1507.02528, 2015].

It turns out that, for convex optimization, simulated annealing is in fact equivalent to an interior point method using a specific self-concordant barrier function, namely the entropic barrier, introduced by Bubeck and Eldan [The entropic barrier: a simple and optimal universal self-concordant barrier. arXiv:1412.1587, 2014.] This opens the possibility of implementing certain interior point methods using sampling techniques, as opposed to exact (expensive) calculations of Hessians of barrier functions.

I will give some preliminary results along these lines, based on the paper:

Etienne de Klerk, Francois Glineur, and Adrien Taylor, Worst-case convergence analysis of gradient and Newton methods through semidefinite programming performance estimation, arXiv:1709.05191, 2017.

Lower bounds on matrix factorization ranks via noncommutative polynomial optimization
Speaker: Sander Gribling (CWI)
When and Where: Friday, 16/2/2018, at 15:45, Hilbert room

Abstract:

In this talk I will present an overview of some of the results of [1,2], both of which are joint work with David de Laat and Monique Laurent. The basic problem we consider is the following: given an entrywise nonnegative \(m\)-by-\(n\) matrix \(M\), what is the smallest dimension in which we can find vectors \(x_1, \ldots, x_m, y_1, \ldots, y_n\) such that \(M_{ij} = \langle x_i,y_j\rangle\)? Different restrictions on the vectors lead to different factorization ranks. In particular, we consider the nonnegative rank, the positive semidefinite rank, and their symmetric analogues (where \(m=n\) and \(x_i = y_i\)): the completely positive rank and the completely positive semidefinite rank. To motivate the study of these ranks, I will explain the connections to linear programming and quantum information theory. We then proceed to formulate lower bounds on these factorization ranks using techniques from (tracial) polynomial optimization. Finally, we show how the same techniques (plus one new idea) lead to a new parameter of a bipartite quantum correlation: the minimal average entanglement dimension.

[1] Lower bounds on matrix factorization ranks via noncommutative polynomial optimization, arXiv:1708.01573.

[2] Bounds on entanglement dimensions and quantum graph parameters via noncommutative polynomial optimization, arXiv:1708.09696.
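
For contrast with the lower bounds of the talk, any exact entrywise-nonnegative factorization \(M = WH\) with inner dimension \(r\) shows that the nonnegative rank is at most \(r\). The sketch below looks for such a factorization numerically with scikit-learn's NMF on a made-up matrix; a small reconstruction error suggests, but does not prove, a nonnegative rank of at most 3.

```python
# An explicit entrywise-nonnegative factorization M ~= W H with inner dimension r
# gives (if exact) an upper bound r on the nonnegative rank; the talk is about
# lower bounds. Sketch with scikit-learn's NMF on an illustrative matrix.
import numpy as np
from sklearn.decomposition import NMF

M = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])

model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(M)          # m x r, entrywise nonnegative
H = model.components_               # r x n, entrywise nonnegative
print("reconstruction error:", np.linalg.norm(M - W @ H))
```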

Routing in intermodal transport
Speaker: Kishan Kalicharan (TU Delft)
When and Where: Friday, 19/1/2018, at 15:00, Hilbert Room W2.510

Abstract:

Intermodal transport is about transporting containers with multiple types of modes, such as barges, trucks and trains. One possible objective is to minimize the cost of routing the containers. Some of the problems in intermodal container transport can be modelled by an Integer Linear Program similar to the multi-commodity flow problem. Mathematical tools such as cutting planes, Dantzig-Wolfe decomposition, Lagrangian relaxation, column generation and more can be used to reduce the computation time, making the solution methods usable in practice.
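
As a highly simplified stand-in for the multi-commodity ILP mentioned above, the sketch below solves a single-commodity minimum-cost flow on a made-up three-terminal network with networkx; modality-specific capacities and costs sit on the arcs.

```python
# Single-commodity min-cost-flow sketch with networkx as a simplified stand-in for
# the multi-commodity ILP described above (one container type, per-arc capacities
# and costs). The network, capacities and costs are illustrative.
import networkx as nx

G = nx.DiGraph()
G.add_node("Rotterdam", demand=-10)                          # 10 containers leave the port
G.add_node("Venlo", demand=0)                                # inland terminal
G.add_node("Duisburg", demand=10)                            # 10 containers must arrive here

G.add_edge("Rotterdam", "Venlo", capacity=8, weight=2)       # barge leg: cheap, limited capacity
G.add_edge("Venlo", "Duisburg", capacity=8, weight=1)        # rail leg
G.add_edge("Rotterdam", "Duisburg", capacity=10, weight=6)   # direct truck: expensive

flow = nx.min_cost_flow(G)
print(flow)
print("total cost:", nx.cost_of_flow(G, flow))
```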