Research Seminar

The Research Seminar is a weekly seminar with talks by group members, guests and students on recent results in combinatorial optimization, algorithm design and related fields. Everyone is very welcome. If you have suggestions for talks or wish to present your own interesting results, or if you would like to subscribe to the list for talk announcements, feel free to send an email to the organizer, Jens Schlöter, or register here.

Talks take place either in person in building MZH at the University of Bremen or virtually as a video conference (for details, see the individual announcements below). If you are interested in attending a particular talk, please contact Jens Schlöter or register here.

Upcoming Talks

Sven Jäger (RPTU Kaiserslautern-Landau)

April 8, 2024 at 01:00 PM in room MZH 3150 and virtually over Zoom

Title: Simple Approximation Algorithms for Minimizing the Total Weighted Completion Time of Precedence-Constrained Jobs

Abstract: We consider the precedence-constrained scheduling problem to minimize the total weighted completion time. For a single machine, several 2-approximation algorithms are known, which are based on linear programming and network flows. We show that the same ratio is achieved by a simple weighted round-robin rule. Moreover, for preemptive scheduling on identical parallel machines, we give a strongly polynomial 3-approximation, which computes processing rates by solving a sequence of parametric flow problems. This matches the best known constant performance guarantee, previously attained only by a weakly polynomial LP-based algorithm. Both our algorithms are also applicable in non-clairvoyant scheduling, where processing times are initially unknown. In this setting, our performance guarantees improve upon the best competitive ratio of 8 known so far.

This is joint work with Philipp Warode.
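
To make the weighted round-robin rule concrete, here is a minimal simulation sketch (our illustration, not the authors' algorithm as stated in the paper): every unfinished job continuously receives a share of the machine proportional to its weight, and the total weighted completion time is read off the resulting schedule. The job format and function name are our own.

```python
# Hedged sketch: weighted round-robin on a single machine. At every point in
# time, each unfinished job is processed at a rate proportional to its weight.
# Jobs are (weight, processing_time) pairs, all released at time 0.
def weighted_round_robin(jobs):
    remaining = [p for _, p in jobs]
    completion = [0.0] * len(jobs)
    alive = set(range(len(jobs)))
    t = 0.0
    while alive:
        total_weight = sum(jobs[j][0] for j in alive)
        # Time until the first job finishes if the current rates stay fixed.
        dt = min(remaining[j] * total_weight / jobs[j][0] for j in alive)
        t += dt
        for j in list(alive):
            remaining[j] -= jobs[j][0] / total_weight * dt
            if remaining[j] <= 1e-9:
                completion[j] = t
                alive.remove(j)
    return sum(jobs[j][0] * completion[j] for j in range(len(jobs)))

print(weighted_round_robin([(1, 2.0), (3, 1.0), (1, 4.0)]))  # total weighted completion time
```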

Past Talks

Mik Zlatin (CMU)

February 20, 2024 at 01:45 PM virtually over Zoom

Title: Approximation Algorithms for Augmenting Steiner Network Connectivity

Abstract: Given a k-edge-connected graph G, how do we add a cheapest set of links to G so that it becomes (k+1)-edge-connected? This is the Connectivity Augmentation Problem (CAP) and is a fundamental problem in network design and combinatorial optimization. Primal-dual methods and iterative rounding achieve an approximation ratio of 2 for CAP, which was the state of the art until recent breakthroughs by Traub and Zenklusen, who brought the approximation ratio down to 1.5+epsilon using a local greedy method.

However, in many applications, we are not interested in the connectivity between all nodes in the network, but rather only between a subset of important nodes called terminals, while intermediary "Steiner" nodes simply exist to ensure connectivity between these terminals. In this talk, I will discuss generalizations of the Connectivity Augmentation Problem to this Steiner setting and discuss the challenges that arise when working with non-global connectivity requirements. Finally, I will demonstrate how to deal with these challenges and leverage the local greedy technique to obtain the first algorithms with approximation ratio better than 2 for this case.

Sander Borst (CWI)

January 30, 2024 at 09:00 AM virtually over Zoom

Title: Selection in Explorable Heaps

Abstract: The problem of finding the n-th smallest number in a collection is known as the selection problem. But what if the collection of numbers is not directly accessible to us? This is what explorable heap selection is about. It is the problem of selecting the n-th smallest value in a binary heap, where the values can only be accessed by traversing the binary tree. In this setting we want to minimize the distance traveled in the tree. The problem was originally proposed by Karp, Saks and Wigderson (FOCS '86), who also proposed algorithms for it. Recently, we found a new algorithm that significantly improves on the running time. In this talk I will explain the problem, our new algorithm, and the connection between the problem and the branch-and-bound algorithm for solving integer programs.
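
As a point of reference, the following sketch (ours, not the talk's algorithm) shows the straightforward best-first strategy for selecting the n-th smallest value in a binary min-heap. It bounds the number of visited nodes, but ignores the travel distance that the explorable-heap model charges for, which is exactly what the new algorithm improves. The array encoding of the heap is our simplifying assumption.

```python
import heapq

# Hedged baseline sketch: best-first search over an array-encoded binary
# min-heap. It pops the n-th smallest value after visiting O(n) nodes, but
# it may travel far in the tree between consecutive visits.
def select_nth(heap, n):
    frontier = [(heap[0], 0)]  # (value, index); index 0 is the root
    popped = 0
    while frontier:
        value, i = heapq.heappop(frontier)
        popped += 1
        if popped == n:
            return value
        for c in (2 * i + 1, 2 * i + 2):  # children in the array encoding
            if c < len(heap):
                heapq.heappush(frontier, (heap[c], c))
    raise ValueError("heap has fewer than n elements")

print(select_nth([1, 3, 2, 7, 4, 6, 5], 4))  # -> 4
```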

Moritz Buchem (TU Munich)

January 25, 2024 at 02:15 PM in room MZH 1470

Title: Improved approximation algorithms for (variants of) the minimum sum of radii problem

Abstract: Clustering is a fundamental problem setting with applications in many different areas. For a given set of points in a metric space and an integer $k$, we seek to partition the given points into $k$ clusters. For each computed cluster, one typically defines one point as the center of the cluster. A natural objective is to minimize the sum of the cluster radii, where each center is assigned the smallest radius $r$ such that each point in its cluster is at a distance of at most $r$ from the center. The best-known polynomial time approximation ratio for this problem is $3.389$. In the setting with outliers, i.e., we are given an integer $m$ and allow up to $m$ points that are not in any cluster, the best-known approximation factor is $12.365$.

In this paper, we improve both approximation ratios to $3+\epsilon$. Our algorithms are primal-dual algorithms that use fundamentally new ideas to compute solutions and to guarantee the claimed approximation ratios. For example, we replace the classical binary search to find the best value of a Lagrangian multiplier $\lambda$ by a primal-dual routine in which $\lambda$ is a variable that is raised. Also, we show that for each connected component due to almost tight dual constraints, we can find one single cluster that covers all its points and we bound its cost via a new primal-dual analysis. We remark that our approximation factor of $3+\epsilon$ is a natural limit for the known approaches in the literature.

Then, we extend our results to the setting of lower bounds. There are algorithms known for the case that for each point $i$ there is a lower bound $L_{i}$, stating that we need to assign at least $L_{i}$ clients to $i$ if $i$ is a cluster center. For this setting, there is a $3.83$-approximation if outliers are not allowed and a $12.365$-approximation with outliers. We improve both ratios to $3.5 + \epsilon$ and, at the same time, generalize the type of allowed lower bounds.

This is joint work with Katja Ettmayr, Hugo Kooki Kasuya Rosado and Andreas Wiese (published at SODA 2024).

Daniel Neuen (University of Bremen)

December 13, 2023 at 10:00 AM in room MZH 3150 and virtually over Zoom

Title: Compressing CFI Graphs and Lower Bounds for the Weisfeiler-Leman Refinements

Abstract: The k-dimensional Weisfeiler-Leman (k-WL) algorithm is a simple combinatorial algorithm that was originally designed as a graph isomorphism heuristic. It naturally finds applications in Babai's quasipolynomial time isomorphism algorithm, practical isomorphism solvers, and algebraic graph theory. However, it also has surprising connections to other areas such as logic, proof complexity, combinatorial optimization, and machine learning.


The algorithm iteratively computes a coloring of the k-tuples of vertices of a graph. Since Fürer's linear lower bound [ICALP 2001], it has been an open question whether there is a super-linear lower bound for the iteration number for k-WL on graphs. In this talk, I present a novel method of compressing CFI graphs that leads to an \Omega(n^{k/2})-lower bound for all k.

Joint work with Martin Grohe, Moritz Lichter and Pascal Schweitzer (published at FOCS 2023).


Alexander Lindermayr (University of Bremen)

December 06, 2023 at 10:00 AM in room MZH 3150 and virtually over Zoom

Title: Santa Claus meets Makespan and Matroids: Algorithms and Reductions

Abstract: In this paper we study the relation of two fundamental problems in scheduling and fair allocation: makespan minimization on unrelated parallel machines and max-min fair allocation, also known as the Santa Claus problem. For both of these problems the best approximation factor is a notorious open question; more precisely, whether there is a better-than-2 approximation for the former problem and whether there is a constant approximation for the latter.
While the two problems are intuitively related and history has shown that techniques can often be transferred between them, no formal reductions are known. We first show that an affirmative answer to the open question for makespan minimization implies the same for the Santa Claus problem by reducing the latter problem to the former. We also prove that for problem instances with only two input values both questions are equivalent.
We then move to a special case called "restricted assignment", which is well studied in both problems. Although our reductions do not maintain the characteristics of this special case, we give a reduction in a slight generalization, where the jobs or resources are assigned to multiple machines or players subject to a matroid constraint and in addition we have only two values. This draws a similar picture as before: equivalence for two values and the general case of Santa Claus can only be easier than makespan minimization. To complete the picture, we give an algorithm for our new matroid variant of the Santa Claus problem using a non-trivial extension of the local search method from restricted assignment. Thereby we unify, generalize, and improve several previous results. We believe that this matroid generalization may be of independent interest and provide several sample applications.

This is joint work with Étienne Bamas, Nicole Megow, Lars Rohwedder and Jens Schlöter.

Leoni Winschermann (University of Twente)

December 05, 2023 at 10:00 AM in room MZH 3150 and virtually over Zoom

Title: Relating Electric Vehicle Charging to Speed Scaling with Job-Specific Speed Limits

Abstract: Due to the ongoing electrification of transport in combination with limited power grid capacities, efficient ways to schedule electric vehicles (EVs) are needed for intraday operation of, for example, large parking lots. In this talk, we introduce the practical problem setting, as well as commonly used approaches like model predictive control. Such approaches repeatedly solve an offline scheduling problem to determine the next control action.
Then, we present and analyze the Flow-based Offline Charging Scheduler (FOCS), an offline algorithm to derive an optimal EV charging schedule for a fleet of EVs that minimizes an increasing, convex and differentiable function of the corresponding aggregated power profile. To this end, we relate EV charging to processor speed scaling models with job-specific speed limits.

Chien-Chung Huang (École Normale Supérieure, Paris)

November 14, 2023 at 10:00 AM in room MZH 3150 and virtually over Zoom

Title: Matroid-Constrained Maximum Vertex Cover

Abstract: We introduce the problem of Matroid-Constrained Vertex Cover: given a graph with weights on the edges and a matroid imposed on the vertices, our problem is to choose a subset of vertices that is independent in the matroid, with the objective of maximizing the total weight of covered edges. This problem is a generalization of the much studied max k-vertex cover problem, in which the matroid is the simple uniform matroid.
This problem is APX-hard and also W[1]-hard (with k being the parameter). Nonetheless, in this work, using two different techniques, we are able to design Fixed-Parameter Tractable Approximation Schemes (FPT-AS).

This is joint work with François Sellier at École des Mines.

Júlia Baligács (TU Darmstadt)

November 07, 2023 at 10:30 AM virtually over Zoom

Title: Exploration of Graphs with Excluded Minors

Abstract: We study the online graph exploration problem proposed by Kalyanasundaram and Pruhs (1994) and prove a constant competitive ratio on minor-free graphs. This result encompasses and significantly extends the graph classes that were previously known to admit a constant competitive ratio. The main ingredient of our proof is that we find a connection between the performance of the particular exploration algorithm Blocking and the existence of light spanners. Conversely, we exploit this connection to construct light spanners of bounded genus graphs. In particular, we achieve a lightness that improves on the best known upper bound for genus g ≥ 1 and recovers the known tight bound for the planar case (g = 0).

This is joint work with Yann Disser, Irene Heinrich and Pascal Schweitzer and was presented at ESA 2023.

Kim Klause (University of Bremen)

September 25, 2023 at 11:00 AM in room MZH 3150 and virtually over Zoom

Title: A Multi-Commodity Routing Problem: Using the Transitive Closure to Minimize Sortation Expenses (master's thesis defense)

Felix Hommelsheim (University of Bremen)

June 27, 2023 at 01:15 PM in room MZH 3150 and virtually over Zoom

Title: Recoverable Robust Optimization with Commitment

Abstract: We propose a model for recoverable robust optimization with commitment. Given a combinatorial optimization problem and uncertainty about elements that may fail, we ask for a robust solution that, after the failing elements are revealed, can be augmented in a limited way. However, we commit to preserving the remaining elements of the initial solution. We consider different polynomial-time solvable combinatorial optimization problems and settle the computational complexity of their robust counterparts with commitment. Somewhat surprisingly, we show for the weighted matroid basis problem that an optimal solution to the nominal problem is also optimal for its robust counterpart. Indeed, matroids are provably the only structures with this strong property. Robust counterparts of other problems, such as the matching and the stable set problem, are NP-hard, even in bipartite graphs. However, we establish polynomial-time algorithms for the robust counterparts of the unweighted stable set problem in bipartite graphs and the weighted stable set problem in interval graphs, also known as the interval scheduling problem.


Joint work with Nicole Megow, Komal Muluk, and Britta Peis.

Lene Monrad Favrholdt (University of Southern Denmark)

May 16, 2023 at 01:00 PM virtually over Zoom

Title: Online Minimum Spanning Trees with Weight Predictions

Abstract: We consider the minimum spanning tree problem with predictions, using the weight-arrival model, i.e., the graph is given from the beginning, together with predictions for the weights of all edges. The actual weights arrive one at a time and, for each weight, an irrevocable decision must be made regarding whether or not the corresponding edge should be included into the spanning tree. In order to assess the quality of our algorithms, we define an appropriate error measure and analyze the performance of the algorithms as a function of the error.  We prove that, according to competitive analysis, the simplest algorithm, Follow-the-Predictions (FtP), is optimal.  However, intuitively, one should be able to do better, and we present a greedy variant of FtP.  It turns out that, according to random-order analysis, the greedy version is strictly better.
We believe we present the first random-order analysis of a non-trivial online algorithm with predictions.  This may be useful for distinguishing between algorithms for other problems where the FtP algorithm is optimal according to competitive analysis.
Joint work with Joan Boyar, Kim S. Larsen, and Magnus B. Pedersen.
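
A minimal sketch of the Follow-the-Predictions idea as we read it from the abstract (the exact algorithm and the greedy variant are in the paper): build a spanning tree that is minimum with respect to the predicted weights, and then irrevocably accept exactly its edges as the true weights arrive. Kruskal's algorithm with union-find is our choice of MST subroutine.

```python
# Hedged sketch of Follow-the-Predictions (FtP), assuming the weight-arrival
# model from the abstract: decisions may depend only on the predicted weights.
def mst_edges(n, weighted_edges):
    # Kruskal's algorithm with a simple union-find (path halving).
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = set()
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.add(frozenset((u, v)))
    return tree

def follow_the_predictions(n, predicted):
    # predicted: dict mapping frozenset({u, v}) -> predicted weight
    tree = mst_edges(n, [(w, *sorted(e)) for e, w in predicted.items()])
    def decide(u, v, true_weight):
        # Called when the true weight of edge {u, v} arrives; the answer
        # is irrevocable and ignores true_weight entirely.
        return frozenset((u, v)) in tree
    return decide
```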

Daniel Neuen (University of Bremen)

May 9, 2023 at 12:00 PM in room MZH 3150 and virtually over Zoom

Title: Parameterized Algorithms for Graph Isomorphism - Decompositions via Regularity

Abstract: In the first part of the talk, I will give an overview of recent advances on the graph isomorphism problem focusing on quasi-polynomial parameterized algorithms with a running time of the form n^{polylog(k)}, where k is some graph parameter. The first such algorithm was obtained for k being the maximum degree of the input graphs [FOCS 2018] by extending the group-theoretic tools of Babai's quasi-polynomial time isomorphism test [STOC 2016]. Subsequently, this result was generalized in a series of works to all graphs that exclude the complete graph on k vertices as a topological subgraph [SODA 2022]. In particular, this includes all graphs of Euler genus at most k-7 as well as all graphs of tree-width at most k-2.

The key concept underlying this latest algorithm is the notion of t-CR-bounded graphs that serves as a gateway to combine the group-theoretic tools underlying the isomorphism test for bounded-degree graphs with decomposition-based approaches for graphs that exclude the complete graph on k vertices as a topological subgraph. In the second part of the talk, we explore the graph-theoretic side of the algorithm in more depth and discuss how regularity of the input graphs can be leveraged to obtain decompositions into parts with few symmetries that can then be handled by the group-theoretic graph isomorphism machinery.

Svenja M. Griesbach (TU Berlin)

April 27, 2023 at 09:15 AM in room MZH 3150 and virtually over Zoom

Title: Improved Approximation Algorithms for the Expanding Search Problem

Abstract: A searcher faces a graph with edge lengths and vertex weights, initially having explored only a given starting vertex. In each step, the searcher adds an edge to the solution that connects an unexplored vertex to an explored vertex. This requires an amount of time equal to the edge length. The goal is to minimize the vertex-weighted sum of the exploration times over all vertices. We show that this problem is hard to approximate and provide algorithms with improved approximation guarantees. For the general case, we give a (2e + ε)-approximation for any ε > 0. For the case that all vertices have unit weight, we provide a 2e-approximation. Finally, we provide a PTAS for the case of a Euclidean graph. Previously, for all cases only an 8-approximation was known. Joint work with Felix Hommelsheim, Max Klimm, and Kevin Schewior.

Konstantinos Dogeas (Durham University) 

April 19, 2023 at 04:30 PM in room MZH 3150 and virtually over Zoom

Title: Scheduling with predictions

Abstract: Using machine-learned predictions to create algorithms with better approximation guarantees is a very fresh and active field. In this talk, we'll go through classic scheduling problems and try to improve their bounds using the learning-augmented setting. More specifically, we'll see classical settings under the objective of minimizing the sum of completion times and focus on the problem of scheduling jobs with arbitrary release dates on a single machine. In the new learning-augmented setting, an algorithm which uses predictions to make decisions is both consistent (i.e., when the predictions are accurate, the performance of our algorithms is close to that of an optimal offline algorithm) and robust (i.e., when the predictions are wrong, the performance of our algorithms is close to that of an online algorithm without predictions).

Chhaya Trehan (University of Bristol) 

April 19, 2023 at 04:30 PM in room MZH 3150 and virtually over Zoom

Title: (1+ε)-Approximate Shortest Paths in Dynamic Streams

Abstract: Computing approximate shortest paths in the dynamic streaming setting is a fundamental challenge that has been intensively studied during the last decade. Currently existing solutions for this problem either build a sparse multiplicative spanner of the input graph and compute shortest paths in the spanner offline, or compute an exact single source BFS tree. Solutions of the first type are doomed to incur a stretch-space tradeoff of $2\kappa-1$ versus $n^{1+1/\kappa}$, for an integer parameter $\kappa$. (In fact, existing solutions also incur an extra factor of $1+\epsilon$ in the stretch for weighted graphs, and an additional factor of $\log^{O(1)} n$ in the space.) The only existing solution of the second type uses $n^{1/2-O(1/\kappa)}$ passes over the stream (for space $O(n^{1+1/\kappa})$), and applies only to unweighted graphs. In this paper we show that $(1+\epsilon)$-approximate single-source shortest paths can be computed in this setting with $\tilde{O}(n^{1+1/\kappa})$ space using just constantly many passes in unweighted graphs, and polylogarithmically many passes in weighted graphs (assuming $\epsilon$ and $\kappa$ are constant). Moreover, the same result applies to multi-source shortest paths, as long as the number of sources is $O(n^{1/\kappa})$. We achieve these results by devising efficient dynamic streaming constructions of $(1+\epsilon,\beta)$-spanners and hopsets. We believe that these constructions are of independent interest. Joint work with Michael Elkin.

Kevin Schewior (University of Southern Denmark) 

January 25, 2023 at 09:00 AM in room MZH 3150 and virtually over Zoom

Title: Knapsack Secretary Through Boosting

Abstract: We revisit the knapsack-secretary problem (Babaioff et al.; APPROX 2007), a generalization of the classic secretary problem in which items have different sizes and multiple items may be selected if their total size does not exceed the capacity $B$ of a knapsack. Previous works show competitive ratios of $1/(10e)$ (Babaioff et al.), $1/8.06$ (Kesselheim et al.; STOC 2014), and $1/6.65$ (Albers, Khan, and Ladewig; APPROX 2019) for the general problem but no definitive answers for the achievable competitive ratio; the best known impossibility remains $1/e$ as inherited from the classic secretary problem. In an effort to make more qualitative progress, we take an orthogonal approach and give definitive answers for special cases.

Our main result is on the $1$-$2$-knapsack secretary problem, the special case in which $B=2$ and all items have sizes $1$ or $2$, arguably the simplest meaningful generalization of the secretary problem towards the knapsack secretary problem. Our algorithm is simple: It $\textit{boosts}$ the value of size-$1$ items by a factor $\alpha>1$ and then uses the size-oblivious approach by Albers, Khan, and Ladewig. We show by a nontrivial analysis that this algorithm achieves a competitive ratio of $1/e$ if and only if $1.40\lesssim\alpha\leq e/(e-1)\approx 1.58$.

Towards understanding the general case, we then consider the case when sizes are $1$ and $B$, and $B$ is large. While it remains unclear if $1/e$ can be achieved in that case, we show that algorithms based only on the relative ranks of the item values can achieve precisely a competitive ratio of $1/(e+1)$. To show the impossibility, we use a non-trivial generalization of the factor-revealing linear program for the secretary problem (Buchbinder, Jain, and Singh; IPCO 2010).

Elias Pitschmann (University of Bremen)

January 17, 2023 at 09:00 AM in room MZH 3150 and virtually over Zoom

Title: Prophet Inequalities over Time

Abstract: We introduce an over-time variant of the well-known prophet inequality with i.i.d. random variables. Instead of stopping with one realized value at some point in the process, we decide for each step how long we select the value. Then we cannot select another value until this period is over. The goal is to maximize the expectation of the sum of selected values. We describe the structure of the optimal stopping rule and give upper and lower bounds on the prophet inequality, which, in online algorithms terminology, correspond to bounds on the competitive ratio of an online algorithm.

We give a surprisingly simple algorithm with a single threshold that results in a prophet inequality of approximately 0.396 for all input lengths n. Additionally, as our main result, we present a more advanced algorithm resulting in a prophet inequality of approximately 0.598 when the number of steps tends to infinity. We complement our results by an upper bound showing that the best possible prophet inequality is at most $1/\varphi \approx 0.618$, where $\varphi$ denotes the golden ratio. As part of the proof, we give an advanced bound on the weighted mediant.

Stefan Walzer (Universität zu Köln) 

December 20, 2022 at 09:00 AM in room MZH 3150 and virtually over Zoom

Title: Simple Linear Set Sketching

Abstract: Imagine a hash table with n buckets into which m keys are inserted. However, instead of using a collision resolution strategy, we combine keys that hash to the same bucket using xor. This seems like a terrible idea, since there is no way to obtain x and y from x⊕y. The strategy becomes useful if each key is stored in three independent positions. If the load m/n is less than ≈ 0.81, then the set of keys can be reconstructed with high probability. This way of obtaining a data structure f(X) from a set X can be viewed as linear set sketching. "Sketching" means that the sizes m and n of X and f(X) can be selected independently, which makes f(X) a lossy encoding of X if n << m. "Linear" means that f(X △ Y) = f(X) ⊕ f(Y), where △ denotes the symmetric difference of sets and ⊕ denotes bucket-wise xor of the sketches. Linearity is an appealing property in streaming and distributed settings. For instance, given a stream of insertions and deletions, consider the task of reporting in the end the set D of elements that have been inserted but not deleted. We obtain a very simple algorithm with optimal space and time requirements up to small constants for this problem. Our work can be understood as a simplification of invertible Bloom filters.
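
A minimal sketch of the xor trick described above, with our own hash choices (three positions per key derived from SHA-256); the peeling procedure that reconstructs the key set at load below ≈ 0.81 is omitted. The linearity property f(X △ Y) = f(X) ⊕ f(Y) can be checked directly.

```python
import hashlib

# Hedged sketch of linear set sketching: each key is xor-ed into three
# hash-derived buckets. Reconstruction (peeling) is omitted here.
def _positions(key, n):
    digest = hashlib.sha256(key.encode()).digest()
    # Distinct positions, so a key cannot cancel itself within one bucket.
    return {int.from_bytes(digest[4 * i:4 * i + 4], "big") % n for i in range(3)}

def _fingerprint(key):
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[-8:], "big")

def sketch(keys, n):
    buckets = [0] * n
    for key in keys:
        for pos in _positions(key, n):
            buckets[pos] ^= _fingerprint(key)
    return buckets

# Linearity: the sketch of the symmetric difference is the bucket-wise xor;
# the shared key "y" cancels out.
a, b = sketch({"x", "y"}, 16), sketch({"y", "z"}, 16)
assert [u ^ v for u, v in zip(a, b)] == sketch({"x", "z"}, 16)
```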

Felix Hommelsheim (University of Bremen) 

December 07, 2022 at 09:30 AM in room MZH 3150 and virtually over Zoom

Title: Matching Augmentation via Simultaneous Contractions

Abstract: We consider the matching augmentation problem (MAP), where a matching of a graph needs to be extended into a 2-edge-connected spanning subgraph by adding the minimum number of edges to it. We present a polynomial-time algorithm with an approximation ratio of 13/8 = 1.625, improving upon an earlier 5/3-approximation. The improvement builds on a new $\alpha$-approximation preserving reduction for any $\alpha \geq 3/2$ from arbitrary MAP instances to well-structured instances that do not contain certain forbidden structures like parallel edges, small separators, and contractible subgraphs. We further introduce, as key ingredients, the technique of repeated simultaneous contractions and provide improved lower bounds for instances that cannot be contracted.

Vijayaragunathan Ramamoorthi (University of Bremen) 

November 16, 2022 at 09:00 AM in room MZH 3150 and virtually over Zoom

Title: Dominating Sets on Uncertain Graphs

Abstract: In the real world, many complex objects are modeled as graphs. For instance, social networks, protein interaction networks, and chemical compounds are modeled as graphs. In some cases, uncertainty is inherent in these networks for various reasons. It is convenient to model such networks with uncertain data as uncertain graphs. An uncertain graph, denoted by $\mathcal{G}$, is a triple that consists of a vertex set $V$, an edge set $E$ and a probability function $p:E\to [0,1]$ that assigns to each $e \in E$ an existential probability $p(e)$. An uncertain graph $\mathcal{G}$ implicitly denotes the probability space of its edge subgraphs, called possible worlds. A distribution model uniquely determines a probability distribution of the possible worlds. In this thesis, we study the budgeted dominating set problem on uncertain graphs with two well-studied distribution models, namely (1) the linear reliability ordering (LRO) model and (2) the independent distribution (IP) model.

Given a vertex-weighted uncertain graph $\mathcal{G}=(V,E,p,w)$, a set $S \subseteq V$ and a vertex $u \in V$, the expected domination by the set $S$ on the vertex $u$, denoted by $\mathcal{C}(u,S)$, is the product of the weight $w(u)$ and the probability that $u$ has a neighbor in $S$. The expected domination by the set $S$ on $V$ is $\mathcal{C}(V,S) = \sum_{u \in V}\mathcal{C}(u,S)$. The computation of the function $\mathcal{C}$ is specific to the distribution model. We refer to the budgeted dominating set problem on uncertain graphs with the distribution model $\mathcal{M}$ as the probabilistic budgeted dominating set-$\mathcal{M}$ (PBDS-$\mathcal{M}$) problem. For a distribution model $\mathcal{M}$, the input to the PBDS-$\mathcal{M}$ problem consists of a vertex-weighted uncertain graph $\mathcal{G} = (V, E, p, w)$, an integer $k$, and a target domination value $t \in \mathbb{Q}$. The objective is to decide whether there exists a $k$-sized vertex subset $S$ whose expected domination on $V$ is at least $t$.
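
Under the independent distribution (IP) model, the expected domination $\mathcal{C}(u,S)$ has a closed form that the following sketch (our illustration, with our own data layout) evaluates directly: the probability that $u$ has a neighbor in $S$ is one minus the product of the non-existence probabilities of the edges from $u$ into $S$.

```python
# Hedged sketch: expected domination C(V, S) under the IP model, where each
# edge e exists independently with probability p[e].
def expected_domination(adj, p, w, S):
    # adj: dict vertex -> set of neighbors; p: dict frozenset({u,v}) -> prob;
    # w: dict vertex -> weight; S: set of chosen vertices.
    total = 0.0
    for u in adj:
        miss = 1.0  # probability that no edge from u into S exists
        for v in adj[u] & S:
            miss *= 1.0 - p[frozenset((u, v))]
        total += w[u] * (1.0 - miss)
    return total
```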

First, we study the PBDS-IP problem on uncertain graphs. We show that the PBDS-IP problem is NP-complete even on uncertain trees with a diameter of at most four. Moreover, the PBDS-IP problem is W[1]-hard for the parameter solution size on trees.
We complement the hardness result by providing an FPTAS for the same on uncertain trees. Then, we consider a variant of the above problem where all the edge probabilities are identical.
We show that the problem is W[1]-hard for the parameter pathwidth of the input uncertain graph. We complement our hardness result by providing an FPT algorithm for the combined parameters treewidth of the input uncertain graph and the solution size.

Next, we study the PBDS-LRO problem on uncertain graphs. Initially, we consider the case where the vertex demands are non-negative. We show that the PBDS-LRO problem admits an FPT algorithm for the parameter treewidth of the input uncertain graph.
Then, we use the above result to obtain an FPT algorithm for the same problem on planar uncertain graphs for the parameter solution size. In contrast, we show that if the vertex demands can be negative, the problem turns out to be W[1]-hard for the parameter vertex cover number of the input uncertain graph. We complement the lower bound by proving that the problem is FPT when parameterized by the treewidth and the maximum vertex degree of the input uncertain graph.

Malin Rau (Hamburg/Bremen) 

October 26, 2022 at 09:00 AM in room MZH 3150

Title: Asynchronous Opinion Dynamics in Social Networks

Abstract: Opinion spreading in a society decides the fate of elections, the success of products, and the impact of political or social movements.
A prominent model to study opinion formation processes is due to Hegselmann and Krause. It has the distinguishing feature that stable states do not necessarily show consensus, i.e., the population of agents might not agree on the same opinion.

We focus on the social variant of the Hegselmann-Krause model. There are $n$ agents, which are connected by a social network. Their opinions evolve in an iterative, asynchronous process, in which agents are activated one after another at random. When activated, an agent adopts the average of the opinions of its neighbors having a similar opinion (where similarity of opinions is defined using a parameter $\varepsilon$). Thus, the set of influencing neighbors of an agent may change over time. To the best of our knowledge, social Hegselmann-Krause systems with asynchronous opinion updates have only been studied with the complete graph as social network.

We show that such opinion dynamics are guaranteed to converge for any social network. We provide an upper bound of $O(n|E|^2 (\varepsilon/\delta)^2)$ on the expected number of opinion updates until convergence to a stable state, where $|E|$ is the number of edges of the social network, and $\delta$ is a parameter of the stability concept. For the complete social network we show a bound of $O(n^3(n^2 + (\varepsilon/\delta)^2))$ that represents a major improvement over the previously best upper bound of $O(n^9 (\varepsilon/\delta)^2)$.
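
For illustration, a minimal simulation step of the asynchronous social Hegselmann-Krause dynamics as described above (our own reading of the update rule; whether the active agent counts itself is a convention we assume):

```python
import random

# Hedged simulation sketch: one asynchronous update of the social
# Hegselmann-Krause model. A uniformly random agent is activated and adopts
# the average opinion of its eps-close neighbors (itself included, by
# assumption). neighbors[i] is a set of adjacent agents.
def hk_step(opinions, neighbors, eps):
    i = random.randrange(len(opinions))
    close = [j for j in neighbors[i] | {i}
             if abs(opinions[j] - opinions[i]) <= eps]
    opinions[i] = sum(opinions[j] for j in close) / len(close)
    return opinions
```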

Theo Thiery (Queen Mary University of London) 

October 19, 2022 at 11:00 AM in room MZH 3150

Title: A sqrt(3)-approximation for Weighted 3-Dimensional Matching

Abstract: In this talk, we will be solving a basic and fundamental combinatorial optimization problem, known as Weighted k-Set Packing. Given a collection of sets, each of which contains up to k elements, we would like to find a maximum-weight collection of sets that are pairwise non-intersecting. One can picture the problem simply as finding a maximum weight matching in a hypergraph where hyperedges have up to k vertices. For k=3, this problem is famously known for being NP-complete and listed in Karp's 21 NP-complete problems. Hence, we resort to approximation algorithms.

For 20 years, the best approximation factor for this problem was $(k+1)/2$, due to Berman, until a recent improvement to $(k+1)/2 - 10^{-8}$. For the special case of Weighted 3-Dimensional Matching, this implies an approximation factor of 1.99999999. In this talk, I will describe how to improve substantially over the state-of-the-art result and obtain an approximation factor of sqrt(3) ≈ 1.7321 for k=3.

The plan is to give a friendly introduction to the problem and present an (arguably) beautiful proof of Berman's algorithm, which we will then use as a foundation to obtain further improvement. 

Lars Rohwedder (Maastricht) 

August 24, 2022 at 01:00 PM in room MZH 6200

Title: Minimizing Weighted Flow Time on a Single Machine

Abstract: Flow time is a scheduling objective that is notoriously difficult to optimize. I will briefly review recent developments and open questions in the three flow time related (offline) problems that I find most intriguing: minimizing the maximum flow time on unrelated machines, minimizing the maximum flow time on identical machines, and minimizing the weighted sum of flow times on a single machine.
For the last problem, a series of papers recently settled its complexity completely by exhibiting a PTAS (joint work with Alexander Armbruester and Andreas Wiese). For the rest of the talk I will give a full proof of a constant approximation in pseudopolynomial time (which can be turned into polynomial time with a known reduction) and I will sketch the ideas involved in improving this to a PTAS.

Lin Chen (Texas Tech University)

July 12, 2022 at 09:00 AM in room MZH 6200

Title: Approximation Algorithms for Interdiction Problem with Packing Constraints

Abstract: We study a bilevel optimization problem which is a zero-sum Stackelberg game. In this problem, there are two players, a leader and a follower, who pick items from a common set. Both the leader and the follower have their own (multi-dimensional) budgets. Each item is associated with a profit, which is the same for the leader and the follower, and consumes the leader's (follower's) budget if it is selected by the leader (follower). The leader and the follower select items sequentially: first, the leader selects items within the leader's budget. Then the follower selects items from the remaining items within the follower's budget. The goal of the leader is to minimize the maximum profit that the follower can obtain. A special case of our problem is the bilevel knapsack problem studied by Caprara et al. [SIAM Journal on Optimization, 2014], where the dimensions of the leader's budget and the follower's budget are both 1. We obtain the following results: (i) If the dimension of the follower's budget is 1 and the dimension of the leader's budget is a constant, then we obtain a PTAS. This is the first PTAS even for the special case of bilevel knapsack. (ii) If the dimensions of the follower's budget and the leader's budget are both constant, we give an $O(1)$-approximation algorithm. (iii) If the dimensions of the follower's budget and the leader's budget are both part of the input, we obtain an $O(1)$-approximation algorithm with $O(1)$-resource augmentation; that is, we give an algorithm that returns a solution which exceeds the given leader's budget by an $O(1)$ factor, and whose objective value is within an $O(1)$ factor of the optimal objective value under the original leader's budget.

Alexander Lindermayr (University of Bremen)

July 05, 2022 at 10:15 AM in room MZH 3150

Title: Permutation Predictions for Non-Clairvoyant Scheduling

Abstract: In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements with the objective to minimize the total (weighted) completion time. We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in online algorithm design. While previous works used predictions on processing requirements, we propose a new prediction model that provides a relative order of jobs, which can be seen as predicting algorithmic actions rather than parts of the unknown input. We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees, and that they are learnable in both theory and practice. We generalize the algorithmic framework proposed in the seminal paper by Kumar et al. (NeurIPS'18) and present the first learning-augmented scheduling results for weighted jobs and unrelated machines. We demonstrate in empirical experiments the practicability and superior performance compared to the previously suggested single-machine algorithms.
This is joint work with Nicole Megow.

Vijay Ramamoorthi (IIT Madras)

June 28, 2022 at 10:15 AM, virtually over Zoom

Title: Complexity of Domination under Uncertainty

Abstract: In the real world, many complex objects are modeled as graphs. For instance, social networks, protein interaction networks, and chemical compounds are modeled as graphs. In some cases, uncertainty is inherent in these networks for various reasons. It is convenient to model such networks with uncertain data as uncertain graphs. An uncertain graph, denoted by $\mathcal{G}$, is a triple that consists of a vertex set $V$, an edge set $E$ and a probability function $p:E\to [0,1]$ that assigns to each $e \in E$ an existential probability $p(e)$. An uncertain graph $\mathcal{G}$ implicitly denotes the probability space of its edge subgraphs, called possible worlds. A distribution model uniquely determines a probability distribution of the possible worlds. In this thesis, we study the budgeted dominating set problem on uncertain graphs with two well-studied distribution models, namely (1) the linear reliability ordering (LRO) model [HassinRS09, HassinRS17] and (2) the independent distribution (IP) model [KhanYC18].

Given a vertex-weighted uncertain graph $\mathcal{G}=(V,E,p,w)$, a set $S \subseteq V$ and a vertex $u \in V$, the expected domination by the set $S$ on the vertex $u$, denoted by $\mathcal{C}(u,S)$, is the product of the weight $w(u)$ and the probability that $u$ has a neighbor in $S$. The expected domination by the set $S$ on $V$ is $\mathcal{C}(V,S) = \sum_{u \in V}\mathcal{C}(u,S)$. The computation of the function $\mathcal{C}$ is specific to the distribution model. We refer to the budgeted dominating set problem on uncertain graphs with the distribution model $\mathcal{M}$ as the probabilistic budgeted dominating set-$\mathcal{M}$ (PBDS-$\mathcal{M}$) problem. For a distribution model $\mathcal{M}$, the input to the PBDS-$\mathcal{M}$ problem consists of a vertex-weighted uncertain graph $\mathcal{G} = (V, E, p, w)$, an integer $k$, and a target domination value $t \in \mathbb{Q}$. The objective is to decide whether there exists a $k$-sized vertex subset $S$ whose expected domination on $V$ is at least $t$.

First, we study the PBDS-LRO problem on uncertain graphs. Initially, we consider the case where the vertex demands are non-negative. We show that the PBDS-LRO problem admits an FPT algorithm for the parameter treewidth of the input uncertain graph.
Then, we use the above result to obtain an FPT algorithm for the same problem on planar uncertain graphs for the parameter solution size. In contrast, we show that if the vertex demands can be negative, the problem turns out to be W[1]-hard for the parameter vertex cover number of the input uncertain graph. We complement the lower bound by proving that the problem is FPT when parameterized by the treewidth and the maximum vertex degree of the input uncertain graph.

Next, we study the PBDS-IP problem on uncertain graphs. We show that the PBDS-IP problem is NP-complete even on uncertain trees with a diameter of at most four. Moreover, the PBDS-IP problem is W[1]-hard for the parameter solution size on trees.
We complement the hardness result by providing an FPTAS for the same on uncertain trees. Then, we consider a variant of the above problem where all the edge probabilities are identical.
We show that the problem is W[1]-hard for the parameter pathwidth of the input uncertain graph. We complement our hardness result by providing an FPT algorithm for the combined parameters treewidth of the input uncertain graph and the solution size.


HassinRS09 -- Hassin, R., R. Ravi, and F. S. Salman, Tractable Cases of Facility Location on a Network with a Linear Reliability Order of Links. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009. ISBN 978-3-642-04128-0, 275–276.

HassinRS17 -- Hassin, R., R. Ravi, and F. S. Salman (2017). Multiple facility location on a network with linear reliability order of edges. Journal of Combinatorial Optimization, 1–25. ISSN 1573-2886.

KhanYC18 -- Khan, A., Y. Ye, and L. Chen, On Uncertain Graphs. Synthesis Lectures on Data Management. Morgan & Claypool Publishers, 2018. https://doi.org/10.2200/S00862ED1V01Y201807DTM048.

Max Deppert (University of Kiel)

June 27, 2022 at 04:00 PM, virtually over Zoom

Title: Load Balancing: The Long Road from Theory to Practice

Abstract: There is a long history of approximation schemes for the problem of scheduling jobs on identical machines to minimize the makespan. Such a scheme grants a (1+ϵ)-approximate solution for every ϵ > 0, but the running time grows exponentially in 1/ϵ. For a long time, these schemes seemed like a purely theoretical concept. Even solving instances for moderate values of ϵ seemed completely illusory. In an effort to bridge theory and practice, we refine recent ILP techniques to develop the fastest known approximation scheme for this problem. An implementation of this algorithm reaches values of ϵ lower than 2/11 ≈ 18.2% within a reasonable timespan. This is the approximation guarantee of MULTIFIT, which, to the best of our knowledge, has the best proven guarantee of any non-scheme algorithm.
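
Since MULTIFIT serves as the benchmark here, a compact sketch of it (the standard textbook formulation, our code, not from the talk): binary-search a common machine capacity and test each candidate with first-fit-decreasing.

```python
# Hedged sketch of the classical MULTIFIT heuristic: binary search on a
# capacity bound, feasibility-tested by first-fit-decreasing (FFD).
def ffd_fits(jobs, m, cap):
    loads = [0.0] * m
    for p in sorted(jobs, reverse=True):
        for i in range(m):
            if loads[i] + p <= cap:
                loads[i] += p
                break
        else:
            return False  # job p fits on no machine under this capacity
    return True

def multifit(jobs, m, rounds=20):
    lo = max(max(jobs), sum(jobs) / m)      # no schedule can beat this bound
    hi = max(max(jobs), 2 * sum(jobs) / m)  # FFD is known to succeed here
    for _ in range(rounds):
        mid = (lo + hi) / 2
        if ffd_fits(jobs, m, mid):
            hi = mid
        else:
            lo = mid
    return hi  # a capacity for which FFD produces a feasible schedule
```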

Felix Hommelsheim (University of Bremen)

June 21, 2022 at 10:15 AM in room MZH 3150

Title: Fault-Tolerant Edge-Disjoint s-t Paths – Beyond Uniform Faults

Abstract: The Edge-disjoint s-t Paths Problem (s-t EDP) is a classical network design problem whose goal is to connect, for some k ≥ 1, two given vertices of a graph under the condition that any k − 1 edges of the graph may fail. We extend the simple uniform failure model of the s-t EDP as follows: the edge set of the graph is partitioned into vulnerable and safe edges, and a set of at most k vulnerable edges may fail, while safe edges do not fail. In particular, we study the Fault-Tolerant Path (FTP) problem, the counterpart of the Shortest s-t Path problem in this non-uniform failure model, as well as the Fault-Tolerant Flow (FTF) problem, the counterpart of s-t EDP. We present complexity results alongside exact and approximation algorithms for both problems. We emphasize the vast increase in complexity of the problems compared to s-t EDP.

Linda Cook (KAIST Korea)

June 10, 2022 at 09:00 AM virtually over Zoom

Title: Structural graph theory and algorithms: Detecting a long even hole and other problems

Abstract: We call a subgraph of a graph G induced if its vertices are a subset of the vertices of G and it contains every edge of G with both endpoints in this subset. Forbidding certain induced subgraphs has deep implications which can be exploited to solve algorithmic questions on graphs. In this talk I will present some of my algorithmic results that use this approach.


I will first discuss detecting certain types of "holes" in a graph. We call an induced cycle of length at least four a hole, and the parity of a hole is the parity of its length. In 1991, Bienstock showed it is NP-hard to test whether a graph G has an even (or odd) hole through a predetermined vertex of G. In 2002, Conforti, Cornuéjols, Kapoor and Vušković gave a polynomial-time algorithm to test whether a graph contains an even hole by applying their theorem about the structure of graphs that do not contain even holes. In 2003, Chudnovsky, Kawarabayashi and Seymour provided a simpler and slightly faster algorithm to test whether a graph contains an even hole. In 2019, Chudnovsky, Scott, Seymour and Spirkl provided a polynomial-time algorithm to test whether a graph contains an odd hole. Later that year, Chudnovsky, Scott and Seymour strengthened this result by providing a polynomial-time algorithm to test whether a graph contains an odd hole of length at least k for any fixed integer k ≥ 5. I will present a polynomial time algorithm to determine whether a graph has an even hole of length at least k for a given k ≥ 4 (joint work with Paul Seymour).


I will conclude the talk by discussing distributed graph algorithms. An outerplanar graph is a planar graph that can be embedded on the plane such that all vertices lie on the outer face. I will briefly present a tight approximation algorithm for the minimum dominating set of an outerplanar graph in a distributed setting called the "LOCAL model" (joint with Marthe Bonamy, Carla Groenland, Alexandra Wesolek). 

Antonios Antoniadis (University of Twente)

June 7, 2022 at 10:15 AM virtually over Zoom

Title: Learning-augmented algorithms via algorithm-switching

Abstract: Learning-augmented algorithms is an emerging area attempting to combine the effectiveness of machine learning approaches on practically relevant inputs with the performance guarantees provided by classical worst-case analysis of online algorithms. For many problems, machine learning approaches can be readily applied to generate predictions about the structure of the input. An algorithm can then be designed to incorporate such predictions in its decision process and obtain improved performance as long as these predictions are sufficiently accurate. However, in some cases the predictions may be highly inaccurate, and decision-making systems that are based on such predictors need to achieve a decent performance in that case too.

Several algorithmic results in the area can, at a high level, be interpreted as a meta-algorithm for "combining" a number of different algorithms at runtime. The individual algorithms usually trust the predictions to a different degree, ranging from blind trust to completely ignoring them. How these algorithms are combined generally depends on the particular problem at hand and how well each individual algorithm performed on the input encountered so far. In this talk, we present different variations of this approach from the literature, highlight differences between these variations, and discuss which properties of problems and algorithms were crucial in the analysis of each particular use case.

Alexandra Lassota (EPFL)

May 24, 2022 at 11:00 AM virtually over Zoom.

Title: Cardinality Constrained Scheduling in Online Models

Abstract: Makespan minimization on parallel identical machines is a classical and intensively studied problem in scheduling, and a classic example for online algorithm analysis with Graham's famous list scheduling algorithm dating back to the 1960s. In this problem, jobs arrive over a list and upon an arrival, the algorithm needs to assign the job to a machine. The goal is to minimize the makespan, that is, the maximum machine load. In this paper, we consider the variant with an additional cardinality constraint: the algorithm may assign at most k jobs to each machine, where k is part of the input. While the offline (strongly NP-hard) variant of cardinality constrained scheduling is well understood and an EPTAS exists here, no non-trivial results are known for the online variant. We fill this gap by making a comprehensive study of various different online models. First, we show that there is a constant competitive algorithm for the problem and further, present a lower bound of 2 on the competitive ratio of any online algorithm. Motivated by the lower bound, we consider a semi-online variant where upon arrival of a job of size p, we are allowed to migrate jobs of total size at most a constant times p. This constant is called the migration factor of the algorithm. Algorithms with small migration factors are a common approach to bridge the performance of online algorithms and offline algorithms. One can obtain algorithms with a constant migration factor by rounding the size of each incoming job and then applying an ordinal algorithm to the resulting rounded instance. With this in mind, we also consider the framework of ordinal algorithms and characterize the competitive ratio that can be achieved using the aforementioned approaches. More specifically, we show that in both cases, one can get a competitive ratio that is strictly lower than 2, which is the bound from the standard online setting. On the other hand, we prove that no PTAS is possible.
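
For intuition, the natural online baseline (our sketch, not the paper's constant-competitive algorithm) extends Graham's list scheduling by respecting the cardinality bound: each arriving job goes to the least-loaded machine that still has fewer than k jobs.

```python
# Hedged baseline sketch: list scheduling under a cardinality constraint.
# loads[i] is machine i's current load, counts[i] its number of jobs.
def assign(loads, counts, k, p):
    open_machines = [i for i in range(len(loads)) if counts[i] < k]
    if not open_machines:
        raise ValueError("instance has more than m * k jobs")
    i = min(open_machines, key=lambda i: loads[i])  # least-loaded open machine
    loads[i] += p
    counts[i] += 1
    return i
```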

Torben Schürenberg

May 24, 2022 at 10:00 AM in room MZH 3150

Title: Games of competing patterns in random sequences: Applications of my master thesis in Algorithmic Game Theory

Abstract: In this 25 to 30 minute talk, we will look at various games in which competing patterns in a realization of a random sequence race to appear first. The results of my master thesis will allow us to calculate the winning probability of each pattern. Based on these results, we will define a set of two-player games and examine exemplary representatives for stable states. This talk will provide an introduction to basic concepts of Algorithmic Game Theory such as the Nash equilibrium and the concept of a best response.
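
To get a feeling for such games, a Monte Carlo sketch (ours, not from the thesis) estimates the probability that one pattern precedes another in an i.i.d. uniform bit sequence; it assumes neither pattern is a suffix of the other.

```python
import random

# Hedged Monte Carlo sketch: estimate the probability that pattern a appears
# strictly before pattern b in a uniform random bit string.
def first_appearance_prob(a, b, trials=100_000):
    window_len = max(len(a), len(b))
    wins = 0
    for _ in range(trials):
        window = ""
        while True:
            window = (window + random.choice("01"))[-window_len:]
            if window.endswith(a):
                wins += 1
                break
            if window.endswith(b):
                break
    return wins / trials

print(first_appearance_prob("00", "10"))  # classic example: roughly 1/4
```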


Kim-Manuel Klein (University of Bremen)

April 26, 2022 at 10:15 AM in room MZH 3150

Abstract: Consider non-negative integral solutions $x \in \mathbb{N}^{n+1}$ to the equation $a_0 x_0 + \dots + a_n x_n = t$. In the so-called unbounded subset sum problem, the objective is to decide whether such a solution exists, whereas in the Frobenius problem, the objective is to compute the largest $t$ such that there is no such solution.
We study the algorithmic complexity of the unbounded subset sum problem, the Frobenius problem, and a generalization of both. More precisely, we study pseudo-polynomial time algorithms with a running time that depends on the smallest number $a_0$ or, respectively, the largest number $a_n$.
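
Both problems can be handled by the classical residue-class ("Dijkstra on the smallest coin") approach, which is the regime such pseudo-polynomial running times refer to; below is a sketch of that standard method (not the talk's improved algorithms), under the assumption gcd(a_0, ..., a_n) = 1.

```python
import heapq

# Hedged sketch: for each residue r mod a[0], compute the smallest
# representable value d[r] with d[r] % a[0] == r via a shortest-path
# computation on the residue graph. This answers both problems above.
def smallest_representable(a):
    a0, dist = a[0], {0: 0}
    pq = [(0, 0)]
    while pq:
        d, r = heapq.heappop(pq)
        if d > dist.get(r, float("inf")):
            continue
        for ai in a[1:]:
            nd, nr = d + ai, (r + ai) % a0
            if nd < dist.get(nr, float("inf")):
                dist[nr] = nd
                heapq.heappush(pq, (nd, nr))
    return [dist[r] for r in range(a0)]  # assumes gcd of all a[i] is 1

def unbounded_subset_sum(a, t):
    # t is representable iff it is at least the smallest value in its class.
    return t >= smallest_representable(a)[t % a[0]]

def frobenius(a):
    return max(smallest_representable(a)) - a[0]

print(frobenius([3, 5]))  # -> 7, the largest value not representable by 3 and 5
```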

Ivan Bliznets (HSE University)

April 20, 2022 at 02:30 PM virtually over Zoom

Abstract: In this talk, we will discuss the connections between different algorithmic paradigms. It is well known that the existence of parameterized algorithms is equivalent to kernelization. In a recent work, Fomin et al. [JACM'19] established a remarkable connection between parameterized algorithms and exact exponential-time algorithms. We complement these results with connections between parameterized algorithms and approximation algorithms.


We discuss an interplay between approximation and kernelization for a large class of Graph Modification problems. We reveal the surprising similarity between these notions for H-free Edge Deletion/Completion/Editing problems.


We continue this line of research by establishing a connection between inapproximability and lower bounds for exact/parameterized algorithms. We use this connection to obtain essentially optimal lower bounds for Minimum Fill-in, Interval Completion, and Proper Interval Completion problems.

Rudy Zhou (CMU)

March 22, 2022 at 02:00 PM virtually over Zoom

Abstract: This talk considers the basic problem of scheduling jobs online with preemption to maximize the number of jobs completed by their deadline on m identical machines. The main result is an O(1)-competitive deterministic algorithm for any number of machines m > 1. This is joint work with Benjamin Moseley, Kirk Pruhs and Clifford Stein.

Anastasios Stefanou (University of Bremen)

March 1, 2022 at 11:00 AM virtually over Zoom

Abstract: We devise a hierarchical representation of phylogenetic networks over a set of taxa X, called the cliquegram, which generalizes the dendrogram representation of phylogenetic tree ultrametrics. We show that cliquegrams form a lattice. Furthermore, we show that the cliquegram lattice model is a faithful model for solving the Phylogenetic Network Reconstruction Problem (PNRP): Given a set of trees there is a unique minimal network that contains them. Moreover, we develop a zigzag persistent homology theory for cliquegrams relative to a choice of a taxon x in X, called taxa-decorated persistent homology. Generalizations of these ideas (i.e. the cluttergram) from the setting of networks to the setting of filtrations are also developed. The algorithms implementing those constructions are discussed and their implementations are provided in Python. Finally, we examine our theory on certain benchmark biological datasets. This is joint work with Pawel Dlotko and Krzysztof Bartoszek.


Jens Schlöter (University of Bremen)

February 8, 2022 at 11:00 AM virtually over Zoom

Abstract: Given a hypergraph with uncertain node weights following known probability distributions, we study the problem of querying as few nodes as possible until the identity of a node with minimum weight can be determined for each hyperedge. Querying a node has a cost and reveals the precise weight of the node, drawn from the given probability distribution. Using competitive analysis, we compare the expected query cost of an algorithm with the expected cost of an optimal query set for the given instance. For the general case, we give a polynomial-time f(α)-competitive algorithm, where f(α) ∈ [1.618+ε,2] depends on the approximation ratio α for an underlying vertex cover problem. We also show that no algorithm using a similar approach can be better than 1.5-competitive. Furthermore, we give polynomial-time 4/3-competitive algorithms for bipartite graphs with arbitrary query costs and for hypergraphs with a single hyperedge and uniform query costs, with matching lower bounds. Joint work with Evripidis Bampis, Christoph Dürr, Thomas Erlebach, Murilo Santos de Lima and Nicole Megow.

Alexander Lindermayr (University of Bremen)

January 18, 2022 at 11:00 AM virtually over Zoom

Abstract: We study the fundamental online k-server problem in a learning-augmented setting. While in the traditional online model an algorithm has no information about the request sequence, we assume that some advice (e.g., machine-learned predictions) on an algorithm's decisions is given. There is, however, no guarantee on the quality of the prediction, and it might be far from being correct.
    
Our main result is a learning-augmented variation of the well-known Double Coverage algorithm for k-server on the line (Chrobak et al., SIDMA 1991), in which we integrate predictions as well as our trust in their quality. We give an error-dependent competitive ratio, which is a function of a user-defined confidence parameter and interpolates smoothly between an optimal consistency (the performance in case all predictions are correct) and the best-possible robustness regardless of the prediction quality. When given good predictions, we improve upon known lower bounds for online algorithms without advice.
We further show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff, within a class of deterministic algorithms respecting local and memoryless properties.

Our algorithm outperforms a previously proposed (more general) learning-augmented algorithm. It is remarkable that the previous algorithm crucially exploits memory, whereas our algorithm is memoryless.
Finally, we demonstrate in experiments the practicability and the superior performance of our algorithm on real-world data. Joint work with Nicole Megow and Bertrand Simon.
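
For context, here is a sketch of the classical Double Coverage rule on the line that the result builds on (our code; the learning-augmented modifications from the talk are not included): the two servers adjacent to the request move toward it at equal speed until one reaches it.

```python
# Hedged sketch of classical Double Coverage for k-server on the line.
# servers: list of positions; returns the updated positions.
def double_coverage(servers, request):
    s = sorted(servers)
    left = [x for x in s if x <= request]
    right = [x for x in s if x >= request]
    if not left:                      # request lies left of every server
        s[s.index(right[0])] = request
    elif not right:                   # request lies right of every server
        s[s.index(left[-1])] = request
    else:
        l, r = left[-1], right[0]
        d = min(request - l, r - request)  # both neighbors move distance d
        s[s.index(r)] = r - d
        s[s.index(l)] = l + d
        # the nearer neighbor has now reached the request
    return s

print(double_coverage([0, 10], 3))  # -> [3, 7]
```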

Michelle Hannekum (University of Bremen) 

December 20, 2021 at 11:00 AM virtually over Zoom (bachelor's thesis colloquium)

Pascal Bogumil (University of Bremen)

December 14, 2021 at 04:30 PM virtually over Zoom (bachelor's thesis colloquium)

Nicole Schirrmacher and Alexandre Vigny (University of Bremen)

December 14, 2021 at 10:15 AM in room MZH 3150

Abstract: Connectivity under vertex failures is the problem of testing whether two vertices are still connected after the deletion of several given vertices. We will explain how to solve this problem very efficiently. Our method can solve many more problems, all of which are expressible in a recently defined logic.
First-order logic (FO) can express many algorithmic problems on graphs, but fails to express whether two vertices are connected. We define a new logic (separator logic) by enriching FO with connectivity predicates $\mathrm{conn}_k(x, y, z_1, \dots, z_k)$ that hold true in a graph if there exists a path between $x$ and $y$ after the deletion of $z_1, \dots, z_k$.
In this talk we will first present a study of the expressive power of this new logic. We will then present algorithmic results for connectivity under vertex failures, and for the model-checking of separator logic.

Felix Hommelsheim (University of Bremen)

December 07, 2021 at 10:15 AM in room MZH 3150

Abstract: Suppose we are given a bipartite graph that admits a perfect matching and an adversary may delete any edge from the graph with the intention of destroying all perfect matchings. We consider the task of adding a minimum-cost edge set to the graph such that the adversary never wins. We show that this problem is equivalent to covering a digraph with non-trivial strongly connected components at minimal cost. We provide efficient exact and approximation algorithms for this task. In particular, for the unit-cost problem, we give a log_2 n-factor approximation algorithm and a polynomial-time algorithm for chordal bipartite graphs. Furthermore, we give a fixed-parameter algorithm for the problem parameterized by the treewidth of the input graph. For general non-negative weights we give tight upper and lower approximation bounds relative to the Directed Steiner Forest problem. Additionally, we prove a dichotomy theorem characterizing minor-closed graph classes which allow for a polynomial-time algorithm. To obtain our results, we exploit a close relation to the classical Strong Connectivity Augmentation problem as well as directed Steiner problems.

Mario Grobler (University of Bremen)

November 23, 2021 at 10:15 AM in room MZH 3150

Abstract: We study the connections between the notions of combinatorial discrepancy and graph degeneracy. In particular, we prove that the maximum discrepancy of the neighborhood set system of a subgraph of a graph $G$ is sandwiched between $\Omega(\log\deg(G))$ and $O(\deg(G))$, where $\deg(G)$ denotes the degeneracy of $G$. We extend this result to inequalities relating weak coloring numbers and discrepancy of graph powers and deduce a characterization of bounded expansion classes.

Then, we switch to a model-theoretical point of view, introduce pointer structures, and study their relations to bounded expansion classes. We deduce that a monotone class has bounded expansion if and only if all the set systems definable in this class have bounded hereditary discrepancy.

Using known bounds on the VC-density of set systems definable in nowhere dense classes, we also give a characterization of nowhere dense classes in terms of discrepancy and even show the non-existence of a quantifier elimination scheme for these classes.

Elias Pitchmann (University of Bremen)

November 2, 2021 at 11:00 AM in room MZH 3150

Abstract: The secretary problem is well known in the field of online algorithms and optimal stopping theory. Variants of the problem deal with multiple arrivals per candidate. In the model of my master's thesis, declined candidates do not rearrive with certainty but decide whether to come back based on a probabilistic parameter. We distinguish between an unbounded and a bounded number of arrivals per candidate.
Optimal online algorithms for these variants are characterized via dynamic programming. This technique provides an efficient method to calculate the exact behavior of these algorithms as well as their probabilities of success. The algorithms can be described in a notably concise way using thresholds, which determine which candidates are desirable to accept at which point of the process. Similar observations hold when replacing the constant probability of rearrival by more general functions.

Jakob Hackstein (University of Bremen)

November 2, 2021 at 10:00 AM virtually over Zoom (Bachelor colloquium)

Jens Schlöter (University of Bremen)

October 19, 2021 at 10:00 AM in room MZH 3150

Abstract: Exploring unknown environments is a fundamental task in many domains, e.g., robot navigation, network security, and internet search. We initiate the study of a learning-augmented variant of the classical, notoriously hard online graph exploration problem by adding access to machine-learned predictions. We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm and significantly outperforms any known online algorithm if the prediction is of high accuracy while maintaining good guarantees when the prediction is of poor quality. We provide theoretical worst-case bounds that gracefully degrade with the prediction error, and we complement them by computational experiments that confirm our results. Further, we extend our concept to a general framework to robustify algorithms. By interpolating carefully between a given algorithm and NN, we prove new performance bounds that leverage the individual good performance on particular inputs while establishing robustness to arbitrary inputs.

This is joint work with Franziska Eberle, Alexander Lindermayr, Nicole Megow and Lukas Nölke.
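
For background, below is a minimal sketch of the Nearest Neighbor rule that the talk's algorithm integrates predictions into. It is simplified: it assumes the whole weighted graph is already known (actual online exploration reveals vertices gradually) and does not return to the start; the toy graph is illustrative.

```python
# Nearest Neighbor exploration sketch: from the current vertex, always
# move to the closest unexplored vertex, with distances computed by
# Dijkstra on the known graph.
import heapq

def dijkstra(graph, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def nearest_neighbor_tour(graph, start):
    unexplored = set(graph) - {start}
    tour, current, total = [start], start, 0.0
    while unexplored:
        dist = dijkstra(graph, current)
        nxt = min(unexplored, key=lambda v: dist[v])
        total += dist[nxt]
        unexplored.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour, total

# toy graph: adjacency lists of (neighbor, weight) pairs
g = {0: [(1, 1.0), (2, 4.0)], 1: [(0, 1.0), (2, 2.0)], 2: [(0, 4.0), (1, 2.0)]}
print(nearest_neighbor_tour(g, 0))  # -> ([0, 1, 2], 3.0)
```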

Daniel Schmand (University of Bremen)
Abstract: Inspired by the increasing popularity of Swiss-system tournaments in sports, we study the problem of predetermining the number of rounds that can be guaranteed in a Swiss-system tournament. For tournaments with n players and match sizes of k>=2 players, we prove that we can always guarantee n/(k(k-1)) rounds even if we schedule rounds based on results in previous rounds, under the constraint that no player faces the same opponent more than once. We show that this bound is tight. This provides a simple 1/k-approximation algorithm for the social golfer problem.

We extend the results to the Oberwolfach problem. Here, a match corresponds to a group of k players positioned in a cycle and the constraint is that no participant should meet the same neighbor more than once.

This is joint work with Marc Schröder and Laura Vargas Koch.
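
To see the k = 2 case concretely (the guarantee is then n/2 rounds), the toy scheduler below repeatedly builds a round via backtracking, pairing only players who have not met; it ignores match results, so it only illustrates the no-repeat constraint, not the talk's result-based adaptive setting. It assumes an even number of players.

```python
# Toy Swiss-system scheduler for k = 2: each round is a perfect
# matching among pairs that have not played each other yet, found by
# backtracking. The talk guarantees at least n/2 rounds for k = 2.

def find_round(players, played):
    if not players:
        return []
    first = players[0]
    for partner in players[1:]:
        if frozenset({first, partner}) not in played:
            rest = find_round(
                [p for p in players if p not in (first, partner)], played)
            if rest is not None:
                return [(first, partner)] + rest
    return None  # no perfect matching avoiding old pairs exists

def swiss_rounds(n):
    played, rounds = set(), 0
    while (rnd := find_round(list(range(n)), played)) is not None:
        played.update(frozenset(match) for match in rnd)
        rounds += 1
    return rounds

print(swiss_rounds(8))  # at least 8/2 = 4 guaranteed; this greedy reaches 7
```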

Daniel Schmidt genannt Waldschmidt (TU Berlin)

Abstract: We consider the stochastic scheduling problem of minimizing the expected makespan on m parallel identical machines. While the (adaptive) list scheduling policy achieves an approximation ratio of 2, any (non-adaptive) fixed assignment policy has performance guarantee \Omega(log m/loglog m). Although the performance of the latter class of policies is worse, there are applications, e.g. in surgery scheduling, in which non-adaptive policies are desired.

We introduce two classes of policies, namely \delta-delay and \tau-shift policies, whose degree of adaptivity can be controlled by a parameter. Using the fixed assignment policy induced by LEPT as a basis, we present a policy - belonging to both classes - for which we show that it is an O(loglog m)-approximation for reasonably bounded parameters. In other words, an improvement on the performance of any fixed assignment policy can be achieved when allowing a small degree of adaptivity. Moreover, we provide a matching lower bound for any \delta-delay and \tau-shift policy when both parameters, respectively, are in the order of the expected makespan of an optimal non-anticipatory policy.

This is joint work with Guillaume Sagnol.
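
For background, here is a minimal sketch of the fixed assignment induced by LEPT: sort jobs by longest expected processing time first and greedily place each on the machine with the smallest expected load. The expected processing times in the example are illustrative placeholders.

```python
# LEPT-induced fixed assignment sketch: jobs in non-increasing order
# of expected processing time, each assigned to the machine whose
# current expected load is smallest.
import heapq

def lept_fixed_assignment(expected_times, m):
    """Return a list of m job-index lists, one per machine."""
    machines = [(0.0, i, []) for i in range(m)]  # (expected load, id, jobs)
    heapq.heapify(machines)
    for j in sorted(range(len(expected_times)),
                    key=lambda j: -expected_times[j]):
        load, i, jobs = heapq.heappop(machines)
        jobs.append(j)
        heapq.heappush(machines, (load + expected_times[j], i, jobs))
    return [jobs for _, _, jobs in sorted(machines, key=lambda t: t[1])]

print(lept_fixed_assignment([5.0, 3.0, 3.0, 2.0, 1.0], 2))
# -> [[0, 3], [1, 2, 4]]
```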

Jens Schlöter (University of Bremen)

Abstract: We study how to utilize (possibly machine-learned) predictions in a model for optimization under uncertainty that allows an algorithm to query unknown data. The goal is to minimize the number of queries needed to solve the problem. Considering fundamental problems such as finding the minima of intersecting sets of elements or sorting them, as well as the minimum spanning tree problem, we discuss different measures for the prediction accuracy and design algorithms with performance guarantees that improve with the accuracy of predictions and that are robust with respect to very poor prediction quality. We also provide new structural insights for the minimum spanning tree problem that might be useful in the context of explorable uncertainty regardless of predictions. Our results prove that untrusted predictions can circumvent known lower bounds in the model of explorable uncertainty. We complement our results by experiments that empirically confirm the performance of our algorithms.

This is joint work with Thomas Erlebach, Michael Hoffmann, Murilo S. de Lima and Nicole Megow.

I present the results of my Bachelor thesis, which is based on Lucier et al. (2013). I consider an online scheduling model with hard deadlines, slackness and jobs with different values. For this model, I present an algorithm and bound its competitive ratio. This competitive analysis is the main part of the presentation. Compared to Lucier et al. (2013), our contribution is a more detailed analysis that fills some gaps.

Machine-learned predictors, although achieving very good results for inputs resembling training data, cannot possibly provide perfect predictions in all situations. Still, decision-making systems that are based on such predictors need not only to benefit from good predictions but also to achieve a decent performance when the predictions are inadequate. In this paper, we propose a prediction setup for arbitrary metrical task systems (MTS) (e.g., caching, k-server and convex body chasing) and online matching on the line. We utilize results from the theory of online algorithms to show how to make the setup robust. Specifically for caching, we present an algorithm whose performance, as a function of the prediction error, is exponentially better than what is achievable for general MTS. Finally, we present an empirical evaluation of our methods on real world datasets, which suggests practicality.

Laura Vargas Koch (RWTH Aachen)

Abstract: Modeling traffic in road networks is a widely studied but challenging problem, especially under the assumption that drivers act selfishly. A common approach is the deterministic queuing model. The basic idea is to model traffic by a continuous flow that travels over time through a network, in which the arcs are endowed with transit times and capacities. Whenever the flow rate exceeds the capacity, the flow particles build up a queue. So far it was not possible to represent spillback and kinematic waves in this model. By introducing a storage capacity, arcs can become full and thus might block preceding arcs, i.e., spillback occurs. Moreover, we add a gap flow to represent kinematic waves. Visually, this means that on a full road cars can only drive when there is a gap in front, and these gaps travel backwards over the road. We carry over the main results of the original model to our generalization, i.e., we characterize dynamic equilibria by sequences of particular static flows, so-called spillback thin flows. This is joint work with Leon Sering.
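
To illustrate the base model (without the spillback and gap-flow extensions from the talk), here is a toy time-discretization of a single arc's point queue; the rates and capacity are illustrative.

```python
# Deterministic queuing on one arc: whenever the inflow rate exceeds
# the capacity nu, the excess accumulates in a point queue, which
# drains at rate nu once the inflow drops.

def queue_evolution(inflow, nu, dt=1.0):
    """Track queue size and outflow for a sequence of inflow rates."""
    q, history = 0.0, []
    for rate in inflow:
        outflow = min(rate + q / dt, nu)        # drain at capacity
        q = max(q + (rate - nu) * dt, 0.0)      # queue update
        history.append((q, outflow))
    return history

# inflow above capacity 1.0 for three steps, then zero
for q, out in queue_evolution([2.0, 2.0, 2.0, 0.0, 0.0], nu=1.0):
    print(f"queue={q:.1f}  outflow={out:.1f}")
```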

Alexander Lindermayr (University of Bremen)

Abstract: In 2019, Garg et al. proposed a novel algorithm for non-clairvoyant precedence constrained scheduling with weighted flow and completion time objectives. We will look at their round-robin-like algorithm, which is almost identical for both objectives, and particularly at their dual-fitting-based analysis.

What does it mean for a graph to almost be planar? Or to almost have bounded degree? On some simple graph classes, some difficult algorithmic problems become tractable. Ideally, one would like to use (or adapt) existing algorithms for graphs that are "almost" in a tractable class.

In this talk, I will discuss elimination distance to a class C, a notion introduced by Bulian and Dawar (2016). The goals of the talk are:
1) Define this notion, and discuss why it is relevant by presenting some existing results.
2) Show that we can compute the elimination distance of a given graph to the class of graphs of degree at most d (in some cases),
i.e., answer the question: "Is this graph close to a graph of bounded degree?"

The second part is the result of a collaboration with Alexander Lindermayr and Sebastian Siebertz.
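
For intuition, here is a brute-force sketch of the recursive definition of elimination distance to the class of graphs of maximum degree at most d; it is exponential-time and only meant for tiny instances, not a rendition of the algorithmic results from the talk.

```python
# ed(G) = 0 if G already has max degree <= d; if G is connected,
# delete one vertex and recurse; if G is disconnected, take the
# maximum over its connected components.

def components(vertices, edges):
    vertices, comps = set(vertices), []
    while vertices:
        stack, comp = [next(iter(vertices))], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(u for e in edges if v in e for u in e if u != v)
        vertices -= comp
        comps.append(comp)
    return comps

def elim_dist(vertices, edges, d):
    if not vertices:
        return 0
    comps = components(vertices, edges)
    if len(comps) > 1:
        return max(elim_dist(c, {e for e in edges if e <= c}, d)
                   for c in comps)
    if all(sum(1 for e in edges if v in e) <= d for v in vertices):
        return 0
    return 1 + min(elim_dist(vertices - {v},
                             {e for e in edges if v not in e}, d)
                   for v in vertices)

# A star with 4 leaves has maximum degree 4; deleting its center
# leaves isolated vertices, so its distance to max degree 2 is 1.
star = {frozenset({0, leaf}) for leaf in range(1, 5)}
print(elim_dist(set(range(5)), star, d=2))  # -> 1
```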

Consider a matching problem with two sets of participants, each of whom ranks the participants of the other set in order of their individual preferences. Using the deferred acceptance algorithm by Gale and Shapley (1962), both of the one-sided rank-maximal stable matchings can be obtained from the instance. Stability is an essential property in the setting of matchings under preferences and is often considered in the sense of fairness. Meanwhile, a matching is called rank-maximal if it maximizes the number of participants assigned to their first choice and, subject to that, their second choice, and so on.

However, in some applications two-sided rank-maximality is pursued, which the deferred acceptance algorithm cannot provide. In this talk I present the algorithm introduced in the paper by Cooper and Manlove (2020), based on the work of Irving, Leather and Gusfield (1987), that starts with the extended Gale-Shapley algorithm and terminates in an egalitarian (i.e. two-sided) rank-maximal stable matching.
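
For background, a minimal sketch of deferred acceptance (in its proposer-optimal form), the algorithm the talk's method starts from; the agents and preference lists are illustrative.

```python
# Gale-Shapley deferred acceptance: free proposers propose down their
# lists; a reviewer holds the best proposal seen so far and rejects
# the rest. Terminates with a stable matching.

def deferred_acceptance(proposer_prefs, reviewer_prefs):
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)            # currently unmatched proposers
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                             # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])          # r trades up; old partner freed
            match[r] = p
        else:
            free.append(p)                 # r rejects p
    return match

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(deferred_acceptance(men, women))  # -> {'y': 'b', 'x': 'a'}
```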

Parikh automata on finite words (PA) were first introduced by Klaedtke and Rueß. This extension of finite automata processes sequences of symbols that are labeled with vectors of natural numbers. Such a word is accepted if there is an accepting run in the automaton and the sum of the vectors lies in a given semi-linear set. In the first part, we study PA where the size of the vectors is bounded and compare the respective classes of languages to the Chomsky hierarchy. In the second part, we introduce Parikh automata on infinite words. We present two variants and characterize the respective classes of languages similar to Büchi's theorem. We conclude by giving an overview of their closure properties and common decision problems.
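
To make the acceptance condition concrete, here is a toy Parikh automaton: a single accepting state with self-loops where 'a' is labeled (1,0) and 'b' is labeled (0,1), and the semilinear set is {(n,n) : n >= 0}, so the accepted language consists of the words with equally many a's and b's. The example is illustrative and not taken from the talk.

```python
# Toy Parikh automaton: every run is accepting (one state, self-loops),
# so acceptance reduces to the summed label vector lying in the
# semilinear set {(n, n) : n >= 0}.

def pa_accepts(word):
    vec = [0, 0]
    for ch in word:
        if ch == "a":
            vec[0] += 1          # label (1, 0)
        elif ch == "b":
            vec[1] += 1          # label (0, 1)
        else:
            return False         # no transition for other symbols
    return vec[0] == vec[1]      # membership in the semilinear set

print(pa_accepts("abab"))  # True
print(pa_accepts("aab"))   # False
```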

We consider the following online problem of min-cost perfect matching with concave delays: the input is a set of requests arriving online. In the mono-chromatic setting, any two requests can be matched into a pair; in the bi-chromatic setting, each request has a polarity (either positive or negative) and only requests of different polarities can be matched into a pair. At each point in time, the algorithm must choose between matching a pair of requests or delaying them to be matched at a later time. Let D denote a concave delay function. For any request r, suppose it arrives at time a(r) and is matched with another request at time m(r). Then a cost of D(m(r) - a(r)) is charged due to delaying this request for a duration of m(r) - a(r). The target is to minimize the total delay cost produced by the algorithm, i.e., \sum_{r} D(m(r) - a(r)). In this talk, we will introduce O(1)-competitive online algorithms for both the mono-chromatic and bi-chromatic settings.

Joint work with Yossi Azar and Danny Vainstein, accepted for SODA 2021.

My current research mainly focuses on parallel-batch machines with pulse interruptions (PI). Pulse interruptions derive from availability constraints, which assume that there is an interval during which the machine is unavailable. We study unavailable intervals whose length tends to 0 and call such an interval a pulse interruption. In effect, the pulse interruptions divide the scheduling horizon into several time intervals, and we ask how to schedule jobs as batches into these intervals. We currently consider two problems: online scheduling on a single parallel-batch machine with a single pulse interruption, and online batch scheduling on parallel machines with periodic pulse interruptions.

Online optimization captures relevant real-world problems, where the input is unknown in advance and arrives over time. Therefore, classical online algorithms make no assumptions about the future, which gives their analysis a pessimistic flavor. Yet in many real-world applications we can make statements about the future using modern machine-learning approaches to predict it. In order to still achieve performance guarantees in such a setting, Lykouris and Vassilvitskii recently introduced a framework which measures the efficiency of such algorithms by analyzing them with respect to the quality of the given prediction, while making no assumptions on how these predictions are generated.

We apply this framework to the k-server problem on the line, which is a well-studied and fundamental online problem. There exists a simple, deterministic, k-competitive online algorithm, which is optimal. We propose a family of learning-augmented algorithms for this problem and bound their competitive ratio with respect to the prediction's quality, such that they break the lower bound of k given good predictions. For some algorithms of the family we also prove a general performance guarantee, independently of the prediction.

For the special case of two servers, we show that this family is in a certain sense optimal among all learning-augmented algorithms for the problem.
We also state a modification of the algorithms for tree metrics and achieve almost the same results as on the line.
For the 2-taxi problem, which was recently introduced by Coester and Koutsoupias and generalizes the 2-server problem, we present a similar approach using machine-learned predictions, and again circumvent existing lower bounds.
Finally, we empirically evaluate the proposed algorithms and illustrate their applicability in practice.

This thesis shows how online job scheduling algorithms can be improved using predictions about the processing times of jobs. We will take a closer look at the existing approach of combining the algorithms Round-Robin and Shortest Job First to improve their competitive ratio and transfer the concept to the weighted case. We establish a bound for the competitive ratio and show that the proposed algorithm performs as expected in practice by analyzing its performance across different instances in simulations. 

Andreas Wiese (Universidad de Chile)
Independent set is a fundamental problem in combinatorial optimization. While in general graphs the problem is essentially inapproximable, for many important graph classes there are approximation algorithms known in the offline setting. These graph classes include interval graphs and geometric intersection graphs, where vertices correspond to intervals/geometric objects and an edge indicates that the two corresponding objects intersect. We present the first dynamic approximation algorithms for independent set of intervals and geometric objects. They work in the fully dynamic model where in each update an interval/geometric object is inserted or deleted. Our algorithms are deterministic and have worst-case update times that are polylogarithmic for constant d and epsilon. We achieve the following approximation ratios:
- For independent set of intervals, we maintain (1+ϵ)-approximate solutions for the unweighted and the weighted case.
- For independent set of d-dimensional hypercubes we maintain (1+ϵ)2^d-approximate solutions in the unweighted case and O(2^d)-approximate solutions in the weighted case. Also, we show that for maintaining unweighted (1+ϵ)-approximate solutions one needs polynomial update time for d≥2 if the ETH holds.
- For weighted d-dimensional hyperrectangles we present a dynamic algorithm with approximation ratio (1+ϵ) log^{d−1} N, assuming that the coordinates of all input hyperrectangles are in [0,N]^d and each of their edges has length at least 1.
Joint work with Monika Henzinger and Stefan Neumann.

Giorgio Lucarelli (Grenoble INP)
A well-identified issue in algorithms and, in particular, in online computation is the weakness of the worst case paradigm. Several more accurate models have been proposed in the direction of eliminating this weakness. Resource augmentation is one of these models which has been successfully used for providing theoretical evidence for several heuristics in scheduling with good performance in practice. According to it, the algorithm is applied to a more powerful environment than that of the adversary. Several types of resource augmentation for scheduling problems have been proposed up to now, including speed augmentation, machine augmentation and more recently rejection. In this context, we propose algorithms for online scheduling problems for minimizing variations of the total weighted flow time objective based on mathematical programming and a primal-dual scheme.

Franziska Eberle (University of Bremen)
We study a fundamental online job admission problem where jobs with processing times and deadlines arrive online over time at their release dates, and the task is to determine a preemptive single-server schedule which maximizes the number of jobs that complete on time. To circumvent known impossibility results, we make a standard slackness assumption by which the feasible time window for scheduling a job is at least (1+epsilon) times its processing time, for some epsilon>0. We consider a variant of the online scheduling problem where the provider has to satisfy certain commitment requirements. These requirements arise, for example, in modern cloud-services, where customers do not want last-minute rejections of critical tasks and request an early-enough provider-side commitment to completing admitted jobs. Our main contribution is an optimal algorithm for online job admission with commitment. When a provider must commit upon starting a job, our bound is O(1/epsilon). This is best possible as there is a lower bound of Omega(1/epsilon) for online admission even without commitment. If the commitment decisions must be made before a job's slack becomes less than a delta-fraction of its size, we prove a competitive ratio of O(epsilon/((epsilon - delta) delta)) for 0<delta<epsilon. This result interpolates between commitment upon starting a job and commitment upon arrival. For the latter commitment model, it is known that no (randomized) online algorithm admits any bounded competitive ratio.

Veerle Tan-Timmermans (RWTH Aachen University)
Given two matroids M_1 and M_2 on a common ground set, some integer k, and two cost functions on the ground set of resources, we consider the optimization problem to find a basis X in M_1 and a basis Y in M_2 minimizing the aggregated cost of resources in bases X and Y, subject to either a lower bound, upper bound or equality constraint on the size of the intersection of the two bases X and Y. The problem with lower bound constraint turns out to be a generalization of the Recoverable Robust Matroid problem under interval uncertainty representation, for which the question for a strongly polynomial-time algorithm was left as an open question [Hradovich, 2017]. We show that the two problems with lower and upper bound constraints on the size of the intersection can be reduced to weighted matroid intersection, and thus be solved with a strongly polynomial-time primal-dual algorithm. The question whether the problem with equality constraint can also be solved efficiently turned out to be a lot harder. As our main result, we present a strongly polynomial, primal-dual algorithm for the problem with equality constraint on the size of the intersection. Additionally, we discuss generalizations of the problems from matroids to polymatroids, and from two to three or more matroids.

Marc Goerigk (University of Siegen)
In this talk I focus on combinatorial optimization problems under uncertainty. In the most basic (robust) setting, there is a set of cost vectors given, and the question is to find one solution such that the largest of the possible objective functions is as small as possible. A variant of this setting is two-stage robustness: Here, we only need to make a partial decision in the first stage, under known first-stage costs. Then the uncertain second-stage cost vector is revealed, and we can complete our partial solution to a full solution. While one-stage robust combinatorial problems have been studied quite thoroughly over the past two decades, progress on two-stage problems has been only recent. I give an accessible overview to those who are new to robust optimization, and will also present some of these recent results to highlight where the current research is at.

Sebastian Siebertz (University of Bremen)
The notion of nowhere denseness was introduced by Nešetřil and Ossona de Mendez and provides a robust concept of uniform sparseness of graph classes. Nowhere dense graph classes generalize many familiar classes of sparse graphs such as the class of planar graphs and classes that exclude a fixed minor or topological minor. They admit several seemingly unrelated natural characterisations that lead to strong algorithmic applications. In particular, the model-checking problem for first-order logic is fixed-parameter tractable over these classes. In this tutorial I will give an introduction to the theory of nowhere dense graph classes with a focus on different characterisations and algorithmic applications.

Lukas Nölke (University of Bremen)
In the online matching problem on the line, n requests appear online and have to be matched, irrevocably and upon arrival, to a given set of servers, all on the real line. The goal is to minimize the sum of distances from the requests to their respective servers. Despite all research efforts, it remains an intriguing open problem whether there exists a constant-competitive online algorithm. The best known online algorithm, due to Raghvendra (SOCG 2018), achieves a competitive factor of O(log n). There exists a matching lower bound of Ω(log n) (Antoniadis et al., LATIN 2018) that holds for a quite large class of online algorithms, including all known deterministic algorithms in the literature. The best known general lower bound is 9+ε (Fuchs et al., TCS 2005). In this work, we consider the increasingly popular online recourse model and show that a constant competitive factor is possible if we allow online decisions to be revoked to some extent. More precisely, we derive an O(1)-competitive algorithm for online minimum matching on the line that rematches each of the n requests at most O(log n) times. For special instances, where no more than one request appears between two servers, we obtain a substantially better result. We give a (1+ε)-competitive algorithm with O_ε(1) recourse actions per request. Incidentally, the construction of the aforementioned Ω(log n) lower bound uses such instances. This is joint work with Nicole Megow.

Franziska Eberle (University of Bremen)
We consider a model for interval scheduling with a rechargeable resource. A set of interval jobs with starting times and end times needs to be scheduled on a set of identical machines. The goal is to maximize the total weight of the scheduled jobs. In the beginning, each machine is given an identical amount of a rechargeable resource. During the processing of a job, a job-dependent amount of this resource is used. While no job is processed on a machine, its resource recharges at a linear rate. The resource of a machine is neither allowed to exceed the maximal capacity nor to become negative. This model generalizes scheduling with a (non-)renewable resource, as the linear charging rate allows for a trade-off between these two extremes. A typical application arises in the context of station-based car sharing: scheduling bookings on e-cars with a rechargeable battery. For single-machine scheduling, we show an optimal polynomial-time algorithm without weights and give an FPTAS in the presence of weights. The multiple-machine case turns out to be considerably harder, as the unit-weight case is already NP-hard for a constant number of machines. Based on the optimal single-machine algorithm, we develop an e/(e-1)-approximation if the number of machines is part of the input.

Nicole Megow (University of Bremen)
Explorable uncertainty refers to settings where parts of the input data are initially unknown, but can be obtained at a certain cost using queries. An algorithm can make queries one by one until it has obtained sufficient information to solve a given problem. The challenge lies in balancing the cost for querying and the impact on the solution quality. In this talk, I give a short overview of recent work on explorable uncertainty for combinatorial optimization problems. In particular, I will speak about network design (minimum spanning tree) and scheduling.

Miriam Schlöter (ETH Zurich)
Lattice-free gradient polyhedra are optimality certificates for mixed integer convex optimization problems. We consider how to construct these polyhedra for problems with two integer variables. A classic result of Bell, Doignon, and Scarf implies that a lattice-free gradient polyhedron exists with at most four facets. We show how to build a sequence of (not necessarily lattice-free) gradient polyhedra, each of which has at most four facets, that converges in finitely many steps to lattice-free. This construction gives a gradient descent type algorithm for problems with two integer variables. We also show that approximate gradient information is enough to guarantee convergence for strongly convex functions.

Jens Schlöter (University of Bremen)
Time-critical tasks that need to be scheduled and executed are often modeled as directed acyclic graphs (DAGs) in order to describe precedence constraints and potential parallelization between their subtasks. However, when modeling real-world applications, the execution of a subtask might depend on the evaluation of a condition, like for example in if-then-else-statements of computer programs. Thus, only a subset of the subtasks will actually be executed. In order to model such control flow instructions, conditional DAGs can be used that extend the original DAG model by allowing the usage of so-called conditional nodes, where the semantics of a conditional node is that exactly one of its successors will be processed. Using conditional DAGs leads to additional challenges regarding schedulability tests and the calculation of worst-case execution times. Even if the scheduling algorithm that will be used to schedule the task is fixed, its execution time still depends on which successors of the conditional nodes will be chosen to be processed. In this talk, we will consider the case where a conditional DAG task and the scheduling algorithm to be used - in form of a priority order over all subtasks that potentially will be executed - are given, and the goal is to compute the worst-case execution time. To this end, we will present complexity results as well as approximations for the described problem.

Victor Verdugo (London School of Economics)
In many situations, finding the optimal revenue pricing policy requires solving a hard optimisation problem. Posted price mechanisms are simple and efficiently implementable. In this talk I'll show the connection between this type of mechanisms and optimal stopping rules for online selection problems, and how guarantees are preserved from one problem to the other.

Ulrich Pferschy (University of Graz)
We consider the 0-1 Incremental Knapsack Problem (IKP) where the knapsack capacity grows over time. For each item one has to decide in which time period it should be packed, or whether it remains unpacked. Once an item is placed in the knapsack in a certain time period, it cannot be removed afterwards. The contribution of a packed item in each time period depends on its profit as well as on a time factor which reflects the importance of the period in the objective function. The problem calls for maximizing the weighted sum of the profits over the whole time horizon. In this work, we provide approximation results for IKP and its restricted variants. In some results, we rely on Linear Programming to derive approximation bounds. We also devise a Polynomial Time Approximation Scheme (PTAS) when the number of periods is considered as a constant. Under the mild and natural assumption that each item can be packed in the first time period, we discuss different approximation algorithms suited for any number of time periods and for the special case with two and three periods. Joint work with Federico Della Croce and Rosario Scatamacchia.

Jens Schlöter (University of Bremen)
Tasks that need to be scheduled and executed are often modeled as directed acyclic graphs (DAGs) in order to describe precedence constraints and potential parallelization between their subtasks. However, when modeling real-world tasks, the execution of a subtask might depend on the evaluation of a condition, like for example in if-then-else-statements of computer programs, and thus, only a subset of the subtasks will actually be executed. In order to model such control flow instructions, conditional DAGs can be used that extend the original DAG model by allowing the usage of so-called conditional nodes, where the semantics of a conditional node is that exactly one of its successors will be processed. Using conditional DAGs leads to additional challenges regarding schedulability tests and the calculation of the worst-case execution time of a modeled task. Even if the scheduling algorithm that will be used to schedule the task is fixed, its execution time still depends on which successors of the conditional nodes will be chosen to be processed. In this talk, we will consider the case where a conditional DAG task and the scheduling algorithm to be used -- in form of a priority order of all subtasks that potentially will be executed -- are given, and the goal is to calculate the worst-case execution time. To this end, we will present complexity results as well as approximations for the described problem.

Ruben Hoeksma (University of Bremen)
We consider a non-atomic network congestion game with incomplete information in which the incomplete information comes from randomness in the demand. We model this as a multi-commodity flow, where for each commodity it is uncertain if it will actually appear in the network. Wang, Doan, and Chen (2014), by considering an equilibrium notion in which users evaluate their expected cost using the full knowledge of the demand distribution, concluded that the price of anarchy of the game can be arbitrarily large. We consider the problem to be a Bayesian game, where users evaluate their cost conditioned on the fact that they themselves are present in the network. In contrast to the conclusion by Wang, Doan, and Chen (2014), we find that the known bounds on the price of anarchy for the deterministic demand game also apply to the Bayesian game, even if the probability of traveling for different commodities is arbitrarily correlated. Specifically, if all latency functions in the network are affine, the price of anarchy for the Bayesian game is 4/3. We also generalize these results to the class of smooth games. This talk is based on work done in collaboration with José Correa and Marc Schröder.
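
For intuition on where the 4/3 bound for affine latencies comes from, the classical Pigou example (with deterministic demand, so not specific to the talk's Bayesian setting) already attains it; a short worked computation:

```latex
\text{Two parallel links, one unit of flow: } \ell_1(x) = 1, \quad \ell_2(x) = x.\\
\text{Equilibrium: all flow uses link 2, cost } 1 \cdot \ell_2(1) = 1.\\
\text{Optimum: } \min_{x \in [0,1]} \big[ x \cdot x + (1-x) \cdot 1 \big] = \tfrac{3}{4}
\text{ at } x = \tfrac{1}{2}.\\
\text{Price of anarchy: } \frac{1}{3/4} = \frac{4}{3}.
```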

Łukasz Jeż (University of Wrocław)
In the online packet scheduling problem with deadlines (PacketScheduling, for short), the goal is to schedule transmissions of packets that arrive over time in a network switch and need to be sent across a link. Each packet p has a deadline d_p, representing its urgency, and a non-negative weight w_p that represents its priority. Only one packet can be transmitted in any time slot, so, if the system is overloaded, some packets will inevitably miss their deadlines and be dropped. In this scenario, the natural objective is to compute a transmission schedule that maximizes the total weight of packets which are successfully transmitted. The problem is inherently online, with the scheduling decisions made without the knowledge of future packet arrivals. The central problem concerning PacketScheduling, which has been a subject of intensive study since 2001, is to determine the optimal competitive ratio of online algorithms, namely the worst-case ratio between the optimum total weight of a schedule (computed by an offline algorithm) and the weight of a schedule computed by a (deterministic) online algorithm. We solve this open problem by presenting a ϕ-competitive online algorithm for PacketScheduling (where ϕ≈1.618 is the golden ratio), matching the previously established lower bound.
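
To get a feel for the model, the sketch below runs the naive greedy rule (always transmit the heaviest pending, unexpired packet). Greedy is known to be 2-competitive, well short of the ϕ-competitive algorithm from the talk; the packet data is illustrative.

```python
# Greedy PacketScheduling: one transmission per slot; expired packets
# are discarded lazily when popped from the max-heap.
import heapq

def greedy_schedule(packets, horizon):
    """packets: list of (arrival, deadline, weight) triples."""
    total = 0.0
    pending = []                          # max-heap via negated weights
    for t in range(horizon):
        for arrival, deadline, weight in packets:
            if arrival == t:
                heapq.heappush(pending, (-weight, deadline))
        while pending:
            neg_w, deadline = heapq.heappop(pending)
            if deadline >= t:             # still alive: transmit it
                total += -neg_w
                break                     # one packet per slot
    return total

# greedy sends 1.0 at t=0 and loses the 0.6-packet (deadline 0),
# collecting 1.5; an offline schedule gets 0.6 + 1.0 = 1.6
pkts = [(0, 1, 1.0), (0, 0, 0.6), (1, 1, 0.5)]
print(greedy_schedule(pkts, horizon=2))
```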

Bertrand Simon (University of Bremen)
Scientific applications are usually described as directed acyclic graphs, where nodes represent tasks and edges represent dependencies between tasks. For some applications, such as the multifrontal method of sparse matrix factorization, this graph is a tree: each task produces a single output data, used by a single task (its parent in the tree). We focus on the case when the data manipulated by tasks have a large size, which is especially the case in the multifrontal method. To process a task, both its inputs and its output must fit in the main memory. Moreover, output results of tasks have to be stored between their production and their use by the parent task. It may therefore happen, during an execution, that not all data fit together in memory. In particular, this is the case if the total available memory is smaller than the minimum memory required to process the whole tree. In such a case, some data have to be temporarily written to disk and read afterwards. These Input/Output (I/O) operations are very expensive; hence, the need to minimize them. We revisit this open problem in this paper. Specifically, our goal is to minimize the total volume of I/O while processing a given task tree. We first formalize and generalize known results, then prove that existing solutions can be arbitrarily worse than optimal. Finally, we propose a novel heuristic algorithm, based on the optimal tree traversal for memory minimization. We demonstrate good performance of this new heuristic through simulations on both synthetic trees and realistic trees built from actual sparse matrices.

Claus Gwiggner (Universität Hamburg)
Scheduling with uncertain release dates has received little attention in the literature, although transportation, production and supply chain problems often exhibit such properties. In this paper, a sequence that minimizes the expected total weighted completion time under uncertain release dates is determined. The approach is a two-stage model, where an a priori sequence is established in the first stage and a scheduling operation is performed in the second stage. The major difficulty is the exponential number of joint realizations of the release dates. This difficulty is resolved by a Markov model of the optimal solution of the second stage, which allows us to solve instances to optimality with millions of scenarios. Moreover, the optimal policy turns out to shift a single task to the beginning of a sequence that is built according to the Weighted Shortest Expected Processing Times rule, depending on the variance of the release dates.
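
For background, a minimal sketch of the WSEPT order underlying the a priori sequence: jobs are sequenced by non-increasing ratio of weight to expected processing time. The job data is illustrative.

```python
# Weighted Shortest Expected Processing Time (WSEPT) order.

def wsept_order(jobs):
    """jobs: list of (weight, expected_processing_time) pairs."""
    return sorted(range(len(jobs)),
                  key=lambda j: jobs[j][0] / jobs[j][1],
                  reverse=True)

jobs = [(3.0, 2.0), (1.0, 1.0), (4.0, 8.0)]  # ratios 1.5, 1.0, 0.5
print(wsept_order(jobs))  # -> [0, 1, 2]
```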

Thomas Weise (Hefei University)
In the fields of heuristic optimization and machine learning, experimentation is the way to assess the performance of an algorithm setup and the hardness of problems. Good experimentation is complicated. Most algorithms in the domain are anytime algorithms, meaning they can improve their approximation quality over time. This means that one algorithm may initially perform better than another one but converge to worse solutions in the end. Instead of single final results, the whole runtime behavior of algorithms needs to be compared (and runtime may be measured in multiple ways). We do not just want to know which algorithm performs best and which problem is the hardest; a researcher wants to know why. We introduce a process which can 1) automatically model the progress of algorithm setups on different problem instances based on data collected in experiments, 2) use these models to discover clusters of algorithm (or problem instance) behaviors, and 3) propose reasons why a certain algorithm setup (or problem instance) belongs to a certain algorithm (or problem instance) behavior cluster. These high-level conclusions are presented in the form of decision trees relating algorithm parameters (or instance features) to cluster ids. We emphasize the duality of analyzing algorithm setups and problem instances. Our process is implemented as open source software and tested in two case studies, on the Maximum Satisfiability Problem and the Traveling Salesman Problem. Besides its basic application to raw experimental data, yielding clusters and explanations of “quantitative” algorithm behavior, our process also allows for “qualitative” conclusions by feeding it with data which is normalized with problem features or algorithm parameters. It can also be applied recursively, e.g., to further investigate the behavior of the algorithms in the cluster with the best-performing setups on the problem instances belonging to the cluster of hardest instances. Both use cases are investigated in the case studies.

Leen Stougie (CWI Amsterdam)
In recent years the vertex enumeration problem of polyhedra has seen a revival in the study of metabolic networks, which increased the demand for efficient vertex enumeration algorithms for high-dimensional polyhedra given by equalities and inequalities. The complexity of enumerating the vertices of polytopes (bounded polyhedra) is a famous open problem in discrete and computational geometry. In this lecture we do not solve this problem, but present a type of fixed-parameter total polynomial time result. We apply the concept of branch-decomposition to the vertex enumeration problem of polyhedra P={x : Ax=b; x≥0}. For that purpose, we introduce the concept of a k-module and show how it relates to the separators of the linear matroid generated by the columns of A. This translates structural properties of the matroidal branch-decomposition to the context of polyhedra. We then use this to present a total polynomial time algorithm for enumerating all vertices of polytopes P for which the branch-width of the linear matroid generated by the columns of A is bounded by a constant k. If time permits we will show a truly fixed-parameter tractable result, with time being an exponential function of the parameters only times a polynomial function of the input/output size, if on top of bounded branchwidth we also assume that all square submatrices of A have bounded determinant. This is joint work with Arne Reimers.

Klaus Heeger (University of Bonn)
We consider the 2-vertex-connected spanning subgraph problem. This problem asks, given a 2-vertex-connected graph, to find a 2-vertex-connected spanning subgraph with a minimum number of edges. This classical network optimization problem is known to be APX-hard. We give a 10/7-approximation algorithm for this problem, improving on the 3/2-approximation by Garg, Vempala and Singla. The algorithm is similar to the 17/12-approximation algorithm for the related 2-edge-connected spanning subgraph problem, and works by carefully modifying an open ear-decomposition with a minimum number of even ears.

Franziska Eberle (University of Bremen)
We study a fundamental online job admission problem where jobs with processing times and deadlines arrive online over time at their release dates, and the task is to determine a preemptive single-server schedule which maximizes the number of completed jobs. To circumvent known impossibility results, we make a standard slackness assumption by which the feasible time window for scheduling a job is at least 1+ε times its processing time, for some ε>0. We analyze the competitive ratio of algorithms depending on the slackness parameter ε. In this paper we quantify the impact that different provider commitment requirements have on the performance of online algorithms. Studying this impact is important, for example, in modern cloud-services, where customers do not want last-minute rejections of critical tasks and request an early-enough provider-side commitment to completing admitted jobs. Our main contribution is one universal algorithmic framework for online job admission both with and without commitments. For scheduling without commitment, we prove that our algorithm with a competitive ratio of O(1/ε) is the best possible (deterministic) for this problem. For different commitment models, we give the first non-trivial performance bounds. In the setting where commitment decisions must be made before a job's slack becomes less than a δ-fraction of its size, we prove a competitive ratio of O(ε/((ε−δ)δ^2)), for 0<δ<ε. When a provider must commit upon starting a job, our bound is O(1/ε^2). A stronger commitment model, where a provider has to commit at job arrival, does not admit any bounded competitive ratio even for randomized algorithms. Finally, we observe that for scheduling with commitment the restriction to the "unweighted" throughput model is essential; if jobs have individual weights, we rule out any bounded competitive ratio (for deterministic algorithms).

Adrian Fessel (University of Bremen)
Network topology, as well as dynamic processes such as flow, synchronization of oscillators, or wave activity that take place on a given network topology, have been extensively studied. In self-organizing networks, topology and dynamic processes interact in a unique way: the network adapts its structure based on local activity, enabling the emergence of global dynamic properties on an optimized topology. Working with Physarum polycephalum as an experimentally accessible model for an adaptive transportation system, we seek to formalize topological development into a sequence of discrete events intended to serve as basis for studying the interaction of flow and topology. We focus on reorganization of P. polycephalum networks after fragmentation, a process occurring in a prominent percolation transition. Unlike in most traditional percolation processes, system size in P. polycephalum is not fixed and requires a model that allows for dynamical changes in size. The theoretical description follows a master equation with parameters obtained from statistical analysis of experimental data. Computational investigation of the model recreates the dynamics of the topological phase transition and enables characterization of finite size effects and critical exponents. Comparing the influences of system growth and fusion within the system reveals a significant shift of the transition when system growth dominates.

Sigrid Knust (Osnabrück University)
A synchronous flow shop is a variant of a non-preemptive permutation flow shop where transfers of jobs from one machine to the next take place at the same time. The processing is organized in synchronized cycles, which means that in a cycle all current jobs start at the same time on the corresponding machines. Then all jobs are processed and have to wait until the last one is finished. Afterwards, all jobs are moved to the next machine simultaneously. As a consequence, the processing time of a cycle is determined by the maximum processing time of the operations contained in it. Furthermore, only permutation schedules are feasible, i.e., the jobs have to be processed in the same order on all machines. The goal is to find a permutation of the jobs such that the makespan is minimized. Motivated by a practical application, we investigate special cases where the processing times of the cycles are determined by a subset of so-called dominating machines. Furthermore, we consider problems with additional resources and setup times.
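
To make the cycle semantics concrete, the sketch below evaluates the makespan of a permutation under the stated rule (each cycle lasts as long as the longest operation currently processed); the processing times are illustrative.

```python
# Synchronous flow shop makespan: in cycle t, machine i processes the
# (t - i)-th job of the permutation (if in range), and the cycle lasts
# as long as the maximum processing time among these operations.

def sync_flowshop_makespan(p, perm):
    """p[i][j]: processing time of job j on machine i (m x n matrix)."""
    m, n = len(p), len(p[0])
    makespan = 0.0
    for t in range(n + m - 1):
        ops = [p[i][perm[t - i]] for i in range(m) if 0 <= t - i < n]
        makespan += max(ops)
    return makespan

p = [[2, 1], [3, 4]]                       # 2 machines, 2 jobs
print(sync_flowshop_makespan(p, [0, 1]))   # cycles 2, max(1,3)=3, 4 -> 9
```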

Bertrand Simon (ENS de Lyon)
With the increasing complexity of modern computing platforms, many scientific applications now rely on runtime schedulers. Most of these frameworks model an application as a directed acyclic graph of tasks, where nodes represent tasks and edges represent dependences between tasks. In my thesis, I focused on three theoretical challenges concerning task graph scheduling: exploiting the inherent parallelism of tasks, efficiently using accelerators, and coping with a limited memory. In this talk, I will detail two projects led in this context. The first one targets a platform composed of two types of processors, such as CPUs and GPUs. The processing time of each task depends on the type of processor it is allocated to, this decision being one of the main scheduling choices. We study the makespan minimization problem in an online setting, where the graph is gradually discovered, which increases the difficulty of the decision. In a second project, we focus on reducing the memory consumption of an application in order to avoid expensive memory transfers. Specifically, we aim at preventing dynamic schedulers from running out of memory on a shared-memory platform. These schedulers generally start multiple tasks in parallel, which may lead to a memory shortage. We study how to modify the task graph in order to restrict this parallelism while maintaining performance.

Viktor Bindewald (TU Dortmund)
The Bipartite Matching Problem is a well-studied combinatorial optimization problem with numerous practical applications. In real life, optimization problems are modeled using data based on forecasts and estimates, which leads to uncertain problem parameters. One way to take this fact into account is provided by the framework of robust optimization. While most robust variants of the Bipartite Matching Problem considered in the literature address cost uncertainty, this talk deals with a new robust problem, where the structure of the underlying bipartite graph is uncertain. In our problem, an additional part of the input is a list of failure scenarios described by a subset of edges of the graph that may fail simultaneously. If a scenario emerges, the corresponding edges are deleted from the graph. In the robust problem the goal is to find a minimum-cost set of edges containing a perfect matching in every scenario, i.e., a robust solution survives any possible incident. We will discuss the computational complexity and present asymptotically tight approximation algorithms. Additionally, we address the following related question: is it easier to harden an existing infrastructure than to design a new one from scratch? In our setting this results in the question of how to robustify a given perfect matching by adding new edges to the ambient graph. For this problem we will present hardness and algorithmic results.

Franziska Eberle (University of Bremen)
We address a fundamental stochastic scheduling problem where jobs with uncertain processing times must be scheduled non-preemptively on m identical parallel machines. We are given a set J of n jobs, where each job j∈J is modeled by a random variable P_j with known distribution. The actual realization of a processing time becomes known only by executing a job. The goal is to find a policy Π that decides for any point in time which jobs to schedule such as to minimize the expected total completion time, ∑_{j∈J} E[C_j^Π], where C_j^Π denotes the completion time of job j under policy Π. The scheduling problem can be stated in the standard three-field notation as P||E[∑_j C_j]. We consider so-called index policies that assign the same priority to jobs with the same probability distribution and schedule jobs one after the other on the first machine that becomes available. Job-based index policies do not consider the number of jobs or the number of machines. We rule out distribution-independent approximation factors for job-based index policies. Somewhat surprisingly, this lower bound is obtained already for very simple instances with only two types of jobs: identical deterministic jobs and a set of stochastic jobs that all follow the same Bernoulli distribution. For this class of instances we also give a policy that is an O(m)-approximation. This talk is based on work together with Felix Fischer, Nicole Megow, and Jannik Matuschke.

Katrin Casel (Trier University)
For most clustering problems, the quality of a solution is usually assessed with respect to a given pairwise distance on the input objects. Approximate solutions for such tasks often rely on this distance to be a metric. But what happens if this property does not hold? Especially for computing a clustering such that each cluster has a certain minimum cardinality k>2 (as required for anonymisation or balanced facility location problems) this becomes problematic. For example, with the objectives to minimise the maximum radius or diameter of all clusters, there exists no polynomial-time constant factor approximation if the triangle inequality is violated while there exists a 2-approximation for metric distances. This talk introduces methods to resolve or at least soften this effect of non-metric distances by devising particular strategies to deal with violations of the triangle inequality (conflicts). With the use of parameterised algorithmics it turns out that, if the number of such conflicts is not too large, constant factor approximations can still be computed efficiently.

Ruben Hoeksma (University of Bremen)
TSP with neighborhoods is a long-studied problem where we are asked to find the shortest tour visiting a given set of regions (= neighborhoods). When the neighborhoods are lines in two dimensions, the problem is efficiently solvable. For lines in three dimensions, the problem is NP-hard, and this is generally true in higher dimensions with regions that are low-dimensional subspaces. The question we try to answer in this talk concerns the setting where the regions are hyperplanes in three or more dimensions. The complexity of this problem is open (no hardness results are known) and, until now, only a recent (2d)-approximation algorithm was known. In this talk, we make a step forward in answering this sixteen-year-old open question by designing a PTAS. At a high level, we will see the underlying ideas, including a (new) sparsification technique for polytopes and a linear program that computes the shortest tour.
This talk is based on work together with Antonios Antoniadis, Krzysztof Fleszar, and Kevin Schewior.

Syamantak Das (University of Bremen)
In the Survivable Network Design Problem (SNDP), we are given an undirected graph G together with costs on edges or vertices and the demand function k on source-sink demand pairs (s_i, t_i, k_i), where s_i and t_i are vertices in the graph and k_i is a natural number. In the edge-connectivity setting (EC-SNDP), we are interested in the minimum-cost subgraph H of G in which every demand pair (s_i, t_i) is connected via k_i edge-disjoint paths. These problems are natural generalizations of Steiner tree and Steiner forest, which capture the fault-tolerant requirements in networks.
In general, these problems are NP-hard, and even restrictive special cases of SNDP pose non-trivial challenges. As such, SNDP has drawn a lot of attention from researchers over the past decades. Special cases of SNDP, like Steiner Tree, Steiner Forest, 2-edge-connected subgraph etc., have been studied extensively on many special classes of graphs, e.g. planar graphs, Euclidean metrics, and bounded-treewidth graphs.
In this talk, we shall discuss some recent results on exact algorithms for the general SNDP parameterized by the treewidth of the graph. In particular, we shall demonstrate a fairly general dynamic programming framework based on a tree decomposition of the graph, which can be applied to solve a number of related network design problems.

Thomas Schneider (University of Bremen)
Modal logic and its syntactic variants are widely used in knowledge representation and verification. Applications in those areas require modal logics with a careful balance of the rivalling requirements "high expressivity" and "low computational complexity". The talk will give an overview of our contributions to "taming" expressive temporal, description, and hybrid logics, which consist in identifying fragments with good computational complexity and studying modularity. These contributions fall into three groups: (1) systematic studies of the complexity of sub-Boolean fragments of expressive modal and description logics, (2) theoretic and practical studies of module extraction and modularisation of ontologies, and (3) a study of the complexity of lightweight combinations of temporal and description logics.
The talk will be accessible to a general audience with a background in (theoretical) computer science.

Rolf Drechsler (University of Bremen)
New challenges need to be faced during modeling, design, implementation, verification and test of Cyber‐Physical Systems (CPS). For the design of hardware and software in this context, new approaches need to be developed, taking into account also non‐functional requirements. This talk gives an overview on challenges and recent developments with a special focus on verification and correctness.

Tobias Mömke (University of Bremen)
We study the problem of finding a tour of n points in which every edge is long. More precisely, we wish to find a tour that visits every point exactly once, maximizing the length of the shortest edge in the tour. The problem is known as Maximum Scatter TSP, and was introduced by Arkin et al. (SODA 1997), motivated by applications in manufacturing and medical imaging. Arkin et al. gave a 0.5-approximation for the metric version of the problem and showed that this is the best possible ratio achievable in polynomial time (assuming P≠NP). Arkin et al. raised the question of whether a better approximation ratio can be obtained in the Euclidean plane. We answer this question in the affirmative in a more general setting, by giving a (1−ε)-approximation algorithm for d-dimensional doubling metrics, with running time Õ(n^3 + 2^{O(K log K)}), where K ≤ (13/ε)^d. As a corollary we obtain (i) an efficient polynomial-time approximation scheme (EPTAS) for all constant dimensions d, (ii) a polynomial-time approximation scheme (PTAS) for dimension d = (log log n)/c, for a sufficiently large constant c, and (iii) a PTAS for constant d and ε = Ω(1/log log n). Furthermore, we show the dependence on d in our approximation scheme to be essentially optimal, unless Satisfiability can be solved in subexponential time.

Ruben Hoeksma (University of Bremen)
In this talk, we discuss the following setting. A gambler is presented with a set of random variables that arrive in random order. When a variable arrives, the gambler sees its value and decides to take it or move on to the next. If the gambler takes it, he receives an amount of money equal to the value of the variable and the game ends; otherwise he is presented with the next random variable. The gambler's aim is to maximize the expected amount of money he receives. This setting is a classic example of an optimal stopping problem and is known as the Prophet Secretary Problem. For this problem we analyze two different threshold strategies: one non-adaptive strategy, where the thresholds are chosen before any of the variables arrive and before the order is known, and one adaptive strategy, where thresholds are chosen based on the variables rejected so far. For the non-adaptive strategy we show that the gambler can, in expectation, get at least 1−1/e times the expected maximum value among the random variables (i.e. the value a prophet would receive). We show that no non-adaptive strategy can do better, even for i.i.d. random variables. For the adaptive strategy, we analyze only the i.i.d. setting and show that this improves the gambler's expectation to 0.745 times the expected maximum. This result settles a conjecture by Kertz from 1986. Finally, we discuss the implications for posted price mechanisms, a commonly used mechanism for selling goods. We show that the results for threshold strategies imply similar bounds for comparisons of posted price mechanisms with optimal auctions.
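
For readers who want to experiment: the following Python sketch (our toy illustration; the talk analyzes more refined, possibly time-dependent thresholds) estimates by simulation the ratio between a gambler using a single fixed threshold and the prophet.

    import random

    def simulate(samplers, threshold, trials=100_000):
        # Estimate E[gambler] / E[prophet] for the rule "accept the first
        # value >= threshold" when the variables arrive in random order.
        gambler = prophet = 0.0
        for _ in range(trials):
            values = [draw() for draw in samplers]
            prophet += max(values)
            order = random.sample(values, len(values))  # random arrival order
            gambler += next((v for v in order if v >= threshold), 0.0)
        return gambler / prophet

    # Example: three i.i.d. uniform-[0,1] variables with threshold 1/2.
    print(simulate([random.random] * 3, 0.5))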

Dmitry Feichtner-Kozlov (University of Bremen)
Motivated by higher-dimensional coboundary expansion, we define a certain family of graphs which are called Cheeger graphs. Little is known at present about these graphs. We will give a short, self-contained introduction to the subject and formulate several open, purely graph-theoretic problems.

Antonios Antoniadis (Max-Planck-Institut für Informatik)
We investigate online convex optimization with switching costs (OCO; Lin et al., INFOCOM 2011), a natural online problem arising when rightsizing data centers: A server initially located at p_0 on the real line is presented with an online sequence of non-negative convex functions f_1, f_2, …, f_n: R → R+. In response to each function f_i, the server moves to a new position p_i on the real line, incurring cost |p_i − p_{i−1}| + f_i(p_i). The total cost is the sum of the costs of all steps. One is interested in designing competitive algorithms.
In this talk, we solve the problem in the classical sense: We give a lower bound of 2 on the competitive ratio of any, possibly randomized, online algorithm, matching the competitive ratio of previously known deterministic online algorithms (Andrew et al., COLT 2013/arXiv 2015; Bansal et al., APPROX 2015). It had previously been conjectured that (2−ε)-competitive algorithms exist for some ε > 0 (Bansal et al., APPROX 2015).
Joint work with Kevin Schewior, to appear in WAOA 2017.
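
To make the cost model concrete: the following Python sketch (our illustration; the greedy rule below is deliberately naive and is not the 2-competitive algorithm discussed in the talk) evaluates the total cost |p_i − p_{i−1}| + f_i(p_i) of a sequence of moves.

    def total_cost(p0, positions, fs):
        # Total cost of moves p_1, ..., p_n against functions f_1, ..., f_n:
        # each step pays the movement |p_i - p_{i-1}| plus the hitting
        # cost f_i(p_i).
        cost, prev = 0.0, p0
        for p, f in zip(positions, fs):
            cost += abs(p - prev) + f(p)
            prev = p
        return cost

    # Naive rule: always move to the minimizer of the current function
    # (for f_i(x) = (x - a_i)^2 the minimizer is a_i).
    targets = [3.0, 1.0, 2.0]
    fs = [lambda x, a=a: (x - a) ** 2 for a in targets]
    print(total_cost(0.0, targets, fs))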

Peter Kling (Universität Hamburg)
We consider n jobs that have to be scheduled on m processors. The processors share a common, continuously divisible resource. In each discrete time step, the scheduler can change the assignment of the resource to the processors. Jobs come with a processing volume and a resource requirement. When a processor currently has a share R of the resource and works on a job with processing volume p and resource requirement r, exactly min{R/r, 1} units of the job's processing volume are processed. That is, doubling the resource also doubles the speed at which a job is processed, but this speedup is capped. Migration and preemption are not allowed, and the goal is to minimize the makespan.
In this talk, I present a 1+O(1/m)-approximation algorithm for the case that all jobs have unit work, and a 2+O(1/m)-approximation algorithm for jobs with arbitrary workloads. The algorithms employ a sliding window technique and are quite simple and intuitive. This problem generalizes bin packing with cardinality constraints and splittable items, for which our results improve the approximation ratio achieved by a fast and simple algorithm from 2−O(1/m) to 1+O(1/m).
This is joint work with Alexander Mäcker, Sören Riechers, and Alexander Skopalik.
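
To illustrate the processing model: the following Python sketch (ours, not from the talk) simulates a single job whose speed in each step is min{R/r, 1} for its current resource share R.

    def finish_time(p, r, shares):
        # Steps needed to finish a job with processing volume p and resource
        # requirement r, given its resource share R_t in each step t: the
        # step processes min(R_t / r, 1) units (speed scales with the share,
        # capped at 1). Returns None if the job does not finish in time.
        done = 0.0
        for t, R in enumerate(shares, start=1):
            done += min(R / r, 1.0)
            if done >= p:
                return t
        return None

    # Example: volume 3, requirement 2; shares 1, 2, 4, 4 give speeds
    # 0.5, 1, 1, 1, so the job finishes in step 4.
    print(finish_time(3.0, 2.0, [1.0, 2.0, 4.0, 4.0]))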

Jan Hackfeld (TU Berlin)
We consider the problem of exploring an undirected and initially unknown graph by a set of cooperative agents. The vertices of the graph are unlabeled, and the edges incident to a vertex have locally distinct labels. In this setting, it is known that for one agent Θ(log n) bits of memory are necessary and sufficient to explore any graph with at most n vertices. We show that O(log log n) cooperative agents with only constant memory can explore any graph with at most n vertices in polynomial time. We match this result with a lower bound showing that the number of agents needed for exploring any graph with at most n vertices is already Ω(log log n) when each agent has at most O(log^(1−ε) n) bits of memory for some ε > 0. In particular, this implies that Ω(log log n) agents with constant memory are necessary for exploration.

Markus Jablonka (University of Bremen)
Submodular functions are set functions that capture the natural "diminishing returns" property and can be seen as a discrete analogue of convex functions. The problem of maximizing a submodular function is as follows: Given a set E of elements and a set function f: 2^E → Q (typically accessed through an oracle), we want to determine a subset S of E with maximum value f(S). This is an NP-hard problem for which several approximation algorithms are known. These algorithms are either randomized or adaptive, in the sense that they choose which function values to look up after having already seen some others, and in this way adapt to the function. In this talk we discuss a non-adaptive, or oblivious, deterministic approach: all sets whose function values will be queried are fixed in advance as a list, and the set from this list with the highest function value is returned as an approximate solution. The focus of the talk will be on choosing such a list and determining the quality of the resulting solution.
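
The non-adaptive scheme is easy to phrase in code. The following Python sketch (our illustration; the random query list is only a placeholder for the carefully constructed list discussed in the talk) fixes all queries in advance and returns the best set seen.

    import random

    def oblivious_maximize(elements, f, num_queries=32, seed=0):
        # Fix the whole query list before seeing any function value (here:
        # uniformly random subsets, a placeholder choice), query f on
        # every listed set, and return the best one.
        rng = random.Random(seed)
        queries = [frozenset(e for e in elements if rng.random() < 0.5)
                   for _ in range(num_queries)]
        return max(queries, key=f)

    # Example oracle: a coverage function, a classic submodular function.
    cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
    def f(S):
        return len(set().union(*(cover[i] for i in S))) if S else 0

    print(sorted(oblivious_maximize({1, 2, 3}, f)))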

Syamantak Das (University of Bremen)
This will be a talk covering my general research interests: approximation and online algorithms for scheduling and network design.

First, I will speak about online scheduling problems. We propose a new model on top of classical competitive analysis that we believe is useful in dealing with uncertainty in the input data. We show that one can design simple online algorithms with good guarantees if one is allowed to reject, that is, not process, a tiny fraction of the input. We propose this model in light of the fact that, without rejection, such guarantees are impossible for certain classical scheduling objectives, like makespan minimization or minimizing the (weighted) maximum flow-time.

Joint work with: Anamitra Roy Choudhury, Naveen Garg and Amit Kumar.

In the second part, I shall talk about network design problems. We give improved approximation guarantees for a classical network design problem called the Group Steiner Tree, when the input graph has a special structure. In particular, we prove a generic reduction of a problem instance on such graphs to a problem instance on trees that allows us to carry out the desired improvement. The reduction may be of independent interest.

Joint work with: Parinya Chalermsook, Bundit Laekhanukit and Daniel Vaz.

Nicole Megow (University of Bremen)
Minimizing resource usage is a key to achieving economic, environmental, or societal goals. We consider the fundamental problem of minimizing the number of machines that is necessary for feasibly scheduling preemptive jobs with release dates and hard deadlines. We consider the online variant of this problem in which every job becomes known to the online algorithm only at its release date. We present new competitive algorithms for this problem. We also discuss the power of job migration and show somewhat surprising strong lower bounds on the best possible performance guarantee.

Joint work with: Lin Chen and Kevin Schewior.

Marek Adamczyk (University of Bremen)
Consider the following variant of the set cover problem. We are given a universe U = {1, ..., n} and a collection of subsets C = {S_1, ..., S_m}, where S_i ⊆ U. For every element u ∈ U we need to find a set ϕ(u) ∈ C such that u ∈ ϕ(u). Once we construct and fix the mapping ϕ: U → C, a subset X ⊆ U of the universe is revealed, and we cover all elements of X with exactly ϕ(X) := ⋃_{u∈X} ϕ(u). The goal is to find a mapping such that the cover ϕ(X) is as cheap as possible.

This is an example of a universal problem, where the solution has to be created before the actual instance is revealed. Such problems appear naturally in settings where we need to optimize under uncertainty and it may be too expensive to start computing a solution only once the input is revealed. A rich body of work has investigated such problems under the regime of worst-case analysis, i.e., measuring how good a solution is by the worst-case ratio between the universal solution for a given instance and the optimum solution for the same instance. As the universal solution is significantly more constrained, such worst-case ratios are typically quite large.

One way to make the problem less vulnerable to such extreme worst cases is to assume that the instance for which we have to create a solution is drawn randomly from some probability distribution. In this case one wants to minimize the expected value of the ratio: universal solution vs. optimum solution. Here the bounds obtained are indeed smaller than in the worst case. But even then we still compare apples to oranges, as no universal solution is able to reproduce the optimum solution for every possible instance. What if we instead compared our approximate universal solution against an optimal universal solution that obeys the same rules as we do?

In the talk we will see that under this viewpoint, still in the stochastic variant, we can indeed obtain better bounds than in the expected-ratio model. For the Set Cover problem --- on which the main ideas and characteristics of the model will be presented --- we will see how one can obtain an H_n-approximation, which matches the approximation ratio of the classic deterministic setting. Moreover, this result holds for all probability distributions over U with polynomially large carrier, while all previous results pertained to a model in which elements were sampled independently. The result is based on rounding a suitable configuration IP that captures the optimal universal solution, and on tools from submodular optimization.

Joint work with: Fabrizio Grandoni, Stefano Leonardi and Michał Włodarczyk.
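
To illustrate the universal model: the following Python sketch (ours, not from the talk) evaluates the cost a fixed mapping ϕ pays on a revealed set X, namely the total cost of the distinct sets in ϕ(X).

    def universal_cover_cost(phi, cost, X):
        # Cost the fixed mapping phi pays on a revealed set X: the sets
        # bought are phi(X) = union of {phi(u)} over u in X, and each
        # distinct set is paid for once.
        bought = {phi[u] for u in X}
        return sum(cost[S] for S in bought)

    # Toy instance: S0 = {1, 2}, S1 = {3, 4}, S2 = {2, 3}.
    cost = {0: 1.0, 1: 1.0, 2: 1.5}
    phi = {1: 0, 2: 0, 3: 1, 4: 1}  # each element pre-assigned to a set
    print(universal_cover_cost(phi, cost, {2, 3}))  # buys S0 and S1: cost 2.0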

Felix Fischer (University of Glasgow)
The area of mechanism design is concerned with the development of algorithms that produce good outcomes given inputs from self-interested individuals. Much of mechanism design theory is dominated by the Vickrey-Clarke-Groves (VCG) mechanism, which makes it optimal for each individual to provide its true input. In the real world the VCG mechanism is used with surprising rarity, and some reasons for this are known. We identify another property of the mechanism that may be problematic in practice, a relative lack of robustness to inaccuracies in the choice of its parameters.

Joint work with Paul Dütting and David C. Parkes.