Prof. Dr. Tobias Friedrich

# Recorded Talks

Due to the SARS-CoV-2 outbreak, many conferences switched to online formats so that they could be held regardless of the situation. As a result, many talks were prerecorded so that the audience could watch them from home. An advantage of this is that the talks can be rewatched later on. Here, we gather all prerecorded talks of our group members so you can enjoy them one more time!

To view a talk, please click on "Link"; you will be redirected to the video. To see the details of a talk, click either the conference acronym or the title of the respective talk. Please note that the talks are sorted lexicographically by conference acronym and then by year.

## Talks


Conference: Algorithmic Approaches for Transportation Modelling, Optimization, and Systems 2020

Speaker: Maximilian Böther

Abstract: Traditional navigation services find the fastest route for a single driver. Though always using the fastest route seems desirable for every individual, selfish behavior can have undesirable effects such as higher energy consumption and avoidable congestion, even leading to higher overall and individual travel times. In contrast, strategic routing aims at optimizing the traffic for all agents regarding a global optimization goal. We introduce a framework to formalize real-world strategic routing scenarios as algorithmic problems and study one of them, which we call Single Alternative Path (SAP), in detail. There, we are given an original route between a single origin–destination pair. The goal is to suggest an alternative route to all agents that optimizes the overall travel time under the assumption that the agents distribute among both routes according to a psychological model, for which we introduce the concept of Pareto-conformity. We show that the SAP problem is NP-complete, even for such models. Nonetheless, assuming Pareto-conformity, we give multiple algorithms for different variants of SAP, using multi-criteria shortest path algorithms as subroutines. Moreover, we prove that several natural models are in fact Pareto-conform. The implementation and evaluation of our algorithms serve as a proof of concept, showing that SAP can be solved in reasonable time even though the algorithms have exponential running time in the worst case.

This is joint work with Thomas Bläsius, Philipp Fischbeck, Tobias Friedrich, Alina Gries, Falk Hüffner, Otto Kißig, Pascal Lenzner, Louise Molitor, Leon Schiller, Armin Wells, and Simon Witheger.
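As a toy illustration of the objective behind SAP (not one of the psychological models from the talk; the linear latency functions and the exhaustive scan below are purely hypothetical simplifications), one can compute the overall travel time of each possible split of agents between the original and the alternative route:

```python
# Hypothetical two-route example: n agents split between an original
# route and an alternative route; each route's travel time grows
# linearly with its load (an assumption for illustration only).

def overall_travel_time(n_agents, k_alt, latency_orig, latency_alt):
    """Total travel time when k_alt agents take the alternative route.

    latency_orig / latency_alt: functions mapping load -> travel time.
    """
    k_orig = n_agents - k_alt
    return k_orig * latency_orig(k_orig) + k_alt * latency_alt(k_alt)

# Original route: t(x) = 10 + x; alternative route: t(x) = 15 + 0.5 * x.
orig = lambda x: 10 + x
alt = lambda x: 15 + 0.5 * x

# Scan all splits of 100 agents to find the system optimum.
best = min(range(101), key=lambda k: overall_travel_time(100, k, orig, alt))
print(best, overall_travel_time(100, best, orig, alt))
```

In the actual problem the split is not chosen by the planner but follows a psychological model, which is exactly why Pareto-conformity of the model matters.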

Conference: European Symposium on Algorithms 2020

Speaker: Martin Schirneck

Abstract: In non-uniform hypergraphs there exists a phenomenon unknown to graphs: some edge may be contained in another, with edges even forming chains of inclusions. In many algorithmic applications we are only interested in the collection of inclusion-wise minimal edges, called the minimization of the hypergraph.

In the video we highlight our recent results on the minimization of maximum-entropy hypergraphs with a prescribed number of edges and expected edge size. We give tight bounds on the expected number of minimal edges and briefly touch on the tools used in the proofs. The most important technical contribution is an improvement of the Chernoff-Hoeffding theorem on the tail of the binomial distribution. In particular, we show that for a random variable $$X \sim \mathrm{Bin}(n,p)$$ and any $$0 < x < p$$, it holds that $$\mathrm{P}[X \le xn] = \Theta( 2^{-\mathrm{D}(x \,{\|}\, p) n}/\sqrt{n})$$, where $$\mathrm{D}$$ is the Kullback-Leibler divergence from information theory.

This is joint work with Thomas Bläsius and Tobias Friedrich.
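The tail bound above is easy to check numerically. The following sketch (stdlib only; the divergence is measured in bits to match the base-2 exponent, and the concrete parameters are just an example) compares the exact binomial tail with the $$2^{-\mathrm{D}(x \| p) n}/\sqrt{n}$$ term:

```python
import math

def kl_bits(x, p):
    """Kullback-Leibler divergence D(x || p) in bits."""
    return x * math.log2(x / p) + (1 - x) * math.log2((1 - x) / (1 - p))

def binom_tail(n, p, xn):
    """Exact P[X <= xn] for X ~ Bin(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(xn + 1))

n, p, x = 100, 0.5, 0.3
exact = binom_tail(n, p, int(x * n))
bound = 2 ** (-kl_bits(x, p) * n) / math.sqrt(n)
print(exact, bound, exact / bound)   # ratio stays within a constant factor
```

For these parameters the ratio is a small constant, as the $$\Theta(\cdot)$$ statement predicts.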

Conference: European Conference on Evolutionary Computation in Combinatorial Optimisation 2020

Speaker: Martin Krejca

Abstract: In their recent work, Lehre and Nguyen (FOGA 2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceivingLeadingBlocks (DLB) problem. They conclude from this result that univariate EDAs have difficulties with deception and epistasis. In this work, we show that this negative finding is caused by an unfortunate choice of the parameters of the UMDA. When the population sizes are chosen large enough to prevent genetic drift, then the UMDA optimizes the DLB problem with high probability with at most $$\lambda(\frac{n}{2} + 2 e \ln n)$$ fitness evaluations. Since an offspring population size $$\lambda$$ of order $$n \log n$$ can prevent genetic drift, the UMDA can solve the DLB problem with $$O(n^2 \log n)$$ fitness evaluations. In contrast, for classic evolutionary algorithms no better run time guarantee than $$O(n^3)$$ is known, so our result rather suggests that the UMDA can cope well with deception and epistasis. Together with the result of Lehre and Nguyen, our result for the first time rigorously proves that running EDAs in the regime with genetic drift can lead to drastic performance losses.

This is joint work with Benjamin Doerr.
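For readers unfamiliar with the UMDA, here is a minimal sketch of the algorithm. It optimizes OneMax instead of DLB purely to keep the example short; the function name and all parameter values are illustrative, not those analyzed in the talk:

```python
import random

def umda_onemax(n=20, lam=50, mu=25, gens=200, seed=1):
    """Minimal UMDA sketch maximizing OneMax (number of ones).

    Keeps a frequency vector, samples lam offspring, selects the mu
    best, and sets the new frequencies to the marginals of the
    selected individuals, restricted to [1/n, 1 - 1/n] as usual.
    """
    rng = random.Random(seed)
    freq = [0.5] * n
    best = 0
    for _ in range(gens):
        pop = [[1 if rng.random() < f else 0 for f in freq]
               for _ in range(lam)]
        pop.sort(key=sum, reverse=True)       # OneMax fitness = sum of bits
        selected = pop[:mu]
        best = max(best, sum(selected[0]))
        freq = [min(1 - 1 / n, max(1 / n, sum(x[i] for x in selected) / mu))
                for i in range(n)]
    return best

print(umda_onemax())
```

Choosing `mu` (and `lam`) too small is exactly the genetic-drift regime the abstract warns about: frequencies then fixate on sampling noise rather than on the fitness signal.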

Conference: International Conference on Neural Information Processing 2020

Speaker: Arthur Zahn

Abstract: Time series are sequences of data indexed by time. Such data are collected in various domains, often in massive amounts, such that storing them proves challenging. Thus, time series are commonly stored in a compressed format. An important compression approach is piecewise linear approximation (PLA), which only keeps a small set of time points and interpolates the remainder linearly. Picking a subset of time points such that the PLA minimizes the mean squared error to the original time series is a challenging task, naturally lending itself to heuristics. We propose the piecewise linear approximation genetic algorithm (PLA-GA) for compressing time series by PLA. The PLA-GA is a memetic (μ+λ) GA that makes use of two distinct operators tailored to time series compression. First, we add special individuals to the initial population that are derived using established PLA heuristics. Second, we propose a novel local search operator that greedily improves a compressed time series. We compare the PLA-GA empirically with existing evolutionary approaches and with a deterministic PLA algorithm, known as Bellman's algorithm, that is optimal for the restricted setting of sampling. In both cases, the PLA-GA approximates the original time series better and faster. Further, it drastically outperforms Bellman's algorithm with increasing instance size with respect to run time until finding a solution of equal or better quality -- we observe speed-up factors between 7 and 100 for instances of 90,000 to 100,000 data points.

This is joint work with Tobias Friedrich, Martin Krejca, Gregor Lagodzinski, and Manuel Rizzo.

Conference: International Symposium on Parameterized and Exact Computation 2020

Speaker: Davis Issac

Abstract: We develop an FPT algorithm and a compression for the Weighted Edge Clique Partition (WECP) problem, where a graph with $$n$$ vertices and integer edge weights is given together with an integer $$k$$, and the aim is to find $$k$$ cliques, such that every edge appears in exactly as many cliques as its weight. The problem has previously been studied only in the unweighted version called Edge Clique Partition (ECP), where the edges need to be partitioned into $$k$$ cliques. It was shown that ECP admits a kernel with $$k^2$$ vertices [Mujuni and Rosamond, 2008], but this kernel does not extend to WECP. The previously fastest algorithm known for ECP has a runtime of $$2^{O(k^2)}n^{O(1)}$$ [Issac, 2019 (PhD Thesis)]. For WECP we develop a compression (to a slightly more general problem called Annotated WECP) with at most $$4^k$$ vertices, and an algorithm with runtime $$2^{O(k^{3/2}w^{1/2}\log(k/w))}n^{O(1)}$$, where $$w$$ is the maximum edge weight. The latter in particular improves the runtime for ECP to $$2^{O(k^{3/2}\log k)}n^{O(1)}$$.

This is joint work with Andreas Emil Feldmann and Ashutosh Rai.
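The feasibility condition of WECP (every edge covered by exactly as many cliques as its weight) is easy to verify. The following checker is a didactic sketch with a hypothetical name, not part of the FPT algorithm from the talk:

```python
from itertools import combinations

def is_wecp_solution(weights, cliques):
    """Check whether `cliques` solve a WECP instance.

    weights: dict mapping frozenset({u, v}) -> positive integer weight.
    cliques: list of vertex sets. Every edge must appear in exactly as
    many cliques as its weight, and cliques may only span actual edges.
    """
    count = {e: 0 for e in weights}
    for clique in cliques:
        for u, v in combinations(sorted(clique), 2):
            e = frozenset((u, v))
            if e not in weights:
                return False          # clique spans a non-edge
            count[e] += 1
    return all(count[e] == w for e, w in weights.items())

# A triangle with unit weights is covered by the single clique {0, 1, 2}.
triangle = {frozenset(e): 1 for e in [(0, 1), (1, 2), (0, 2)]}
print(is_wecp_solution(triangle, [{0, 1, 2}]))
```

Raising the weight of one edge to 2 forces a second clique (the edge itself counts as a 2-clique), which is the point where WECP departs from the unweighted ECP.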

Conference: Machine Learning for Quantum 2021

Speaker: Timo Kötzing

Abstract: The performance of modern algorithms in computer science typically depends on a number of parameters which govern the behavior of the algorithm. Setting these parameters just right is a complex task, typically dependent on the exact area of application of the algorithm, i.e., on the data to be given to the algorithm. Traditionally, algorithm designers might experiment with these parameters, using their detailed knowledge of the inner workings of the algorithm, in order to find good parameter settings. However, this process can increasingly be automated by parameter tuning algorithms which explore the space of available parameter settings, evaluating possible choices along the way. One way to explore is Heuristic Search, iteratively generating more and more possible parameter settings similar to previous promising settings.
Regarding calibrating multi-qubit circuits, the general structure of the problem is the same as for tuning the parameters of algorithms: There is a set of allowed settings for the parameters, each of these settings has an associated quality (which might suffer from noise). In order to find an element of good quality in this search space, the same principles (exploration vs. exploitation trade-off, search adaptation, surrogate models, hardware in the loop, …) would also be applicable in this setting. Thus, I want to present some of these ideas to you and discuss possibilities and limitations.
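The exploration idea described above can be sketched as a simple (1+1)-style heuristic search over a continuous parameter vector. Everything here is illustrative: the quality function is a toy stand-in for a (possibly noisy) calibration measurement, and all names and constants are assumptions:

```python
import random

def heuristic_search(quality, start, steps=500, sigma=0.1, seed=0):
    """(1+1)-style heuristic search maximizing `quality`.

    Each step perturbs the current best parameter setting with
    Gaussian noise and keeps the perturbation if it is at least
    as good (exploitation of promising settings).
    """
    rng = random.Random(seed)
    best = list(start)
    best_q = quality(best)
    for _ in range(steps):
        cand = [x + rng.gauss(0, sigma) for x in best]
        q = quality(cand)
        if q >= best_q:
            best, best_q = cand, q
    return best, best_q

# Toy quality landscape with its optimum at (1, -2).
def toy_quality(params):
    x, y = params
    return -((x - 1) ** 2 + (y + 2) ** 2)

best, q = heuristic_search(toy_quality, [0.0, 0.0])
print(best, q)
```

For real calibration one would additionally handle measurement noise (e.g., by resampling) and balance exploration against exploitation, which is exactly the trade-off mentioned in the abstract.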

Conference: International Conference on Very Large Data Bases 2020

Speaker: Thomas Bläsius

Abstract: Unique column combinations (UCCs) are a fundamental concept in relational databases. They identify entities in the data and support various data management activities. Still, UCCs are usually not explicitly defined and need to be discovered. State-of-the-art data profiling algorithms are able to efficiently discover UCCs in moderately sized datasets, but they tend to fail on large and, in particular, on wide datasets due to run time and memory limitations. In this video, we introduce HPIValid, a novel UCC discovery algorithm that implements a faster and more resource-saving search strategy. HPIValid models the metadata discovery as a hitting set enumeration problem in hypergraphs. In this way, it combines efficient discovery techniques from data profiling research with the most recent theoretical insights into enumeration algorithms. Our evaluation shows that HPIValid is not only orders of magnitude faster than related work, it also has a much smaller memory footprint.

This is joint work with Johann Birnick, Tobias Friedrich, Felix Naumann, Thorsten Papenbrock, and Martin Schirneck.
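To make the discovery task concrete, here is a brute-force baseline that enumerates minimal UCCs of a tiny relation. It is exponential in the number of columns and is only a didactic sketch, not HPIValid's hitting-set-based strategy:

```python
from itertools import combinations

def discover_uccs(rows):
    """Brute-force discovery of minimal unique column combinations.

    rows: list of equal-length tuples (the relation). Returns all
    inclusion-wise minimal sets of column indices whose projected
    value combinations are unique across all rows.
    """
    n_cols = len(rows[0])
    minimal = []
    for size in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), size):
            if any(set(m) <= set(cols) for m in minimal):
                continue                  # a strict subset is already unique
            projection = [tuple(r[c] for c in cols) for r in rows]
            if len(set(projection)) == len(rows):
                minimal.append(cols)
    return minimal

table = [("ann", 1, "x"), ("bob", 1, "y"), ("ann", 2, "y")]
print(discover_uccs(table))
```

On wide tables this candidate lattice explodes combinatorially, which is exactly why the abstract's reduction to hitting set enumeration matters.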

Abstract: The cost-distance Steiner tree problem asks for a Steiner tree in a graph that minimizes the total cost plus a weighted sum of path delays from the root to the sinks. We present an improved approximation for the uniform cost-distance Steiner tree problem, where the delay of a path corresponds to the sum of edge costs along that path. Previous approaches deploy a bicriteria approximation algorithm for the length-bounded variant that does not take the actual delay weights into account. Our algorithm modifies a similar algorithm for the single-sink buy-at-bulk problem by Guha et al. [2009], allowing a better approximation factor for our problem. In contrast to the bicriteria algorithms it considers delay weights explicitly. Thereby, we achieve an approximation factor of $$(1+\beta)$$, where $$\beta$$ is the approximation factor for the Steiner tree problem. This improves the previously best known approximation factor for the uniform cost-distance Steiner tree problem from 2.87 to 2.39. This algorithm can be extended to the problem where the ratio of edge costs to edge delays throughout the graph is bounded from above and below. In particular, this shows that a previous inapproximability result (Chuzhoy et al. [2008]) requires large variations between edge delays and costs. Finally, we present an important application of our new algorithm in chip design. The cost-distance Steiner tree problem occurs as a Lagrangean subproblem when optimizing millions of Steiner trees with mutually dependent path-length bounds. We show how to quickly approximate a continuous relaxation of this problem with our new algorithm.
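The objective of the uniform variant (total edge cost plus weighted root-to-sink delays, with delay equal to path cost) can be evaluated for a candidate tree in a few lines. The function name and the example instance below are hypothetical illustrations, not part of the approximation algorithm itself:

```python
def cost_distance_objective(tree, costs, root, delay_weights):
    """Objective of the uniform cost-distance Steiner tree problem.

    tree: adjacency dict {v: [neighbors]} of the chosen Steiner tree.
    costs: dict mapping frozenset({u, v}) -> edge cost; in the uniform
    problem a path's delay equals its total edge cost.
    delay_weights: dict mapping each sink to its delay weight.
    """
    edges = {frozenset((u, v)) for u in tree for v in tree[u]}
    total_cost = sum(costs[e] for e in edges)

    # Root-to-sink delays via DFS (paths in a tree are unique).
    delay = {root: 0.0}
    stack = [root]
    while stack:
        u = stack.pop()
        for v in tree[u]:
            if v not in delay:
                delay[v] = delay[u] + costs[frozenset((u, v))]
                stack.append(v)
    return total_cost + sum(w * delay[t] for t, w in delay_weights.items())

# Path 0 - 1 - 2 rooted at 0, sink 2 with delay weight 0.5.
tree = {0: [1], 1: [0, 2], 2: [1]}
costs = {frozenset((0, 1)): 3.0, frozenset((1, 2)): 2.0}
print(cost_distance_objective(tree, costs, 0, {2: 0.5}))
```

The two additive terms are exactly what the bicriteria approaches mentioned above trade off implicitly, while the algorithm from the talk handles the delay weights explicitly.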