Prof. Dr. Tobias Friedrich

Research Seminar (Summer Term 2019)


A series of talks on current research and interesting topics in algorithm engineering and theoretical computer science. Everyone is welcome to attend the talks.

If you want to give a talk about a topic in algorithm engineering or theoretical computer science, please write an e-mail to Prof. Dr. Tobias Friedrich or to Ralf Rothenberger.


For announcements of talks, subscribe to our mailing list.

Talks

Date | Room | Speaker | Title
03.04. 11:00 | A-1.1 | Sebastian Siebertz
The notions of bounded expansion and nowhere denseness offer abstract and general definitions of uniform sparseness in graphs. Many familiar sparse graph classes, such as the planar graphs, and in fact all graphs excluding a fixed minor or topological minor, have bounded expansion and are nowhere dense. Classes with these properties admit efficient algorithms for local problems, such as the independent set problem, dominating set problem, and the model-checking problem for first-order logic. In this talk I will give an introduction to the theory of bounded expansion and nowhere dense graph classes with a focus on local separation properties that lead to efficient algorithms.
25.04. 13:30 | H-E.52 | Hans Gawendowicz, Jannik Peters, Fabian Sommer, Daniel Stephan
Real-world graphs such as social networks often consist of multiple communities. Finding these communities using the graph structure is an important task in data analytics. A common heuristic to do so is the Louvain algorithm [Blondel et al. 2008], which measures the quality of a solution using the notion of modularity, as defined by Newman and Girvan [Newman and Girvan 2004]. In the talk, we present two theorems that give additional insight into the structure of such communities and into the low runtime of the Louvain algorithm. We also motivate multiple modifications to the Louvain algorithm and measure their impact on the runtime and the quality of the resulting communities in different graphs. Furthermore, we provide experimental results for two conjectures which could be the subject of further research.
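As a concrete illustration of the quality measure the Louvain algorithm optimizes, the following sketch computes Newman–Girvan modularity for a given partition; the function and the toy graph are illustrative, not taken from the talk:

```python
from collections import defaultdict

def modularity(edges, communities):
    """Newman-Girvan modularity Q of a partition.

    edges: list of undirected edges (u, v); communities: dict node -> community.
    Q = sum over communities c of (e_c / m - (d_c / (2m))^2), where e_c is the
    number of intra-community edges and d_c the total degree inside c.
    """
    m = len(edges)
    intra = defaultdict(int)   # e_c per community
    degree = defaultdict(int)  # d_c per community
    for u, v in edges:
        degree[communities[u]] += 1
        degree[communities[v]] += 1
        if communities[u] == communities[v]:
            intra[communities[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

# Two triangles joined by a single bridge edge, partitioned into the triangles:
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, part), 4))  # -> 0.3571
```

Here each triangle keeps 3 of the 7 edges internal and holds half of the total degree, giving Q = 6/7 - 1/2 = 5/14.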
29.04. 11:00 | H-2.57 | Master Project Team WS18

Residential areas tend to segregate over time along racial/ethnic, religious or socio-economic lines. To demonstrate that this can happen even with very tolerant residents, Schelling introduced a simple toy model which has become influential in sociology. In Schelling's model two types of agents are placed on a grid, and an agent is content with her location if the fraction of her neighbors on the grid which have the same type as her is above \(\tau\), for some \(0 < \tau < 1\). Discontent agents simply swap their location with a randomly chosen other discontent agent or they jump to a random empty cell. Simulations of this process typically reveal that the society of agents segregates into a few homogeneous regions even for \(\tau < 1/2\), i.e., when all agents are tolerant. Despite the popularity of Schelling's model, many questions about its properties are still open and some of them have been attacked only very recently.

This paper presents the next step towards a rigorous analysis. We map the boundary of when convergence to an equilibrium is guaranteed. For this we prove several sharp threshold results where guaranteed convergence suddenly turns into the strongest possible non-convergence result: a violation of weak acyclicity. In particular, we show such threshold results also for Schelling’s original model, which is in stark contrast to the standard assumption in many empirical papers. On the conceptual side, we investigate generalizations of Schelling’s model for more than two agent types and we allow more general underlying graphs which model the residential area. We show that the number of agent types and the topology of the underlying graph heavily influence the dynamic properties, the computational hardness of finding an optimum agent placement and the obtained segregation strength. For the latter, we provide empirical results which indicate that geometry is essential for strong segregation.
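A minimal sketch of the contentment test and the jump dynamics described above; the torus boundary, Moore neighbourhood, and the convention that isolated agents are content are simplifying assumptions, not the paper's exact setting:

```python
import random

def is_content(grid, x, y, tau):
    """An agent at (x, y) is content if the fraction of its occupied
    neighbours sharing its type exceeds tau (Moore neighbourhood on a torus;
    agents with no occupied neighbours count as content, an assumption)."""
    me = grid[x][y]
    same = occupied = 0
    n = len(grid)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = (x + dx) % n, (y + dy) % n
            if grid[nx][ny] is not None:
                occupied += 1
                same += grid[nx][ny] == me
    return occupied == 0 or same / occupied > tau

def jump_step(grid, tau, rng=random):
    """One sweep of the jump dynamics: each discontent agent encountered
    moves to a uniformly random empty cell (a single in-place scan, so an
    agent may be re-examined at its new cell within the same sweep)."""
    n = len(grid)
    for x in range(n):
        for y in range(n):
            if grid[x][y] is not None and not is_content(grid, x, y, tau):
                empty = [(i, j) for i in range(n) for j in range(n)
                         if grid[i][j] is None]
                i, j = rng.choice(empty)
                grid[i][j], grid[x][y] = grid[x][y], None
```

Repeatedly applying `jump_step` until every agent is content simulates one of the processes whose convergence the paper analyses.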

16.05. 13:30 | A-1.1 | Thomas Bläsius

The hardness of formulas at the solubility phase transition of random propositional satisfiability (SAT) has been intensely studied for decades, both empirically and theoretically. Solvers based on stochastic local search (SLS) appear to scale very well at the critical threshold, while complete backtracking solvers exhibit exponential scaling. On industrial SAT instances, this phenomenon is inverted: backtracking solvers can tackle large industrial problems, where SLS-based solvers appear to stall. Industrial instances exhibit sharply different structure than uniform random instances. Among many other properties, they are often heterogeneous in the sense that some variables appear in many clauses while others appear in only a few.

We conjecture that the heterogeneity of SAT formulas alone already contributes to the trade-off in performance between SLS solvers and complete backtracking solvers. We empirically determine how the run time of SLS vs. backtracking solvers depends on the heterogeneity of the input, which is controlled by drawing variables according to a scale-free distribution. Our experiments reveal that the efficiency of complete solvers at the phase transition is strongly related to the heterogeneity of the degree distribution. We report results that suggest the depth of satisfying assignments in complete search trees is influenced by the level of heterogeneity as measured by a power-law exponent. We also find that incomplete SLS solvers, which scale well on uniform instances, are not affected by heterogeneity. The main contribution of this paper is to utilize the scale-free random 3-SAT model to isolate heterogeneity as an important factor in the scaling discrepancy between complete and SLS solvers observed at the uniform phase transition in previous works.
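One way to generate such heterogeneous instances is to draw variables from a power-law distribution. The sketch below uses a common parameterisation (variable \(i\) drawn with probability proportional to \(i^{-1/(\beta-1)}\)); this is an illustration of the idea, not necessarily the paper's exact model:

```python
import bisect
import random

def scale_free_3sat(n, m, beta, rng=random):
    """Sample a 3-SAT instance with m clauses over variables 1..n, drawing
    variable i with probability proportional to i**(-1/(beta-1)), so that
    variable occurrences follow a power law with exponent beta (a common
    parameterisation, assumed here). Polarities are fair coin flips."""
    weights = [(i + 1) ** (-1.0 / (beta - 1)) for i in range(n)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc / total)

    def draw():
        # inverse-transform sampling via the cumulative distribution
        return bisect.bisect_left(cdf, rng.random()) + 1

    clauses = []
    for _ in range(m):
        vs = set()
        while len(vs) < 3:  # three distinct variables per clause
            vs.add(draw())
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses
```

Decreasing \(\beta\) makes the distribution more heterogeneous; uniform random 3-SAT is recovered in the limit of equal weights.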

21.05. 11:00 | A-1.1 | Warut Suksompong

We consider strategic games that are inspired by Schelling’s model of residential segregation. In our model, the agents are partitioned into k types and need to select locations on an undirected graph. Agents can be either stubborn, in which case they will always choose their preferred location, or strategic, in which case they aim to maximize the fraction of agents of their own type in their neighborhood. We investigate the existence of equilibria in these games, study the complexity of finding an equilibrium outcome or an outcome with high social welfare, and also provide upper and lower bounds on the price of anarchy and stability. Some of our results extend to the setting where the preferences of the agents over their neighbors are defined by a social network rather than a partition into types. (Joint work with Edith Elkind, Jiarui Gan, Ayumi Igarashi, and Alexandros Voudouris)

22.05. 13:30 | A-1.2 | Fabian Sommer

Hyperbolic random graphs are a model for generating random graphs by drawing points from a hyperbolic disk. These graphs have a small diameter, high clustering and a power-law degree distribution. They are therefore similar to many real-world graphs, such as social networks.

However, this resemblance is not perfect. For example, all nodes that are close to the center of the disk form a very large clique. Even with temperature, these nodes still form a very dense subgraph. Another property of hyperbolic random graphs is that after removing all central nodes, the remaining graph forms a ring which has a low pathwidth. While this matches the behavior of many real-world networks, there are some that do not conform to this.

In this talk, we explore different measurements that could reflect these differences and possible approaches to generate more realistic graphs.
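For intuition, here is a quadratic-time sketch of a threshold (\(T = 0\)) hyperbolic random graph generator; the sampling details follow the standard model and are assumptions here, not the talk's exact algorithm:

```python
import math
import random

def hyperbolic_distance(p, q):
    """Distance between p = (r1, theta1) and q = (r2, theta2) in the native
    representation: cosh d = cosh r1 cosh r2 - sinh r1 sinh r2 cos(dtheta)."""
    r1, t1 = p
    r2, t2 = q
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(t1 - t2))
    return math.acosh(max(arg, 1.0))  # clamp against rounding error

def sample_hrg(n, R, alpha, rng=random):
    """Threshold hyperbolic random graph: n points in a disk of radius R with
    radial density proportional to sinh(alpha * r); two points are adjacent
    iff their hyperbolic distance is at most R. Quadratic-time sketch, not
    one of the efficient generators discussed in the talks."""
    pts = []
    for _ in range(n):
        # inverse-transform sampling of the radius
        r = math.acosh(1 + (math.cosh(alpha * R) - 1) * rng.random()) / alpha
        pts.append((r, rng.uniform(0, 2 * math.pi)))
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if hyperbolic_distance(pts[i], pts[j]) <= R]
    return pts, edges
```

Nodes with small radius are close to almost everything, which produces exactly the dense central subgraph the talk discusses.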

27.05. 11:00 | H-E.52 | Karl Bringmann

All Pairs Shortest Paths (APSP) can be solved exactly in \(\mathcal{O}(n^3)\) time, and no substantially faster algorithms are known for dense graphs. Relaxing the goal, Zwick showed how to compute a \((1+\varepsilon)\)-approximation by utilizing fast matrix multiplication. This algorithm can be used to approximate several graph characteristics, including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle.

For integer edge weights bounded by \(W\), Zwick's running time involves a factor \(\log(W)\). Here we study whether this factor can be avoided, that is, whether APSP and related problems admit "strongly polynomial" approximation schemes whose running time is independent of \(W\). We remove the \(\log(W)\)-factor from Zwick's running time for APSP on undirected graphs, as well as for several graph characteristics on directed or undirected graphs. For APSP on directed graphs, we design a strongly polynomial approximation scheme with a worse dependence on \(n\) compared to Zwick's. We also explain why: any improvement over our exponent would improve the best known algorithm for MinMaxProduct. In fact, we prove that approximating directed APSP and exactly computing the MinMaxProduct are equivalent.

Joint work with Marvin Künnemann and Karol Wegrzycki.
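For reference, the MinMaxProduct mentioned in the abstract is the matrix product with \((\min, \max)\) in place of \((+, \cdot)\); a naive cubic-time sketch:

```python
def min_max_product(A, B):
    """(A ⊙ B)[i][j] = min over k of max(A[i][k], B[k][j]).
    Naive cubic-time computation of the MinMax product."""
    n, mid, m = len(A), len(B), len(B[0])
    return [[min(max(A[i][k], B[k][j]) for k in range(mid))
             for j in range(m)] for i in range(n)]

A = [[1, 5], [2, 8]]
B = [[4, 3], [7, 6]]
print(min_max_product(A, B))  # -> [[4, 3], [4, 3]]
```

Iterating this product over a graph's weight matrix yields all-pairs minimax (bottleneck) path weights, which is why it is the natural hardness barrier here.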

12.06. 10:00 | H-E.52 | Elias J. Schneider

Research into active matter, and micro-swimmers in particular, is becoming increasingly important due to the growing development of microtechnology. For certain tasks it may be useful to know the fastest path of a micro-swimmer, such as a bacterium, from one location to another under given boundary conditions. This master's thesis deals with the magnetotactic bacterium Magnetococcus marinus as an example of a steerable micro-swimmer. It has constant intrinsic thrust and can be guided by an external magnetic field. In a given two-dimensional liquid environment, the micro-swimmer is to be steered so that it travels from start to finish through a given potential landscape, such as a sombrero potential, in the shortest time. The problem is solved both through the calculus of variations and through Q-learning, a method of reinforcement learning. In the Q-learning approach, the micro-swimmer learns the fastest route by exploration.

20.06. 13:30 | A-1.1 | Hagen Echzell, Simon Krogmann

Hagen Echzell: Flow-based Network Creation Games

Simon Krogmann: Facility Location Games on Networks

21.06. 13:30 | A-1.2 | Sören Tietböhl

VSIDS for (1+1) EA Mutation Operators

26.06. 13:30 | A-1.2 | Ralf Rothenberger

Propositional satisfiability (SAT) is one of the most fundamental problems in computer science. Its worst-case hardness lies at the core of computational complexity theory, for example in the form of NP-hardness and the (Strong) Exponential Time Hypothesis. In practice, however, SAT instances can often be solved efficiently. This contradictory behavior has spawned interest in the average-case analysis of SAT and has triggered the development of sophisticated rigorous and non-rigorous techniques for analyzing random structures. Despite a long line of research and substantial progress, most theoretical work on random SAT assumes a uniform distribution on the variables. In contrast, real-world instances often exhibit large fluctuations in variable occurrence. This can be modeled by a non-uniform distribution of the variables, which can result in distributions closer to industrial SAT instances.

We study satisfiability thresholds of non-uniform random 2-SAT with \(n\) variables and \(m\) clauses and with an arbitrary probability distribution \((p_i)_{i\in[n]}\) with \(p_1 \geq p_2 \geq \dots \geq p_n > 0\) over the \(n\) variables. We show for \(p_1^2=\Theta(\sum_{i=1}^n p_i^2)\) that the asymptotic satisfiability threshold is at \(m=\Theta\big((1-\sum_{i=1}^n p_i^2)/(p_1\cdot(\sum_{i=2}^n p_i^2)^{1/2})\big)\) and that it is coarse. For \(p_1^2=o(\sum_{i=1}^n p_i^2)\) we show that there is a sharp satisfiability threshold at \(m=(\sum_{i=1}^n p_i^2)^{-1}\). This result generalizes the seminal works by Chvatal and Reed [FOCS 1992] and by Goerdt [JCSS 1996].
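The two threshold expressions can be evaluated numerically; the sketch below is purely illustrative (the theorem's preconditions decide which case applies). For the uniform distribution \(p_i = 1/n\), the sharp case recovers the classical random 2-SAT threshold \(m \sim n\):

```python
def threshold_2sat(p):
    """Evaluate the two satisfiability-threshold expressions from the
    abstract for p_1 >= ... >= p_n > 0 (illustration only).
    Sharp case  (p_1^2 = o(sum p_i^2)):      m* = (sum_i p_i^2)^(-1)
    Coarse case (p_1^2 = Theta(sum p_i^2)):  m*, up to constants,
        = (1 - sum_i p_i^2) / (p_1 * (sum_{i>=2} p_i^2)^(1/2))."""
    s = sum(x * x for x in p)
    sharp = 1.0 / s
    tail = sum(x * x for x in p[1:])
    coarse = (1.0 - s) / (p[0] * tail ** 0.5)
    return sharp, coarse

# Uniform distribution p_i = 1/n: sum p_i^2 = 1/n, so the sharp
# threshold evaluates to m* = n, matching Chvatal-Reed/Goerdt.
n = 1000
sharp, _ = threshold_2sat([1.0 / n] * n)
print(sharp)  # approximately 1000.0
```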

27.06. 13:30 | A-1.2 | Vanja Doskoč

2, 4, 8, 16, 32,... What is the next element in this sequence? What is the underlying set?

Confronted with the latter question, the odd natural numbers probably would not immediately come to mind as a legitimate answer. This arises from a seemingly natural desire for consistency. However, the question is whether such a desire restricts one's actual learning power. Naturally, one can ask the same question for various other learning strategies.

We investigate how different forms of overgeneralization known from literature interact and whether they restrain learning power.

03.07. 13:30 | A-1.2 | Frank Neumann

Evolutionary diversity optimization aims to compute a diverse set of solutions where all solutions meet a given quality criterion.
In this talk, I will give an overview of the work that we carried out in this area during the last year.
Furthermore, I will point out some research questions that we plan to tackle within a project funded by the Australian Research Council during 2019-2021.

26.07. 14:30 | A-2.2 | Manuel Rizzo

In the context of physical models, calibration is the process of estimating the parameters of a model built to reproduce the behaviour of a real, complex machine. These parameters must be chosen in such a way that the model output is as close as possible to the output of the real machine. In this talk I will summarize the first advances within the Virtual Compressor project, carried out jointly with the company partner Industrial Analytics. In the project framework, the real machine is a turbocompressor, and a complex physical model is developed to deal with the real measured data, which come in the form of time series. After a preprocessing step, the data are employed in the model calibration problem. The proposed approach uses global optimization algorithms such as genetic algorithms and particle swarm optimization. First results and future research questions will be outlined.

13.08. 13:30 | A-1.1 | Ljiljana Brankovic

We design and analyse simple search tree algorithms for approximating 3-Hitting Set, focussing on the case of factor-2 approximations. The main ingredients of our algorithms are exact and approximation-preserving reduction rules. We also derive a few results for hypergraph instances of bounded degree, including a polynomial-time approximation algorithm.

20.08. 10:00 | A-1.1 | Karen Seidel

The present work contributes to a recent line of research studying algorithmically random infinite structures and uncovers an interesting connection between the arithmetical complexity of the set of generators of a randomly generated subgroup of rationals and the learnability of its finitely generated subgroups.

Given a randomly generated additive subgroup (G,+) of rationals, two main questions are addressed: first, what are the model-theoretic and recursion-theoretic properties of (G,+); second, what learnability properties can one extract from G and its subclass of finitely generated subgroups? For the first question, it is shown that the theory of (G,+) coincides with that of the additive group of integers and is therefore decidable; furthermore, while the word problem for G with respect to any generating sequence for G is not even semi-decidable, one can build a generating sequence \(\beta\) such that the word problem for G with respect to \(\beta\) is co-recursively enumerable (assuming that the set of generators of G is limit-recursive).

In regard to the second question, it is proven that there is a generating sequence \(\beta\) for G such that every non-trivial finitely generated subgroup of G is recursively enumerable and the class of all such subgroups of G is behaviourally correctly learnable, that is, every non-trivial finitely generated subgroup can be semantically identified in the limit (again assuming that the set of generators of G is limit-recursive). On the other hand, the class of non-trivial finitely generated subgroups of G cannot be syntactically identified in the limit with respect to any generating sequence for G.

03.09. 13:30 | A-1.1 | Thomas Sauerwald
We establish and generalise several bounds for various random-walk quantities, including the mixing time and the maximum hitting time. Unlike previous analyses, our derivations are based on rather intuitive notions of local expansion properties, which allow us to capture the progress the random walk makes through t-step probabilities.
We apply our framework to dynamically changing graphs, where the set of vertices is fixed while the set of edges changes in each round. For random walks on dynamic connected graphs for which the stationary distribution does not change over time, we show that their behaviour is in a certain sense similar to static graphs. For example, we show that the mixing and hitting times of any sequence of d-regular connected graphs is \(O(n^2)\), generalising a well-known result for static graphs.
Joint work with Luca Zanetti.
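To illustrate the setting, here is a small sketch of a lazy random walk on a dynamically changing sequence of 2-regular connected graphs; since every round's graph is regular, the uniform distribution is stationary throughout (a toy example, not the paper's construction):

```python
def lazy_walk_step(dist, adj):
    """One step of the lazy random walk on the current graph: stay put with
    probability 1/2, else move to a uniformly random neighbour."""
    n = len(dist)
    new = [dist[v] / 2 for v in range(n)]
    for v in range(n):
        share = dist[v] / (2 * len(adj[v]))
        for u in adj[v]:
            new[u] += share
    return new

def tv_from_uniform(dist):
    """Total variation distance to the uniform distribution."""
    n = len(dist)
    return sum(abs(p - 1 / n) for p in dist) / 2

# A toy dynamic sequence: alternate between two 2-regular connected graphs
# on 6 vertices (the standard cycle and a relabelled cycle).
cycle = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
cycle2 = {0: [2, 5], 1: [4, 3], 2: [0, 4], 3: [1, 5], 4: [2, 1], 5: [3, 0]}

dist = [1.0] + [0.0] * 5  # walk starts at vertex 0
for t in range(40):
    dist = lazy_walk_step(dist, cycle if t % 2 == 0 else cycle2)
print(tv_from_uniform(dist))  # close to 0: the walk has mixed
```

Laziness avoids periodicity issues, and because each round's graph is regular and connected, the walk converges to uniform just as on a static graph.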
05.09. 13:30 | A-1.1 | Christopher Weyand

Hyperbolic random graphs (HRG) and geometric inhomogeneous random graphs (GIRG) are two similar generative network models that were designed to resemble complex real world networks. In particular, they have a power-law degree distribution with controllable exponent \(\beta\), and high clustering that can be controlled via the temperature \(T\).
We present the first implementation of an efficient GIRG generator running in expected linear time. Besides varying temperatures, it also supports underlying geometries of higher dimensions. It is capable of generating graphs with ten million edges in under a second on commodity hardware. The algorithm can be adapted to HRGs. Our resulting implementation is the fastest sequential HRG generator, despite the fact that we support non-zero temperatures. Though non-zero temperatures are crucial for many applications, most existing generators are restricted to \(T = 0\). We also support parallelization, although this is not the focus of this paper. Moreover, we note that our generators draw from the correct probability distribution, i.e., they involve no approximation.

11.09. 13:30 | A-1.1 | Ardalan Khazraei

The objective of the Steiner Tree problem is to connect a set of terminals by a tree of minimum total length. We can further constrain the problem by first choosing one of the terminals as the root, and then setting upper bounds on the tree-path distance of each other terminal to this root. Using the Lagrangian relaxation of these restrictions, we can penalize large distances by linear additive terms in the objective function rather than applying strict path-length constraints. This produces a special case of the so-called Cost-Distance Steiner tree problem in which we have a single weight function on the edges.

Several results from my master's thesis concerning the Cost-Distance Steiner Tree problem will be presented. NP-hardness of the Cost-Distance Steiner Tree problem follows from the fact that it contains the regular Steiner Tree problem as a special case. I prove that the problem remains NP-hard in three restricted cases that do not contain the regular Steiner Tree problem anymore, and improve upon a previous constant-factor approximation for the single-weighted case by a new method that is sensitive to the Lagrange multipliers. Finally, for the two-dimensional geometric case I use a similar approach to extend Arora's dynamic programming method for the regular Steiner Tree problem to our problem.
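A small sketch of the resulting single-weight Cost-Distance objective, i.e., total tree cost plus Lagrange-weighted root-to-terminal tree distances (identifiers are illustrative):

```python
def cost_distance_objective(tree_adj, edge_cost, root, lam):
    """Evaluate c(T) + sum_t lam[t] * dist_T(root, t) for a tree T.
    tree_adj: adjacency dict of the tree; edge_cost: dict
    frozenset({u, v}) -> cost; lam: dict terminal -> Lagrange multiplier."""
    dist = {root: 0.0}
    stack = [root]
    total_cost = 0.0
    seen_edges = set()
    while stack:  # DFS from the root, accumulating path costs
        u = stack.pop()
        for v in tree_adj[u]:
            e = frozenset((u, v))
            if e in seen_edges:
                continue  # do not walk back over the parent edge
            seen_edges.add(e)
            total_cost += edge_cost[e]
            dist[v] = dist[u] + edge_cost[e]
            stack.append(v)
    return total_cost + sum(l * dist[t] for t, l in lam.items())

# Path r - a - b with edge costs 2 and 3; terminals a, b with multipliers 1, 0.5:
adj = {"r": ["a"], "a": ["r", "b"], "b": ["a"]}
cost = {frozenset(("r", "a")): 2.0, frozenset(("a", "b")): 3.0}
print(cost_distance_objective(adj, cost, "r", {"a": 1.0, "b": 0.5}))  # -> 9.5
```

Large multipliers penalize long root paths, mimicking the relaxed path-length constraints described above.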

16.09. 13:30 | A-1.1 | Ziena Elijazyfer

Given an edge-weighted connected graph and a positive integer k, the goal is to partition the edges of the graph into k connected components of similar weight. In particular, the two classical objectives of maximizing the lightest part (Max-Min) and minimizing the heaviest part (Min-Max) are considered. As a result, a 2-approximation algorithm will be presented. This algorithm can be used for both objectives, where in the Max-Min case it is required that the given graph has no edge weight larger than one half of the average part weight.
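For concreteness, the two objectives for a given edge partition can be evaluated as follows (a toy evaluator; connectivity of the parts is assumed, not checked):

```python
def part_weights(edge_weights, partition):
    """Total weight of each part of an edge partition.
    edge_weights: dict edge -> weight; partition: list of lists of edges."""
    return [sum(edge_weights[e] for e in part) for part in partition]

# Illustrative instance: four weighted edges split into k = 2 parts.
w = {"ab": 3, "bc": 1, "bd": 2, "de": 4}
parts = [["ab", "bc"], ["bd", "de"]]
weights = part_weights(w, parts)
print(min(weights), max(weights))  # Max-Min objective: 4, Min-Max objective: 6
```

Max-Min seeks a partition maximizing the first printed value; Min-Max seeks one minimizing the second.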

26.09. 14:15 | A-2.1 | Davis Issac

Cliques and bicliques (complete bipartite subgraphs) are important structures that appear in many applications of graphs. We look at the problems of covering and partitioning the edges of a graph with cliques and bicliques. I will motivate the problems by giving some practical applications and showing connections to other areas such as matrix factorization. We study the problems from the point of view of parameterized complexity, where the parameter is taken to be the number of cliques (bicliques resp.) in the solution. It was known that the problems are fixed-parameter tractable, but the known algorithms had running times with a doubly exponential dependence on the parameter. For the partition problems, we develop algorithms with much better running times. The algorithms are based on a simple linear-algebraic idea, which I will demonstrate. For the problem of covering with bicliques, we show that no improvement over the doubly exponential dependence is possible, and also give exponential lower bounds on the size of the kernel. At the end, I will point out some interesting open problems in this area.

The talk is mainly based on a joint work with Andreas Karrenbauer and L. Sunil Chandran that was published in IPEC 2016 titled "On the parameterized complexity of biclique cover and partition."

02.10. 11:00 | A-1.2 | Sören Tietböhl

Variable State Independent Decaying Sum (VSIDS) is a technique used in SAT solvers which yields fast solving speeds on real-world instances by making use of graph structure and communities. In this thesis we investigate whether it is possible to adapt the concept of VSIDS for MaxCut heuristics, specifically for mutation operators for the (1 + 1) evolutionary algorithm. This is an initial study to determine whether the activity concept has useful applications in the MaxCut context. We propose a method of calculating activity in a local search context for MaxCut and derive new mutation operators which use this method. We analyse the performance of the resulting algorithms experimentally on several graph classes such as Erdős-Rényi random graphs, G-set graphs, hyperbolic random graphs, SAT instances and real-world instances. Additionally, several MaxCut heuristics and a greedy local search heuristic are tested on these instances. Our comparisons show that one of our proposed mutation operators, unifSigmoid, and the greedy local search heuristic outperform all other mutation operators on real-world instances. Further, we provide two frameworks facilitating future experimentation with the activity concept, as well as a data inspection tool. The frameworks and tool are documented and their capabilities explained. Finally, we give an outlook on problems and possibilities of the activity concept for MaxCut.
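A minimal sketch of the decaying-sum activity scheme that VSIDS is based on, including the standard trick of growing the bump increment instead of decaying every counter (illustrative, not the thesis' implementation):

```python
class Activity:
    """VSIDS-style decaying-sum activities: bump a variable whenever it is
    involved in a conflict (or, in the MaxCut adaptation, in a local search
    event), and decay all activities geometrically between events."""

    def __init__(self, n, decay=0.95):
        self.scores = [0.0] * n
        self.inc = 1.0
        self.decay = decay

    def bump(self, v):
        self.scores[v] += self.inc
        if self.scores[v] > 1e100:  # rescale everything to avoid overflow
            self.scores = [s * 1e-100 for s in self.scores]
            self.inc *= 1e-100

    def on_conflict(self):
        # Growing the increment is equivalent to decaying all scores,
        # but takes O(1) instead of O(n) time.
        self.inc /= self.decay

    def top(self):
        return max(range(len(self.scores)), key=self.scores.__getitem__)

act = Activity(4)
act.bump(1); act.on_conflict(); act.bump(2); act.bump(2)
print(act.top())  # -> 2 (recent bumps carry more weight)
```

A mutation operator can then bias its bit-flip probabilities towards high-activity vertices, which is the idea the proposed operators build on.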