Hasso-Plattner-Institut
Prof. Dr. Tobias Friedrich

Research Seminar (Winter Term 2018)


Description

A series of talks on current research and interesting topics in algorithm engineering and theoretical computer science. Everyone is welcome to attend the talks.

If you want to give a talk about a topic in algorithm engineering or theoretical computer science, please write an e-mail to Prof. Tobias Friedrich or to Ralf Rothenberger.

Announcements

For announcements of talks, subscribe to our mailing list.

Talks

Date Room Speaker Title
11.10. 11:00 H-E.52 Martin Schirneck

A recurring task in the design and profiling of relational data is the discovery of hidden dependencies between attributes. Enumerating those dependencies is equivalent to the transversal hypergraph problem. We devise a backtracking algorithm for the enumeration of inclusion-wise minimal hitting sets. It achieves polynomial delay on hypergraphs for which the size of the largest minimal transversal is bounded. The algorithm solves the extension problem for minimal hitting sets as a subroutine. Although this problem is NP-complete and W[3]-complete in general, we show that a careful implementation of the extension oracle can help avoid the worst case on hypergraphs arising in the profiling of real-world databases, leading to significant speed-ups in practice.

This is joint work with Thomas Bläsius, Tobias Friedrich, Julius Lischeid, and Kitty Meeks.
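
The polynomial-delay backtracking algorithm and its extension oracle are the subject of the talk itself; purely as an illustration of the objects involved, here is a minimal brute-force sketch in Python that enumerates inclusion-wise minimal hitting sets (all function names are made up for this example):

```python
from itertools import combinations

def is_hitting_set(candidate, hyperedges):
    """True if the candidate vertex set intersects every hyperedge."""
    return all(candidate & edge for edge in hyperedges)

def is_minimal_hitting_set(candidate, hyperedges):
    """A hitting set is minimal if removing any single vertex breaks it."""
    return (is_hitting_set(candidate, hyperedges)
            and all(not is_hitting_set(candidate - {v}, hyperedges)
                    for v in candidate))

def minimal_hitting_sets(vertices, hyperedges):
    """Enumerate all inclusion-wise minimal hitting sets by brute force (exponential)."""
    for k in range(len(vertices) + 1):
        for subset in combinations(sorted(vertices), k):
            candidate = set(subset)
            if is_minimal_hitting_set(candidate, hyperedges):
                yield candidate

# The hypergraph {{1, 2}, {2, 3}} has the minimal transversals {2} and {1, 3}.
print(list(minimal_hitting_sets({1, 2, 3}, [{1, 2}, {2, 3}])))
```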

14.11. 16:00 G3-E.15/16 Patrick Schäfer

A time series (TS) is a collection of values sequentially ordered in time. TS emerge in many scientific and commercial applications. One driving force behind their rising importance is the sharply increasing usage of sensors for automatic and high resolution monitoring in domains such as smart homes, machine surveillance, land cover mapping using satellite images, smart grids, weather observations, wind energy forecasting, mobility tracking, etc.

The main purpose of TS analytics is to extract meaningful knowledge based on the shape of the TS data, which implies exploiting temporal order. Though humans have some natural capacity to identify TS that are perceptually similar but not mathematically identical, this remains a complex problem for computers.

In this talk, I will provide an overview of the techniques applied for TS analytics, where approaches can broadly be divided into Whole Series, Shapelets, Dictionary (Bag-of-Patterns), Ensembles, and Deep-Learning. With a focus on TS classification, I will address how state-of-the-art approaches represent the fundamental shapes of a TS and how these features are used. The talk ends with an experimental evaluation of the state-of-the-art classification approaches.
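
As a concrete point of reference for the Whole Series family mentioned above, the following is a minimal sketch of the classic 1-nearest-neighbour classifier under the Euclidean distance; the state-of-the-art approaches discussed in the talk use far richer representations (shapelets, bags of patterns, ensembles), and all names here are illustrative:

```python
import math

def euclidean(a, b):
    """Whole-series Euclidean distance between two equal-length time series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def one_nn_classify(query, train_series, train_labels):
    """Assign the label of the closest training series to the query."""
    best = min(range(len(train_series)),
               key=lambda i: euclidean(query, train_series[i]))
    return train_labels[best]

# Toy example: two "shapes" -- a rising and a falling series.
train = [[0, 1, 2, 3], [3, 2, 1, 0]]
labels = ["up", "down"]
print(one_nn_classify([0.2, 1.1, 1.9, 3.2], train, labels))  # expected: "up"
```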

15.11. 15:00 A-2.1 Marianne Thieffry

Many real-world networks share common properties like power law degree distribution, small diameters, and large clustering coefficients. Graph models used in research try to mimic real-world networks by providing similar properties. Hyperbolic random graphs in particular have recently gained popularity in network research because they implement all mentioned properties. However, generators for hyperbolic random graphs only support a limited number of dimensions for the underlying geometry, resulting in tight links between average node degree and clustering coefficients. A generalization of hyperbolic random graphs are geometric inhomogeneous random graphs (GIRGs), proposed by Bringmann et al., who also present a sampling algorithm for GIRGs with an expected running time of O(n) that supports an arbitrary number of dimensions in the underlying geometry.

In this thesis, we explain GIRGs and the sampling algorithm as proposed by Bringmann et al. We present the first implementation of a graph generator with underlying geometry that supports an arbitrary number of dimensions. We suggest some enhancements to the sampling algorithm proposed by Bringmann et al. that improve the running time of the generator by 20%. We also theoretically analyze configurations for the sampling algorithm to give the user of our tool more direct control over the output. Finally, we use our implementation to empirically validate theoretical findings on GIRGs.
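
For intuition only, here is a naive quadratic-time sketch of a threshold-GIRG-style sampler; the connection rule and parameter choices below are simplified assumptions on my part, and the generator from the thesis implements the expected-linear-time algorithm of Bringmann et al. instead:

```python
import random

def torus_distance(p, q):
    """L_infinity distance on the d-dimensional unit torus."""
    return max(min(abs(a - b), 1 - abs(a - b)) for a, b in zip(p, q))

def naive_threshold_girg(n, d=2, tau=2.8, c=0.2, seed=42):
    """Sample a threshold-GIRG-like graph by checking all Theta(n^2) pairs."""
    rng = random.Random(seed)
    # Heavy-tailed vertex weights (Pareto-distributed; exponent is illustrative).
    weights = [rng.paretovariate(tau - 1) for _ in range(n)]
    total_weight = sum(weights)
    positions = [[rng.random() for _ in range(d)] for _ in range(n)]
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            threshold = c * (weights[u] * weights[v] / total_weight) ** (1 / d)
            if torus_distance(positions[u], positions[v]) <= threshold:
                edges.append((u, v))
    return edges

print(len(naive_threshold_girg(1000)))
```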

15.11. 16:00 A-2.1 Sven Ihde

Networks are a fundamental part of our society. The recent development of devices shows that more and more networks are experiencing a shift from wired to wireless ad-hoc networks. Such networks exist in numerous forms and stem from a multitude of different domains. By using the tools and explanatory power of Game Theory, we explore the creation and properties of such networks.

In this work we look at Wireless Network Creation Games, where selfish agents build an ad-hoc network. In contrast to most current research on this topic, which only tries to minimize the energy consumption of an agent, we also analyse the quality of an agent’s connection by creating a trade-off between energy consumption and experienced service quality. To this end, we define multiple games by additionally imposing a parameter similar to the hop distance in Internet networking, which is used to measure the real-time performance and stability of connections in a network. In these games we study Nash Equilibria, as well as the resulting dynamics created by the behaviour of selfish agents. We also provide an approximation algorithm that converges efficiently and provide empirical evidence that the resulting states of the Wireless Network Creation Game are close to optimal. Furthermore, we analyse the social welfare of these networks to illustrate how the selfish behaviour of agents influences the well-being of society.
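
The precise cost model of the talk is not reproduced here; the following is a purely hypothetical sketch of how an energy/service-quality trade-off for a single agent could be evaluated, with both the connectivity rule and the cost function chosen for illustration only:

```python
from collections import deque

def hop_distances(adj, source):
    """BFS hop distances from source in an undirected graph given as adjacency lists."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def agent_cost(agent, positions, radii, energy_exponent=2.0, beta=1.0):
    """Hypothetical trade-off cost: transmission energy plus total hop distance."""
    n = len(positions)
    def within(u, v):
        d = ((positions[u][0] - positions[v][0]) ** 2
             + (positions[u][1] - positions[v][1]) ** 2) ** 0.5
        # A link exists if both endpoints lie within each other's radius (an assumption).
        return d <= min(radii[u], radii[v])
    adj = {u: [v for v in range(n) if v != u and within(u, v)] for u in range(n)}
    dist = hop_distances(adj, agent)
    unreachable_penalty = sum(n for v in range(n) if v not in dist)
    return radii[agent] ** energy_exponent + beta * (sum(dist.values()) + unreachable_penalty)

# Three agents on a line; the middle agent relays between the outer two.
positions = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(agent_cost(0, positions, radii=[1.0, 1.0, 1.0]))  # 1.0 energy + 3 hops = 4.0
```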

23.11. 11:00 A-1.1 Katrin Casel

The very general problem of determining the quality of a given partial solution occurs in basically every algorithmic approach which computes solutions in some sense gradually. Pruning search trees, proving approximation guarantees, or proving the efficiency of enumeration strategies usually requires a suitable way to decide if a partial solution is a reasonable candidate to pursue. Consider for example the classical concept of minimal dominating sets for graphs. The task of finding a maximum cardinality minimal dominating set (or an approximation of it), as well as enumerating all minimal dominating sets, naturally leads to solving the following extension problem: Given a graph G=(V,E) and a subset P of V, does there exist a minimal dominating set S for G which contains P?
In an attempt to study the nature of such extension tasks, we propose a general, partial-order based framework to express a broad class of what we refer to as extension problems. In essence, we consider optimisation problems in NPO with an additionally specified set of presolutions (including the solutions) and a partial order on those. We consider a number of specific problems which can be expressed in this framework. Perhaps contradicting intuition, these problems tend to be NP-hard, even when the underlying optimisation problem can be solved in polynomial time. This raises the question of how fixing a presolution causes this increase in difficulty. In this regard, we study the parameterised complexity of extension problems with respect to parameters related to the presolution. We further discuss a relaxation of the extension constraint, asking only for a solution S which preserves as much of P as possible. These considerations yield some insight into the various difficult aspects of extension problems.
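
As a small illustration of the extension problem mentioned above (for minimal dominating sets), here is a brute-force sketch; it is exponential and only meant to make the definitions concrete, with all names chosen for this example:

```python
from itertools import combinations

def is_dominating(graph, candidate):
    """Every vertex is in the candidate set or adjacent to one of its members."""
    return all(v in candidate or any(u in candidate for u in graph[v]) for v in graph)

def is_minimal_dominating(graph, candidate):
    """Dominating, and removing any single vertex destroys domination."""
    return (is_dominating(graph, candidate)
            and all(not is_dominating(graph, candidate - {v}) for v in candidate))

def extends_to_minimal_dominating_set(graph, presolution):
    """Brute force over all supersets of the presolution P (exponential)."""
    rest = [v for v in graph if v not in presolution]
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            if is_minimal_dominating(graph, set(presolution) | set(extra)):
                return True
    return False

# Path a-b-c: P = {a, c} extends to the minimal dominating set {a, c},
# but P = {a, b} does not (b alone already dominates everything a dominates).
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(extends_to_minimal_dominating_set(path, {"a", "c"}))  # True
print(extends_to_minimal_dominating_set(path, {"a", "b"}))  # False
```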

14.12. 11:00 A-1.1 Thomas Sauerwald
Coalescing random walks is a fundamental stochastic process, where a set of particles perform independent discrete-time random walks on an undirected graph. Whenever two or more particles meet at a given node, they merge and continue as a single random walk. The coalescence time is defined as the expected time until only one particle remains, starting from one particle at every node. Despite recent progress, the coalescence time for graphs such as binary trees, d-dimensional tori, hypercubes and, more generally, vertex-transitive graphs remains unresolved. We provide a powerful toolkit that results in tight bounds for various topologies including the aforementioned ones. The meeting time is defined as the worst-case expected time required for two random walks to arrive at the same node at the same time. As a general result, we establish that for graphs whose meeting time is only marginally larger than the mixing time (a factor of log^2 n), the coalescence time of n random walks equals the meeting time up to constant factors. This upper bound is complemented by the construction of a graph family demonstrating that this result is the best possible up to constant factors. For almost-regular graphs, we bound the coalescence time by the hitting time, resolving the discrete-time variant of a conjecture by Aldous for this class of graphs. Finally, we prove that for any graph the coalescence time is bounded by O(n^3) (which is tight for the Barbell graph); surprisingly, even such a basic question about the coalescence time was not answered before this work. By duality, our results give bounds on the voter model and therefore give bounds on the consensus time in arbitrary undirected graphs. We also establish a new bound on the hitting time and cover time of regular graphs, improving and tightening previous results by Broder and Karlin, as well as those by Aldous and Fill.
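
For readers who want to get a feel for the process, the following is a plain Monte Carlo sketch of coalescing random walks; it uses lazy walks to avoid parity issues on bipartite graphs and has nothing to do with the analytical toolkit of the talk:

```python
import random

def coalescence_time(adj, seed=0):
    """Simulate coalescing lazy random walks, starting with one particle per node.

    In each round every particle stays put with probability 1/2 and otherwise
    moves to a uniformly random neighbour; particles meeting at the same node
    merge. Returns the number of rounds until a single particle remains."""
    rng = random.Random(seed)
    particles = set(adj)  # one particle at every node
    rounds = 0
    while len(particles) > 1:
        particles = {v if rng.random() < 0.5 else rng.choice(adj[v]) for v in particles}
        rounds += 1
    return rounds

# Example: a cycle on 16 nodes; average over a few runs for a rough estimate.
n = 16
cycle = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
print(sum(coalescence_time(cycle, seed=s) for s in range(20)) / 20)
```
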
07.01. 15:15 H-2.58 Manuel Penschuck
Random graph models, originally conceived to study the structure of networks and the emergence of their properties, have become an indispensable tool for experimental algorithmics. Amongst them, hyperbolic random graphs form a well-accepted family, yielding realistic complex networks while being both mathematically and algorithmically tractable. We introduce two generators, MemGen and HyperGen, for the \(G_{\alpha,C}(n)\) model, which distributes n random points within a hyperbolic plane and produces \(m=n\cdot d/2\) undirected edges for all point pairs close by; the expected average degree d and the exponent \(2\cdot \alpha+1\) of the power-law degree distribution are controlled by \(\alpha>1/2\) and \(C\). Both algorithms emit a stream of edges which they do not have to store. MemGen keeps \(\mathcal{O}(n)\) items in internal memory and has a time complexity of \(\mathcal{O}(n\cdot\log(\log n) + m)\), which is optimal for networks with an average degree of \(d=\Omega(\log(\log n))\). For realistic values of \(d=o(n / \log^{1/\alpha}(n))\), HyperGen reduces the memory footprint to \(\mathcal{O}([n^{1-\alpha}\cdot d^{\alpha} + \log(n)]\cdot \log(n))\). In an experimental evaluation, we compare HyperGen with four generators, among which it is consistently the fastest. For small \(d=10\) we measure a speed-up of 4.0 compared to the fastest publicly available generator, increasing to 29.6 for \(d=1000\). On commodity hardware, HyperGen produces \(3.7\cdot 10^8\) edges per second for graphs with \(10^6 < m < 10^{12}\) and \(\alpha=1\), utilising less than 600 MB of RAM. We demonstrate nearly linear scalability on an Intel Xeon Phi.
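
A naive, quadratic-time reference sampler for the threshold hyperbolic random graph model might look as follows; the mapping between the parameters below and the average degree d of the \(G_{\alpha,C}(n)\) model is simplified, and MemGen/HyperGen are of course far more sophisticated:

```python
import math
import random

def sample_hyperbolic_graph(n, alpha=1.0, C=0.0, seed=0):
    """Naive Theta(n^2) sampler for the threshold hyperbolic random graph model.

    Points are placed in a hyperbolic disk of radius R = 2 ln n + C with radial
    density proportional to sinh(alpha * r); two points are joined if their
    hyperbolic distance is at most R."""
    rng = random.Random(seed)
    R = 2.0 * math.log(n) + C
    radii, angles = [], []
    for _ in range(n):
        u = rng.random()
        # Inverse-CDF sampling of the radial coordinate.
        radii.append(math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha)
        angles.append(rng.uniform(0.0, 2.0 * math.pi))
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            delta = math.pi - abs(math.pi - abs(angles[i] - angles[j]))
            cosh_d = (math.cosh(radii[i]) * math.cosh(radii[j])
                      - math.sinh(radii[i]) * math.sinh(radii[j]) * math.cos(delta))
            if math.acosh(max(1.0, cosh_d)) <= R:
                edges.append((i, j))
    return edges

print(len(sample_hyperbolic_graph(2000)))
```
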
21.01. 11:00 H-E.52 Francesco Quinzan
We investigate the performance of a deterministic GREEDY algorithm for the problem of maximizing functions under a partition matroid constraint. We consider non-monotone submodular functions and monotone subadditive functions. Even though constrained maximization problems of monotone submodular functions have been extensively studied, little is known about greedy maximization of non-monotone submodular functions or monotone subadditive functions. We give approximation guarantees for GREEDY on these problems, in terms of the curvature. We find that this simple heuristic yields a strong approximation guarantee on a broad class of functions. We discuss the applicability of our results to three real-world problems: maximizing the determinant function of a positive semidefinite matrix (and related problems such as the maximum entropy sampling problem), the constrained maximum cut problem on directed graphs, and combinatorial auction games. We conclude that GREEDY is well-suited to approach these problems. Overall, we present evidence to support the idea that, when dealing with constrained maximization problems with bounded curvature, one need not search for (approximate) monotonicity to get good approximate solutions.
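
A minimal sketch of the deterministic GREEDY heuristic under a partition matroid constraint, shown here with a simple coverage-style objective; the curvature-based guarantees of the talk are not reflected in the code, and all names are illustrative:

```python
def greedy_partition_matroid(ground_set, blocks, capacities, f):
    """Simple GREEDY for max f(S) subject to a partition matroid constraint.

    blocks maps each element to its block; capacities limits how many elements
    may be picked per block; f is a set function evaluated on frozensets.
    This variant stops once no feasible element has positive marginal gain."""
    chosen = set()
    used = {b: 0 for b in capacities}
    while True:
        feasible = [e for e in ground_set
                    if e not in chosen and used[blocks[e]] < capacities[blocks[e]]]
        if not feasible:
            break
        best = max(feasible,
                   key=lambda e: f(frozenset(chosen | {e})) - f(frozenset(chosen)))
        if f(frozenset(chosen | {best})) - f(frozenset(chosen)) <= 0:
            break
        chosen.add(best)
        used[blocks[best]] += 1
    return chosen

# Toy example: a monotone submodular coverage objective.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}, "d": {5}}
f = lambda S: len(set().union(*(coverage[e] for e in S)))
blocks = {"a": 0, "b": 0, "c": 1, "d": 1}
print(greedy_partition_matroid(set(coverage), blocks, {0: 1, 1: 1}, f))
```
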
01.02. 11:00 A-1.1 Anna Melnichenko

Network Creation Games are a well-known approach for explaining and analyzing the structure, quality and dynamics of real-world networks like the Internet and other infrastructure networks which evolved via the interaction of selfish agents without a central authority. In these games, selfish agents, which correspond to nodes in a network, strategically buy incident edges to improve their centrality. However, past research on these games has only considered the creation of networks with unit-weight edges. In practice, e.g. when constructing a fiber-optic network, the choice of which nodes to connect and also the induced price for a link crucially depend on the distance between the involved nodes, and such settings can be modeled via edge-weighted graphs.

We incorporate arbitrary edge weights by generalizing the well-known model by Fabrikant et al. to edge-weighted host graphs and focus on the geometric setting where the weights are induced by the distances in some metric space. In stark contrast to the state-of-the-art for the unit-weight version, where the Price of Anarchy is conjectured to be constant and where resolving this is a major open problem, we prove a tight non-constant bound on the Price of Anarchy for the metric version and a slightly weaker upper bound for the non-metric case. Moreover, we analyze the existence of equilibria, the computational hardness and the game dynamics for several natural metrics. The model we propose can be seen as the game-theoretic analogue of a variant of the classical Network Design Problem. Thus, low-cost equilibria of our game correspond to decentralized and stable approximations of the optimum network design.
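
As a rough illustration of the kind of cost an agent might minimize in such a metric setting, here is a hypothetical sketch following the usual sum-of-distances style of Fabrikant et al.; the exact cost definition used in the talk may differ:

```python
import math

def metric_ncg_cost(agent, points, bought, alpha=1.0):
    """Hypothetical cost of one agent in a metric network creation game.

    points: coordinates of all agents (the metric is Euclidean distance);
    bought[u]: the set of agents that u buys an edge to;
    cost(agent) = alpha * (total weight of edges bought by the agent)
                  + sum of shortest-path distances to all other agents."""
    n = len(points)
    def w(u, v):
        return math.dist(points[u], points[v])
    # The network contains every bought edge, usable by both endpoints.
    dist = [[math.inf] * n for _ in range(n)]
    for u in range(n):
        dist[u][u] = 0.0
        for v in bought[u]:
            dist[u][v] = dist[v][u] = w(u, v)
    for k in range(n):            # Floyd-Warshall shortest paths
        for i in range(n):
            for j in range(n):
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    building = alpha * sum(w(agent, v) for v in bought[agent])
    usage = sum(dist[agent][v] for v in range(n))
    return building + usage

# Three agents on a line; agent 0 buys the edge to agent 1, agent 1 the edge to agent 2.
points = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(metric_ncg_cost(0, points, bought={0: {1}, 1: {2}, 2: set()}))  # 1.0 + 3.0 = 4.0
```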

01.02. 13:30 A-1.1 Algorithm Engineering group
Each member of the Algorithm Engineering group selected one paper among those accepted at the ACM-SIAM Symposium on Discrete Algorithms (SODA19) for a five-minute presentation to the audience.
08.02. 13:30 A-1.1 Timo Kötzing
In my first year at university I learned about Taylor approximations, that is, about approximating an (infinitely often differentiable) function near a given point. I had no clue what that could be good for and soon forgot the details -- until now. Benjamin Doerr brought to my attention a nifty way of using them as an intuition for what could be good potential functions to estimate stochastic drift. It turns out that Taylor approximations can do not only that: they can also be used to make estimates of potential functions rigorous and to ease the computation of stochastic drift. In my talk I want to showcase this tool, which should probably be in the toolbox of every mathematician -- and don't worry, the boring part of applying this tool can be carried out by Wolfram Alpha.
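
To spell out the connection hinted at in the abstract (this is my reading of it, not the talk's formal statement): the second-order Taylor approximation
\[
g(x+\delta) \;=\; g(x) + g'(x)\,\delta + \tfrac{1}{2}\,g''(x)\,\delta^2 + \mathcal{O}(\delta^3)
\]
suggests estimating the drift of a potential \(g(X_t)\) of a stochastic process \(X_t\) by
\[
\mathbb{E}\bigl[g(X_{t+1})-g(X_t) \mid X_t=x\bigr] \;\approx\; g'(x)\,\mathbb{E}[\Delta_t \mid X_t=x] \;+\; \tfrac{1}{2}\,g''(x)\,\mathbb{E}[\Delta_t^2 \mid X_t=x],
\]
where \(\Delta_t = X_{t+1}-X_t\); making such estimates rigorous and easy to compute is what the talk is about.
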
21.02. 13:30 A-1.1 Davide Bilò
TBA
21.02. 16:00 Hs 1 Oliver Brock
Intelligence is of great importance in our society. But intelligence also remains mysterious and is notoriously difficult to reproduce in technological artifacts. So far, every attempt to produce an intelligent machine—for example: Deep Blue, Watson, or AlphaGo—has brought about important technological progress but has failed to significantly advance our understanding of biological intelligence. Why does there seem to be such a wide gap between artificial intelligence and biological intelligence? Maybe the artificial intelligences are missing something. Maybe they are missing a body? And maybe therefore our next attempt to build a truly intelligent machine should take the form of a robot? I will argue that this is a good idea and show attempts to move towards this goal. However, rather than creating an apocalyptic robotic overlord who will end all of humanity, I will advocate to first go after a much more humble (and maybe more realistic) objective, namely, to attempt to replicate the surprising abilities of cockatoos.
22.02. 13:30 A-2.1 Jens Fischer
Segregation based on race, political and religious views, or social norms is a recurring phenomenon throughout history. In 1969, Thomas Schelling proposed a model for this phenomenon that accounts for the preferences of each individual regarding their neighbors. He showed that his model exhibits segregation both for agents with heterophobic and for agents with heterophilic preferences. In this talk, I am going to present a model based on random discrete-time dynamics of graphs and compare it to Schelling's model. Our model, inspired by Henry, Prałat and Zhang (2011), also exhibits segregation in almost surely finite time, and we obtain upper bounds on the expected time to segregation.
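
For illustration only, here is a toy swap-based variant of Schelling's model on a ring; it is not the random graph dynamics analysed in the talk, and all parameters and names are made up:

```python
import random

def schelling_ring(n=100, window=2, tolerance=0.5, max_rounds=10_000, seed=0):
    """Toy Schelling-style dynamics on a ring of n agents of two types.

    An agent is unhappy if the fraction of same-type agents among its `window`
    neighbours on each side is below `tolerance`; in every round two unhappy
    agents of different types swap positions. Returns the number of rounds
    until no agent is unhappy (capped at max_rounds)."""
    rng = random.Random(seed)
    types = [rng.randint(0, 1) for _ in range(n)]

    def unhappy():
        result = []
        for i in range(n):
            neigh = [types[(i + d) % n] for d in range(-window, window + 1) if d != 0]
            if neigh.count(types[i]) / len(neigh) < tolerance:
                result.append(i)
        return result

    for rounds in range(max_rounds):
        u = unhappy()
        if not u:
            return rounds
        a = rng.choice(u)
        others = [i for i in u if types[i] != types[a]]
        if not others:
            return rounds  # remaining unhappy agents are all of one type
        b = rng.choice(others)
        types[a], types[b] = types[b], types[a]
    return max_rounds

print(schelling_ring())
```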