11.10. 11:00  HE.52  Martin Schirneck  Efficiently Enumerating Hitting Sets of Hypergraphs Arising in Data Profiling
A recurring task in the design and profiling of relational data is the discovery of hidden dependencies between attributes.
Enumerating those dependencies is equivalent to the transversal hypergraph problem.
We devise a backtracking algorithm for the enumeration of inclusion-wise minimal hitting sets.
It achieves polynomial delay on hypergraphs for which the size of the largest minimal transversal is bounded.
The algorithm solves the extension problem for minimal hitting sets as a subroutine.
Although this problem is NP-complete and W[3]-complete in general,
we show that a careful implementation of the extension oracle can help avoid the worst case on hypergraphs arising in the profiling of real-world databases,
leading to significant speedups in practice.
This is joint work with Thomas Bläsius, Tobias Friedrich, Julius Lischeid, and Kitty Meeks.
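For intuition, the branching scheme behind such a backtracking enumeration can be sketched in a few lines of Python. This is an illustrative brute force of my own, not the talk's algorithm: it branches on the vertices of a still-uncovered hyperedge and filters minimality naively at the leaves, whereas the talk prunes via the extension oracle to achieve polynomial delay.

```python
def enumerate_minimal_hitting_sets(edges):
    """Enumerate all inclusion-wise minimal hitting sets of the hypergraph
    whose hyperedges are given as a list of vertex sets."""
    edges = [frozenset(e) for e in edges]
    results = set()

    def hits_all(s):
        return all(s & e for e in edges)

    def is_minimal(s):
        return hits_all(s) and all(not hits_all(s - {v}) for v in s)

    def backtrack(partial, uncovered):
        if not uncovered:
            # candidate hits every edge; keep it only if it is minimal
            if is_minimal(partial):
                results.add(partial)
            return
        # branch on each vertex of one still-uncovered hyperedge
        for v in uncovered[0]:
            backtrack(partial | {v}, [e for e in uncovered[1:] if v not in e])

    backtrack(frozenset(), edges)
    return results
```

Every minimal hitting set survives some branch (each of its vertices has an uncovered edge it hits), and the final minimality check discards the rest; it is exactly this check that the extension oracle replaces during the search.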

14.11. 16:00  G3E.15/16  Patrick Schäfer  Time Series Analytics
A time series (TS) is a collection of values sequentially ordered in time.
TS emerge in many scientific and commercial applications.
One driving force behind their rising importance is the sharply increasing usage of sensors for automatic and high-resolution monitoring in domains such as smart homes,
machine surveillance, land cover mapping using satellite images, smart grids, weather observations, wind energy forecasting, mobility tracking, etc.
The main purpose of TS analytics is to extract meaningful knowledge based on the shape of the TS data, which implies exploiting temporal order.
Though humans have a natural capacity to identify perceptually similar but not mathematically identical TS,
this remains a complex problem for computers.
In this talk, I will provide an overview of the techniques applied for TS analytics, where approaches can broadly be divided into
Whole Series, Shapelets, Dictionary (Bag-of-Patterns), Ensembles, and Deep Learning.
With a focus on TS classification, I will address how state-of-the-art approaches represent the fundamental shapes of a TS and
how these features are used. The talk ends with an experimental evaluation of the state-of-the-art classification approaches.
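As a point of reference for the Whole Series category, the classic 1-NN baseline can be sketched as follows (an illustrative Python snippet of my own, not material from the talk):

```python
import math

def one_nn_classify(train_X, train_y, query):
    """Whole-series 1-NN baseline: label a query time series by the class of
    its nearest training series under the point-wise Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(range(len(train_X)), key=lambda i: dist(train_X[i], query))
    return train_y[best]
```

This lock-step comparison is exactly the "mathematically identical" notion the abstract contrasts with perceptual similarity; the other families (Shapelets, Dictionary, etc.) exist because this distance is brittle under shifts and warping.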

15.11. 15:00  A2.1  Marianne Thieffry  Efficient Generation of Geometric Inhomogeneous Random Graphs
Many real-world networks share common properties like power-law degree distributions, small diameters, and large clustering coefficients. Graph models used in research try to mimic real-world networks by providing similar properties. Hyperbolic random graphs in particular have recently gained popularity in network research because they exhibit all of the mentioned properties. However, generators for hyperbolic random graphs only support a limited number of dimensions for the underlying geometry, resulting in tight links between average node degree and clustering coefficients. A generalization of hyperbolic random graphs are geometric inhomogeneous random graphs (GIRGs), proposed by Bringmann et al., who also present a sampling algorithm for GIRGs with an expected running time of O(n) that supports an arbitrary number of dimensions in the underlying geometry.
In this thesis, we explain GIRGs and the sampling algorithm proposed by Bringmann et al. We present the first implementation of a graph generator with underlying geometry that supports an arbitrary number of dimensions. We suggest some enhancements to the sampling algorithm of Bringmann et al. that improve the runtime of the generator by 20%. We also theoretically analyze configurations of the sampling algorithm to give the user of our tool more direct control over the output. Finally, we use our implementation to empirically validate theoretical findings on GIRGs.
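For intuition, the threshold variant of the GIRG model can be sampled naively in quadratic time. The sketch below is my own illustration of the standard threshold connection rule (vertices u, v are joined iff their torus distance is at most \(c\cdot(w_u w_v / W)^{1/d}\)); it is emphatically not the expected-linear-time algorithm of Bringmann et al. that the thesis implements.

```python
import random

def sample_threshold_girg(weights, dim=1, c=1.0, seed=0):
    """Naive O(n^2) sampler for threshold GIRGs: place each vertex uniformly
    on the dim-dimensional unit torus and connect u, v iff their torus
    distance is at most c * (w_u * w_v / W)**(1/dim)."""
    rng = random.Random(seed)
    n, W = len(weights), sum(weights)
    pos = [[rng.random() for _ in range(dim)] for _ in range(n)]

    def torus_dist(p, q):
        # L-infinity distance with wrap-around on the unit torus
        return max(min(abs(a - b), 1 - abs(a - b)) for a, b in zip(p, q))

    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            threshold = c * (weights[u] * weights[v] / W) ** (1 / dim)
            if torus_dist(pos[u], pos[v]) <= threshold:
                edges.append((u, v))
    return edges
```

Comparing output statistics of such a quadratic reference sampler against a fast generator is also a natural way to validate an implementation empirically.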

15.11. 16:00  A2.1  Sven Ihde  Wireless Network Creation Games
Networks are a fundamental part of our society. The recent development of devices shows that more and more networks are experiencing a shift from wired to wireless ad-hoc networks. Such networks exist in numerous forms and come from a multitude of different domains. By using the tools and explanatory power of Game Theory, we explore the creation and properties of such networks.
In this work we look at Wireless Network Creation Games, where selfish agents build an ad-hoc network. In contrast to most current research on this topic, which only tries to minimize the energy consumption of an agent, we also analyse the quality of an agent’s connection by creating a trade-off between energy consumption and experienced service quality. To this end, we define multiple games by additionally imposing a parameter similar to the hop distance in Internet networking, which is used to measure the real-time performance and stability of connections in a network. In these games we study Nash Equilibria, as well as the resulting dynamics created by the behaviour of selfish agents. We also provide an approximation algorithm that converges efficiently and provide empirical evidence that the resulting states of the Wireless Network Creation Game are close to optimal. Furthermore, we analyse the social welfare of these networks to illustrate how the selfish behaviour of agents influences the well-being of society.

23.11. 11:00  A1.1  Katrin Casel  The Minimal Extension of a Partial Solution
The very general problem of determining the quality of a given partial solution occurs basically in every algorithmic approach that computes solutions in some sense gradually. Pruning search trees, proving approximation guarantees, or establishing the efficiency of enumeration strategies usually requires a suitable way to decide whether a partial solution is a reasonable candidate to pursue. Consider for example the classical concept of minimal dominating sets in graphs. The task of finding a maximum-cardinality minimal dominating set (or an approximation of it), as well as enumerating all minimal dominating sets, naturally leads to solving the following extension problem: Given a graph G=(V,E) and a subset P of V, does there exist a minimal dominating set S for G that contains P?
In an attempt to study the nature of such extension tasks, we propose a general, partial-order-based framework to express a broad class of what we refer to as extension problems. In essence, we consider optimisation problems in NPO with an additionally specified set of pre-solutions (including the solutions) and a partial order on those. We consider a number of specific problems that can be expressed in this framework. Possibly contradicting intuition, these problems tend to be NP-hard, even when the underlying optimisation problem can be solved in polynomial time. This raises the question of how fixing a pre-solution causes this increase in difficulty. In this regard, we study the parameterised complexity of extension problems with respect to parameters related to the pre-solution. We further discuss relaxations of the extension constraint, asking only for a solution S which preserves as much of P as possible. These considerations yield some insight into the various difficult aspects of extension problems.
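The dominating-set extension question from the abstract can be made concrete with a brute-force check (my own illustrative Python; since the problem is NP-hard, this simply enumerates all vertex subsets and is only usable on tiny graphs):

```python
from itertools import chain, combinations

def extends_to_minimal_dominating_set(adj, P):
    """Does the graph (adjacency-list dict) have a minimal dominating set S
    with P as a subset?  Brute force over all vertex subsets."""
    V = set(adj)
    P = set(P)

    def dominates(S):
        # every vertex is in S or has a neighbour in S
        return all(v in S or S & set(adj[v]) for v in V)

    def minimal(S):
        return dominates(S) and all(not dominates(S - {v}) for v in S)

    verts = list(V)
    subsets = chain.from_iterable(combinations(verts, r)
                                  for r in range(len(verts) + 1))
    return any(P <= set(S) and minimal(set(S)) for S in subsets)
```

On the path 0-1-2, for instance, P={0} extends (via the minimal dominating set {0,2}) while P={0,1} does not, since any superset containing both 0 and 1 stops being minimal.
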

14.12. 11:00  A1.1  Thomas Sauerwald  On Coalescence Time in Graphs – When is Coalescing as Fast as Meeting?
Coalescing random walks is a fundamental stochastic process, where a set of particles perform independent discrete-time random walks on an undirected graph. Whenever two or more particles meet at a given node, they merge and continue as a single random walk. The coalescence time is defined as the expected time until only one particle remains, starting from one particle at every node. Despite recent progress, the coalescence time for graphs such as binary trees, d-dimensional tori, hypercubes and, more generally, vertex-transitive graphs remains unresolved. We provide a powerful toolkit that results in tight bounds for various topologies including the aforementioned ones. The meeting time is defined as the worst-case expected time required for two random walks to arrive at the same node at the same time. As a general result, we establish that for graphs whose meeting time is only marginally larger than the mixing time (a factor of log^2 n), the coalescence time of n random walks equals the meeting time up to constant factors. This upper bound is complemented by the construction of a graph family demonstrating that this result is best possible up to constant factors. For almost-regular graphs, we bound the coalescence time by the hitting time, resolving the discrete-time variant of a conjecture by Aldous for this class of graphs. Finally, we prove that for any graph the coalescence time is bounded by O(n^3) (which is tight for the Barbell graph); surprisingly, even such a basic question about the coalescence time was not answered before this work. By duality, our results give bounds on the voter model and therefore on the consensus time in arbitrary undirected graphs. We also establish a new bound on the hitting time and cover time of regular graphs, improving and tightening previous results by Broder and Karlin, as well as those by Aldous and Fill.
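The process itself is easy to simulate, which helps build intuition for the quantities in the abstract. The sketch below is my own experimentation aid, unrelated to the talk's analytical toolkit; it uses lazy walks (stay put with probability 1/2) to sidestep parity issues on bipartite graphs, where synchronous non-lazy particles could swap forever.

```python
import random

def coalescence_time(adj, seed=0):
    """Simulate coalescing lazy random walks on an undirected graph given as
    an adjacency-list dict: start one particle per node; in each step every
    particle stays with probability 1/2 or moves to a uniform random
    neighbour, and particles occupying the same node merge.  Returns the
    number of steps until a single particle remains."""
    rng = random.Random(seed)
    positions = set(adj)  # one particle per node
    steps = 0
    while len(positions) > 1:
        positions = {v if rng.random() < 0.5 else rng.choice(adj[v])
                     for v in positions}  # move, then merge via the set
        steps += 1
    return steps
```

Averaging this over many seeds gives an empirical estimate of the (expected) coalescence time for small topologies such as tori or hypercubes.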

07.01. 15:15  H2.58  Manuel Penschuck  Generating Practical Random Hyperbolic Graphs in Near-Linear Time and with Sub-Linear Memory
Random graph models, originally conceived to study the structure of networks and the emergence of their properties, have become an indispensable tool for
experimental algorithmics. Amongst them, hyperbolic random graphs form a well-accepted family, yielding realistic complex networks while being both mathematically
and algorithmically tractable. We introduce two generators, MemGen and HyperGen, for the \(G_{\alpha,C}(n)\) model, which distributes n random points within a hyperbolic
plane and produces \(m=n\cdot d/2\) undirected edges for all point pairs close by; the expected average degree d and exponent \(2\cdot \alpha+1\) of the power-law degree distribution
are controlled by \(\alpha>1/2\) and \(C\). Both algorithms emit a stream of edges which they do not have to store. MemGen keeps \(\mathcal{O}(n)\) items in internal memory and has a
time complexity of \(\mathcal{O}(n\cdot\log(\log n) + m)\), which is optimal for networks with an average degree of \(d=\Omega(\log(\log n))\). For realistic values of \(d=o(n / \log^{1/\alpha}(n))\),
HyperGen reduces the memory footprint to \(\mathcal{O}([n^{1-\alpha}\cdot d^{\alpha} + \log(n)]\cdot \log(n))\). In an experimental evaluation, we compare HyperGen with four generators, among which
it is consistently the fastest. For small \(d=10\) we measure a speedup of 4.0 compared to the fastest publicly available generator, increasing to 29.6 for \(d=1000\).
On commodity hardware, HyperGen produces \(3.7\cdot 10^{8}\) edges per second for graphs with \(10^{6} < m < 10^{12}\) and \(\alpha=1\), utilising less than 600 MB of RAM. We demonstrate nearly
linear scalability on an Intel Xeon Phi.

21.01. 11:00  HE.52  Francesco Quinzan  Greedy Maximization of Functions with Bounded Curvature Under Partition Matroid Constraints
We investigate the performance of a deterministic GREEDY algorithm for the problem of maximizing functions under a partition matroid constraint.
We consider non-monotone submodular functions and monotone subadditive functions. Even though constrained maximization problems of monotone submodular functions
have been extensively studied, little is known about greedy maximization of non-monotone submodular functions or monotone subadditive functions.
We give approximation guarantees for GREEDY on these problems, in terms of the curvature. We find that this simple heuristic yields a strong approximation guarantee on
a broad class of functions. We discuss the applicability of our results to three real-world problems: maximizing the determinant function of a positive semidefinite
matrix (and related problems such as the maximum entropy sampling problem); the constrained maximum cut problem on directed graphs; and combinatorial auction games.
We conclude that GREEDY is well-suited to approach these problems. Overall, we present evidence to support the idea that, when dealing with constrained maximization
problems with bounded curvature, one need not search for (approximate) monotonicity to get good approximate solutions.
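The deterministic GREEDY heuristic under a partition matroid constraint can be sketched as follows (my own illustrative Python; the data representation and names are assumptions, and the approximation guarantees discussed in the talk depend on the curvature of f):

```python
def greedy_partition_matroid(parts, capacities, f):
    """Deterministic GREEDY for maximizing a set function f under a
    partition matroid: `parts` maps each part name to its elements and
    `capacities` limits how many elements may be picked per part.
    Repeatedly add the feasible element with the largest marginal gain,
    stopping when no element improves f (relevant for non-monotone f)."""
    chosen = set()
    used = {p: 0 for p in parts}
    part_of = {e: p for p, elems in parts.items() for e in elems}
    while True:
        feasible = [e for e, p in part_of.items()
                    if e not in chosen and used[p] < capacities[p]]
        if not feasible:
            return chosen
        best = max(feasible, key=lambda e: f(chosen | {e}) - f(chosen))
        if f(chosen | {best}) - f(chosen) <= 0:
            return chosen
        chosen.add(best)
        used[part_of[best]] += 1
```

For a modular f (curvature 0) this is optimal; the interesting regime of the talk is how the guarantee degrades as the curvature grows.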

01.02. 11:00  A1.1  Anna Melnichenko  Geometric Network Creation Games
Network Creation Games are a well-known approach for explaining and analyzing the structure, quality and dynamics of real-world networks like the Internet and other infrastructure networks which evolved via the interaction of selfish agents without a central authority. In these games selfish agents, which correspond to nodes in a network, strategically buy incident edges to improve their centrality. However, past research on these games has only considered the creation of networks with unit-weight edges. In practice, e.g. when constructing a fiber-optic network, the choice of which nodes to connect and also the induced price for a link crucially depends on the distance between the involved nodes, and such settings can be modeled via edge-weighted graphs.
We incorporate arbitrary edge weights by generalizing the wellknown model by Fabrikant et al. to edgeweighted host graphs and focus on the geometric setting where the weights are induced by the distances in some metric space.
In stark contrast to the state of the art for the unit-weight version, where the Price of Anarchy is conjectured to be constant and where resolving this is a major open problem, we prove a tight non-constant bound on the Price of Anarchy for the metric version and a slightly weaker upper bound for the non-metric case. Moreover, we analyze the existence of equilibria, the computational hardness and the game dynamics for several natural metrics. The model we propose can be seen as the game-theoretic analogue of a variant of the classical Network Design Problem. Thus, low-cost equilibria of our game correspond to decentralized and stable approximations of the optimum network design.

01.02. 13:30  A1.1  Algorithm Engineering group  SODA talks
Each member of the Algorithm Engineering group selected one paper among those accepted at the ACM-SIAM Symposium on Discrete Algorithms (SODA 19) for a five-minute presentation to the audience.

08.02. 13:30  A1.1  Timo Kötzing  Taylor Approximation, Potential Functions and Stochastic Drift
In my first year at the university I learned about Taylor approximations, that is, about approximating an (infinitely often differentiable) function near a given point. I had no clue what that could be good for and soon forgot the details – until now. Benjamin Doerr brought to my attention a nifty way of using it as an intuition for what could be good potential functions to estimate stochastic drift.
It turns out that Taylor approximations can not only do that: they can also be used to make estimates of potential functions rigorous and to ease the computation of stochastic drift.
In my talk I want to showcase this tool, which should probably be in the toolbox of every mathematician – and don't worry, the boring part of applying this tool can be carried out by Wolfram Alpha.
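For context, the core trick can be written in one line: if \(g\) is a candidate potential function and \(\Delta = X_{t+1} - X_t\) the one-step change of the process, a second-order Taylor expansion of \(g\) around the current state \(x\) suggests

```latex
\mathbb{E}\bigl[g(X_{t+1}) - g(X_t) \bigm| X_t = x\bigr]
  \approx g'(x)\,\mathbb{E}[\Delta \mid X_t = x]
  + \tfrac{1}{2}\,g''(x)\,\mathbb{E}\bigl[\Delta^2 \bigm| X_t = x\bigr].
```

If this drift can be bounded below by some \(\delta > 0\) while the process has not reached its target, standard additive drift theorems turn the estimate into a bound of \(g(X_0)/\delta\) on the expected hitting time. (This sketch is my reading of the announced idea, not the talk's own derivation.)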

21.02. 13:30  A1.1  Davide Bilò  Almost Optimal Algorithms for Diameter-Optimally Augmenting Trees
We consider the problem of augmenting an n-vertex tree with one shortcut in order to minimize the diameter of the resulting graph.
The tree is embedded in an unknown space and we have access to an oracle that, when queried on a pair of vertices u and v, reports the
weight of the shortcut \((u, v)\) in constant time. Previously, the problem was solved in \(\mathcal{O}(n^2 \log^3 n)\) time for
general weights [Oh and Ahn, ISAAC 2016], in \(\mathcal{O}(n^2 \log n)\) time for trees embedded in a metric space [Große et al., arXiv:1607.05547],
and in \(\mathcal{O}(n \log n)\) time for paths embedded in a metric space [Wang, WADS 2017]. Furthermore, a \((1 + \varepsilon)\)-approximation algorithm running
in \(\mathcal{O}(n + 1/\varepsilon^3)\) time has been designed for paths embedded in \(\mathbb{R}^d\), for constant values of \(d\) [Große et al., ICALP 2015].
The contribution of this paper is twofold: we address the problem for trees (not only paths) and we also improve upon all known results.
More precisely, we design a time-optimal \(\mathcal{O}(n^2)\) time algorithm for general weights. Moreover, for trees embedded in a metric space,
we design (i) an exact \(\mathcal{O}(n \log n)\) time algorithm and (ii) a \((1 + \varepsilon)\)-approximation algorithm that runs in
\(\mathcal{O}(n + \varepsilon^{-1} \log(\varepsilon^{-1}))\) time.
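To make the setting concrete, here is a brute-force baseline of my own (far slower than any of the algorithms above): it tries every candidate shortcut, recomputes the diameter of the augmented graph with Floyd-Warshall, and returns the best one. The `weight` callable plays the role of the constant-time oracle from the abstract.

```python
import itertools

def best_shortcut(n, tree_edges, weight):
    """Try every shortcut (u, v) with weight(u, v) added to the tree given
    as (u, v, w) triples; return the shortcut minimizing the diameter of
    the augmented graph, together with that diameter."""
    INF = float("inf")
    base = [[INF] * n for _ in range(n)]
    for v in range(n):
        base[v][v] = 0.0
    for u, v, w in tree_edges:
        base[u][v] = base[v][u] = w

    def diameter(d0):
        d = [row[:] for row in d0]
        for k in range(n):           # Floyd-Warshall
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        return max(max(row) for row in d)

    best = (None, diameter(base))
    for u, v in itertools.combinations(range(n), 2):
        aug = [row[:] for row in base]
        aug[u][v] = aug[v][u] = min(weight(u, v), aug[u][v])
        d = diameter(aug)
        if d < best[1]:
            best = ((u, v), d)
    return best
```

This takes \(\mathcal{O}(n^5)\) time, which is exactly why shaving the complexity down to \(\mathcal{O}(n^2)\), or near-linear in the metric case, is the interesting part.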

21.02. 16:00  Hs 1  Oliver Brock  Artificial Intelligence: Does It Require A Body?
Intelligence is of great importance in our society. But intelligence also remains mysterious and is notoriously difficult to reproduce in technological artifacts. So far, every attempt to produce an intelligent machine—for example: Deep Blue, Watson, or AlphaGo—has brought about important technological progress but has failed to significantly advance our understanding of biological intelligence. Why does there seem to be such a wide gap between artificial intelligence and biological intelligence? Maybe the artificial intelligences are missing something. Maybe they are missing a body? And maybe therefore our next attempt to build a truly intelligent machine should take the form of a robot? I will argue that this is a good idea and show attempts of moving towards this goal. However, rather than creating an apocalyptic robotic overlord who will end all of humanity, I will advocate to first go after a much more humble (and maybe more realistic) objective, namely, to attempt to replicate the surprising abilities of cockatoos.

22.02. 13:30  A2.1  Jens Fischer  Randomized Network Segregation
Segregation based on race, political and religious views, or social norms is a recurring phenomenon throughout history. In 1969, Thomas Schelling proposed a model for this phenomenon that accounts for the preferences of each individual regarding their neighbors. He showed that his model leads to segregation both for agents with heterophobic and with heterophilic preferences. In this talk, I am going to present a model based on random discrete-time dynamics of graphs and compare it to Schelling's model. Our model, inspired by Henry, Prałat and Zhang (2011), also exhibits segregation in almost surely finite time, and we obtain upper bounds for the expected time to segregation.
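For contrast with the graph model of the talk, Schelling-style dynamics in their simplest one-dimensional form can be sketched as follows. This is my own toy simplification on a ring (parameters, update rule, and names are assumptions), not the model presented in the talk.

```python
import random

def schelling_ring(types, tolerance=0.5, max_steps=10_000, seed=0):
    """Toy 1D Schelling dynamics on a ring: an agent is unhappy if fewer
    than `tolerance` of its two neighbours share its type; in each step one
    random unhappy agent swaps places with a uniformly random agent.
    Stops (and returns the configuration) once every agent is happy,
    or after max_steps swaps."""
    rng = random.Random(seed)
    types = list(types)
    n = len(types)

    def unhappy(i):
        same = sum(types[j] == types[i] for j in ((i - 1) % n, (i + 1) % n))
        return same / 2 < tolerance

    for _ in range(max_steps):
        bad = [i for i in range(n) if unhappy(i)]
        if not bad:
            return types
        i, j = rng.choice(bad), rng.randrange(n)
        types[i], types[j] = types[j], types[i]
    return types
```

Already on the ring one can watch like-typed blocks emerge from random initial configurations; the talk's model replaces this geometry with random discrete-time dynamics on graphs.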

05.03. 13:30  A1.1  Martin Krejca  D – Best Programming Language
When I was a child, I wanted to program video games. In fact, this was one of the reasons why I decided to study computer science instead of mathematics. Due to my early ambitions, I regularly came into contact with C++, as it was arguably the biggest player out there. However, it was always a double-edged sword to me. On the one hand, I admired it for all of its power and features. On the other hand, I always had problems with anything I wanted to do, and I always felt insecure about what I was doing. In the end, I thought that I was too dumb to grasp the greatness of C++. Nonetheless, I always returned, hoping that I had matured and was now worthy of wielding the power that came with the best programming language on earth.
When I started my bachelor’s, I was introduced to Java. This made me question my world view multiple times. I actually learned how to program?! Apparently, doing a bachelor’s in computer science was more hand-holding than I thought. Also, programming could be quite easy?! Everyone who has worked with Java (or Python) probably knows how convenient it is to make progress. For most of the time, things work pretty well. And if they don’t, one usually knows where to start looking for errors.
Despite my good experience with Java, I longed for more. I was not very happy that everything in Java needed to be a class, and the execution speed of C++ was still better. Last, generic programming in Java is poorly implemented. I recall how my Java lecturer (who worked on heavily improving the Java VM) told us how much of a mess generic programming in Java was and how to use certain hacks in order to make it bearable. This was the point when I lost interest in Java. I saw a clear barrier built into the language that I could not surpass. In contrast to that, C++ was still out there. Waiting with all of its cool features to be explored and used. Limitless possibilities! Thus, I went back to looking at C++.
By now, I had learned many different programming concepts. I surely should be able to easily handle C++ now. But, boy, was I wrong! I basically struggled with the same things I had always struggled with. Some things maybe made more sense. However, the way to go was always this long, winding road with tons of boulders placed brilliantly around you. Have fun escaping that! Nonetheless, people have used C++ for ages (relatively speaking). This was probably the price to pay for efficiency. Hence, I sat down and forced myself, again, through C++, this time trying really hard to pay attention to all of the details [1]. Then everything changed!
I was talking to a professor of one of the theory groups. He told me that a PhD student of his was planning to give a lecture on library design and wanted to present a new programming language that his group started using. Of course, I was intrigued and attended the class. The programming language used was called D – a lame pun on C++. I was not amazed. However, I got hooked very quickly with every new feature I was taught. D was very similar to Java, but it did not force you down the OOP road. In addition to that, D had learned from C++ and only tried to rely on backward compatibility with C if it was necessary. Last, it was designed with features in mind that could be considered ugly hacks in C++. D just hit all the right notes for me. It was C++ done right!
In my talk, I want to share my thoughts on D and give a broad overview of the language. I do not want to convince you that D is the best language out there (although I may be bashing C++ from time to time). My intention is to present you with an alternative you may want to consider in the future. I personally think that D has the potential of replacing C++ and Python for certain use cases.
While I am fully aware of the downsides of D, I will be focusing on the positives. However, you are encouraged to challenge me anytime during the talk and point out bad design choices or questionable decisions. I would be happy to have some controversy going on. But be aware that I am not a programmer. I think that I have a rough idea of what idiomatic D looks like. However, I cannot claim the same for me about C++. When I looked at C++, smart pointers weren’t a thing and include guards asked for the most creative names imaginable. I would like to hear how things have changed and how C++ might be able to compete with D for some of the features I present. I am looking forward to your input.
[1] For example, I debated with myself for hours where to put the asterisk for pointers – a problem that probably many C++ programmers can relate to. I really wanted to put it directly after the type. Bjarne Stroustrup discusses this issue and says that both choices are fine (http://www.stroustrup.com/bs_faq2.html#whitespace), while he prefers his asterisks next to the type. However, looking at more complex examples (I am a theoretician after all) reveals that the only correct answer is that the asterisk should go immediately in front of the name, which I consider tremendously ugly. This is the only interpretation that satisfyingly explains how to declare more complex pointers, and it becomes quite evident when looking at function pointers in namespaces. Thus, while one could argue that the original idea stems from C, which is thus to blame, namespaces are a C++ feature. Hence, Bjarne sort of overlooked this little detail. Admittedly, one is encouraged not to use the language's way of declaring complex pointers but hacks instead. However, this only shows how certain features are misdesigned but will not be changed due to backward compatibility. Sorry for the rant.
