Hasso-Plattner-Institut
Prof. Dr. Tobias Friedrich
 

Research Seminar (Winter Term 2019)

Description

A series of talks on current research and interesting topics in algorithm engineering and theoretical computer science. Everyone is welcome to attend the talks.

If you want to give a talk about a topic in algorithm engineering or theoretical computer science, please write an e-mail to Prof. Dr. Tobias Friedrich or to Ralf Rothenberger.

Announcements

For announcements of talks, subscribe to our mailing list.

Talks

Date | Room | Speaker
22.10. 15:30 | G1.E 15/16 | Martin Schirneck

A unique column combination (UCC) is a small fingerprint of a relational database. Enumerating all inclusion-wise minimal UCCs is therefore a recurring task in data profiling. In this talk we explore the close connection between UCCs and the hitting set problem. We show how this connection leads to new enumeration algorithms. In particular, we present a technique that combines a sampling strategy for databases with an input-guided search heuristic to find hitting sets. A preliminary experimental evaluation indicates that our approach is not only competitive with the current state-of-the-art UCC algorithm, but can process databases that were previously out of reach for data profiling.

The talk presents ongoing research with Johann Birnick, Thomas Bläsius, Tobias Friedrich, Julius Lischeid, Felix Naumann, Kitty Meeks, and Thorsten Papenbrock.
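
To make the connection concrete: two rows of a database agree on a column combination unless that combination contains a column on which they differ, so the UCCs are exactly the hitting sets of the pairwise "difference sets". A minimal Python sketch of this reduction (an illustration of the problem statement, not the algorithm from the talk):

    from itertools import combinations

    def difference_sets(rows):
        # For each pair of rows, the set of column indices where they differ.
        return [frozenset(i for i, (a, b) in enumerate(zip(r, s)) if a != b)
                for r, s in combinations(rows, 2)]

    def is_ucc(columns, rows):
        # A column set is a UCC iff no two rows agree on all its columns,
        # i.e., iff it intersects (hits) every difference set.
        return all(columns & d for d in difference_sets(rows))

Enumerating all inclusion-wise minimal UCCs then amounts exactly to enumerating all minimal hitting sets of these difference sets.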

30.10. 15:00 | A-2.2 | Fabian Sommer

Hyperbolic random graphs have become a popular random graph model over the last decade, as they resemble many real-world graphs. In particular, they have a heterogeneous degree distribution, high clustering and a small-world property. Another property that distinguishes technical and biological from social networks is degree assortativity, a measure that describes the correlation between the degrees of adjacent nodes. Most technical and biological graphs are found to have a significantly negative assortativity, while the assortativity of social networks is usually positive. However, hyperbolic random graphs offer no way to control the degree assortativity.

We explore and compare multiple ways to extend hyperbolic random graphs or the similar geometric inhomogeneous random graphs so that the expected assortativity can be controlled while maintaining the properties that made hyperbolic random graphs attractive in the first place. In particular, we describe a model with controllable assortativity that has high clustering, the small-world property and a heterogeneous degree distribution. We also describe how such graphs can be generated efficiently.
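
For reference, degree assortativity is the Pearson correlation between the degrees at the two endpoints of an edge. A self-contained sketch (my illustration, not the generator from the talk):

    import networkx as nx

    def assortativity(G):
        # Pearson correlation of endpoint degrees over all edges;
        # each edge contributes both orientations to make it symmetric.
        xs, ys = [], []
        for u, v in G.edges():
            xs += [G.degree(u), G.degree(v)]
            ys += [G.degree(v), G.degree(u)]
        m = sum(xs) / len(xs)
        cov = sum((x - m) * (y - m) for x, y in zip(xs, ys))
        var = sum((x - m) ** 2 for x in xs)
        return cov / var

    # A star is maximally disassortative: the high-degree hub
    # is adjacent only to degree-1 leaves.
    print(assortativity(nx.star_graph(10)))   # -1.0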

08.11. 13:00 | A-1.1 | Master Project Summer 2019

Asymmetric instances of the travelling salesman problem are practically relevant, but there is still a big gap in approximation guarantees between symmetric and asymmetric instances (assuming the triangle inequality in either case). We explore this apparent difference between symmetry and asymmetry by generalizing tree doubling and the Christofides algorithm, the two most common approximations for TSP, to parameterized approximations for ATSP. The parameters we consider for the respective parameterizations are upper bounded by the number of asymmetric distances in the given instance, which yields algorithms that efficiently compute constant-factor approximations also for moderately asymmetric TSP instances. As a generalization of the Christofides algorithm, we derive a parameterized 2.5-approximation, where the parameter is the size of a vertex cover for the subgraph induced by the asymmetric edges. Our generalization of the tree doubling algorithm gives a parameterized 3-approximation, where the parameter is the number of asymmetric edges in a given minimum spanning arborescence. Both algorithms are also stated in the form of additive lossy kernelizations, which allows combining them with known polynomial-time approximations for ATSP. Further, we combine them with a notion of symmetry relaxation which allows trading approximation guarantee for runtime.
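
For intuition, here is the classical tree doubling algorithm for the symmetric case, which the above generalizes to arborescences (a minimal sketch using networkx; assumes a complete graph with metric 'weight' attributes):

    import networkx as nx

    def tree_doubling_tour(G):
        # Classical 2-approximation for metric TSP: doubling a minimum
        # spanning tree gives an Eulerian multigraph; shortcutting
        # repeated vertices in a traversal cannot increase the length
        # under the triangle inequality.
        mst = nx.minimum_spanning_tree(G)
        order = list(nx.dfs_preorder_nodes(mst))   # traversal with shortcuts
        return order + [order[0]]

The parameterized variant replaces the spanning tree by a minimum spanning arborescence and handles its asymmetric edges separately, which is where the parameter enters.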

26.11. 13:30 | H-E.51 | Maximilian Katzmann

Earlier this year I (somewhat suddenly) realized that, maybe, I can't be a PhD student forever. What should I do afterwards? The two most prominent options are: staying in academia or going into industry. Under the impression that I know what working at university is like, I wanted to learn more about the second option. Therefore, I did an internship as a software engineer at a large tech company.

In this lightweight presentation I talk about how I got in, as well as the project that I worked on. This includes robots, how to answer questions about indoor scenes programmatically, and how that is related to Google's Knowledge Graph. Along the way I will share what I learned about working in industry.

05.12. 15:00 | H-E.52 | Marcus Pappik

The phenomenon of residential segregation was captured by Schelling's famous segregation model, where two types of agents are placed on a grid and an agent is content with her location if the fraction of her neighbors that have the same type as her is at least \(\tau\), for some \(0 < \tau < 1\). Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty cell. We analyze a generalized game-theoretic model of Schelling segregation which allows more than two agent types and more general underlying graphs modeling the residential area. We show that both aspects heavily influence the dynamic properties and the tractability of finding an optimal placement. We map the boundary of when improving response dynamics (IRD) are guaranteed to converge, and we prove several sharp threshold results where guaranteed IRD convergence suddenly turns into a strong non-convergence result: a violation of weak acyclicity. In particular, we show threshold results also for Schelling's original model, which is in contrast to the standard assumption in many empirical papers. In case of convergence we show that IRD find equilibria quickly.
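
As a toy illustration of the jump dynamics in Schelling's original model (my sketch, not the generalized model from the talk):

    import random

    def schelling_jump_step(grid, tau):
        # grid maps cells (x, y) to an agent type (0/1) or None (empty).
        def content(c):
            nbrs = [grid[(c[0] + dx, c[1] + dy)]
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0) and (c[0] + dx, c[1] + dy) in grid]
            occ = [t for t in nbrs if t is not None]
            return not occ or sum(t == grid[c] for t in occ) / len(occ) >= tau

        agents = [c for c, t in grid.items() if t is not None]
        empty = [c for c, t in grid.items() if t is None]
        c = random.choice(agents)
        if not content(c) and empty:          # a discontent agent jumps
            e = random.choice(empty)
            grid[e], grid[c] = grid[c], None

    grid = {(x, y): random.choice([0, 1, None]) for x in range(20) for y in range(20)}
    for _ in range(10000):
        schelling_jump_step(grid, tau=0.5)

The results of the talk concern whether such improving response dynamics are guaranteed to converge at all once agent types, topologies and utilities are generalized.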

10.12. 13:30 | G3.E-15/16 | Katrin Casel

We investigate the locality number, a recently introduced structural parameter for strings, and its connection to two important graph parameters, cutwidth and pathwidth. These connections allow us to show that computing the locality number is NP-hard but fixed-parameter tractable (when the locality number or the alphabet size is treated as a parameter), and that it can be approximated with a logarithmic ratio. As a by-product, we also relate cutwidth via the locality number to pathwidth, which is of independent interest, since it improves the currently best known approximation algorithm for cutwidth. In this talk, I will tell you the tale of how this collection of essentially four simple polynomial-time reductions got accepted to ICALP.

11.12. 11:00 | H-E.51 | Ziena Elijazyfer

Primarily, we consider a generalization of the famous Vertex Cover Problem. Given are a connected graph with edge lengths and a parameter L. The task is to find a minimum set of vertices R such that every connected subgraph with a total length of at least L contains a vertex from R. This L-Vertex Cover Problem is motivated by the generation of length-restricted subgraphs for toll enforcement, which is also a part of the talk. In summary, we will see different solution methods for both problems. Especially for linear programming approaches, we introduce general optimization methods which are also applicable to other optimization problems.
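
Written as a covering program (my reading of the definition above, not necessarily the formulation from the talk), the problem is

\[ \min \sum_{v \in V} x_v \quad \text{s.t.} \quad \sum_{v \in S} x_v \ge 1 \ \text{ for every connected } S \subseteq V \text{ with } \ell(S) \ge L, \qquad x_v \in \{0,1\}, \]

where \(\ell(S)\) denotes the total edge length of the subgraph induced by S. If L is at most the minimum edge length, every single edge already yields a constraint and the classical Vertex Cover Problem is recovered; for larger L the number of constraints can be exponential, which motivates the general linear programming techniques of the talk.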

18.12. 11:00 | H-E.52 | Hagen Echzell

While a lot of research considers the design of physical or logical networks as a centralized problem, most real-world networks develop by decentralized, uncoordinated and often market-based processes that are driven by independent, selfish agents. In 2003, Fabrikant et al. introduced their “Network Creation Game”, which models the creation of Internet-like networks by selfish agents without a central coordination authority: Node agents pay a fixed price for each edge they build towards other agents to reduce the cumulative hop-distance to the other agents in the resulting graph. Ever since, a lot of research on similar models has been conducted, establishing a whole class of “Network Creation Games”.

To the best of our knowledge, all of them have in common that the central metric for network quality is based on the distance to other agents, i.e., they optimize latency, which is an adequate approach for many network use cases. Nevertheless, with the rising demand for cloud storage and high-quality video streaming, and also in physical networks like the power grid, bandwidth/throughput is a crucial metric. Based on the maximum flow problem, a concept first introduced by Harris & Ross as well as Dantzig in 1951 to measure the capacity of railway networks, my master's thesis presents a novel kind of Network Creation Game that regards the throughput between pairs of agents in a graph as the crucial metric for network quality.
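
For reference, the throughput between two agents is the value of a maximum flow between them; a minimal sketch (networkx, with unit capacities as one plausible choice):

    import networkx as nx

    # A small network, modeled as a digraph with an edge in each
    # direction and unit capacities.
    G = nx.DiGraph()
    for u, v in [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]:
        G.add_edge(u, v, capacity=1)
        G.add_edge(v, u, capacity=1)

    # Throughput between agents 0 and 3 = value of a maximum 0-3 flow;
    # by max-flow min-cut this equals a smallest 0-3 edge cut.
    value, flow = nx.maximum_flow(G, 0, 3)
    print(value)   # 2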

18.12. 11:45 | H-E.52 | Louise Molitor

Residential segregation is a widespread phenomenon that can be observed in almost every major city. Schelling's famous agent-based model for residential segregation explains how such clusters can form even if all agents are tolerant, i.e., if they agree to live in mixed neighborhoods.

Schelling segregation is an exciting, ongoing topic in the game-theoretic community, and this year two working groups published papers back and forth. In this talk I would like to give an overview of this line of research, with a focus on our latest paper "Topological Influence and Locality in Swap Schelling Games", which put us into the lead. So the current paper score is 3 to 2 in our favor.

14.01. 13:30 | H.E-51 | Ágnes Cseh

The deployment of centralized matching algorithms for efficient exchange of donated kidneys is a major success story of market design. In the standard kidney exchange problem, blood- or tissue-type incompatibility between a patient and a donor makes a transplant impossible. However, novel technological advances on potent immunosuppressant drugs can lift this barrier.

We present a general computational framework to study kidney exchange with immunosuppressants. In contrast to the standard kidney exchange problem, our problem also involves the decision about which patients get immunosuppressants to make them compatible with originally incompatible kidneys. Our main contribution is a set of general algorithms that provide flexibility in terms of satisfying meaningful objectives.

Joint work with Haris Aziz.

16.01. 14:30 | A-1.2 | Davis Issac

I will talk about connected partitioning of graphs, where we are given a graph and want to partition the vertices into k parts such that each part induces a connected subgraph, subject to some additional constraints on the size/weight of each part. This problem has applications in road transport networks, multi-robot coordination, power grid networks, image processing, clustering, etc. We look at a special case, where we know that the input graph is k-connected. In this setting, we will look at a classical theorem from the 70s called the Gyori-Lovasz theorem and a recent vertex-weighted generalization of it. I will give an outline of a proof that we recently (in ICALP '18) gave for the vertex-weighted generalization. The Gyori-Lovasz theorem and its generalization only give existential results (and also exponential algorithms) for connected k-partitions of k-connected graphs. It is an interesting open question to find efficient algorithms for the same. I will speak about this and also some other related open problems.

21.01. 15:30 | D.E-9 | Gregor Lagodzinski

The notion of a homomorphism between graphs, as a map between the associated vertices that preserves edges, is not only simple in its definition but also very powerful in its generality. Many important problems and their counting versions can be encoded as homomorphisms to specific target graphs, e.g. vertex colouring and independent set. Hence, it is hardly surprising that the study of graph homomorphisms is a very active research topic, enjoying both new results and rediscoveries of old results that provide new techniques and insights.

In this talk, we will discuss an instance of the latter in the form of a recent paper by Chen, Curticapean and Dell from 2019. They study the problem of determining the number of homomorphisms to a given target graph and rediscover the same dichotomy as Dyer and Greenhill in 2000. Their approach of generalizing the problem even further allows them to avoid a big case distinction and provides additional insights into the problem class. After discussing their approach, we will transition to the problem of modular counting of homomorphisms and to the question whether the approach of Chen et al. can be applied. In particular, the modular version introduces an additional phenomenon: symmetric substructures might get cancelled, leading to a loss of structural properties. We will conclude the talk with a short peek into the study of symmetries, an even older subject full of history and big names.
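
For concreteness, a homomorphism from G to H maps vertices of G to vertices of H such that edges are mapped to edges; counting them by brute force takes \(O(|V(H)|^{|V(G)|})\) time (a sketch; the talk is about when this count is computable in polynomial time):

    from itertools import product

    def count_hom(G, H):
        # Graphs as {vertex: set of neighbors}; loops in H are allowed.
        gv = list(G)
        return sum(
            all(phi[v] in H[phi[u]] for u in G for v in G[u])
            for image in product(list(H), repeat=len(gv))
            for phi in [dict(zip(gv, image))]
        )

    # Homomorphisms into a triangle are exactly proper 3-colourings:
    # a single edge admits 3 * 2 = 6 of them.
    K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    edge = {'a': {'b'}, 'b': {'a'}}
    print(count_hom(edge, K3))   # 6

Independent sets arise the same way, as homomorphisms into the two-vertex graph with one looped vertex.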

22.01. 11:00 | D.E-10 | Vanja Doskoč

In the framework of language learning in the limit from text, a learner (a computable function) is successively presented all and only the information of a target language (a computably enumerable subset of the natural numbers). With every new datum, the learner makes a new guess about which set it believes to be presented. Learning is successful once the learner sticks to a single, correct description of the target language (explanatory learning). Cautious learners additionally may never output a hypothesis which is a proper subset of a previous guess.

Although it seems a natural way of learning, being cautious is known to severely lessen the learning capabilities of explanatory (syntactic) learners. To further understand this loss of learning power, previous work introduced weakened versions of cautious learning and gave first partial results on their relation. In this talk, we provide the audience with a feeling for language learning in the limit from text and discuss the completed picture for cautious learning restrictions.

29.01. 11:00 | H-E.51 | Andreas Göbel

Partition functions originate from statistical physics and describe statistical properties of a system of particles defined in terms of local interactions among its particles. An interesting phenomenon in these particle systems is that they undergo a phase transition, depending on the values of a parameter called the "external field", expressing external conditions such as temperature, pressure, magnetic field, etc. Recent remarkable connections between computer science and statistical physics have shown that the approximability of these partition functions undergoes a computational phase transition (i.e. approximable in polynomial time vs. NP-hard to approximate) for the exact same values of the external field as the physical phase transition. In this talk we will survey the three main algorithmic techniques for approximating partition functions that appear in the literature and their connections to statistical physics.
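
As a concrete example, in the hard-core model the partition function is \(Z(\lambda) = \sum_{I} \lambda^{|I|}\), where the sum ranges over the independent sets I of the graph and \(\lambda\) is the external field. A brute-force sketch (exponential time; the talk concerns when Z can be approximated efficiently):

    from itertools import combinations

    def hard_core_Z(vertices, edges, lam):
        # Sum lam^|I| over all independent sets I, by checking
        # every subset of vertices against every edge.
        Z = 0.0
        for k in range(len(vertices) + 1):
            for subset in combinations(vertices, k):
                S = set(subset)
                if all(u not in S or v not in S for u, v in edges):
                    Z += lam ** k
        return Z

    # 4-cycle: the empty set, 4 singletons and 2 opposite pairs.
    print(hard_core_Z(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)], 1.0))   # 7.0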

04.02. 13:30 | H-E.51 | Daniel Stephan, Marcus Wilhelm

(Greedy) Routing: Succinctness vs. Stretch

Routing of messages is important in many networks. A common routing strategy relies on routing tables. These guarantee that every message is routed along a shortest path, but they require a linear number of bits per vertex.
In the talk I give an overview of alternative approaches that use more succinct routing information, meaning they require fewer bits per vertex. Most notably, in greedy and local routing, packet forwarding relies only on local information stored at the vertices. This comes at the cost of stretch, i.e. routing may not always proceed along a shortest path between two vertices.
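
As a toy example of greedy routing in a geometric setting (my sketch): every vertex stores only its own coordinates and those of its neighbors, and forwards towards the neighbor closest to the target.

    import math

    def greedy_route(adj, pos, s, t):
        # adj: {v: set of neighbors}; pos: {v: (x, y)} coordinates.
        # Returns the greedy path, or None if stuck in a local minimum.
        path = [s]
        while path[-1] != t:
            here = path[-1]
            nxt = min(adj[here], key=lambda u: math.dist(pos[u], pos[t]))
            if math.dist(pos[nxt], pos[t]) >= math.dist(pos[here], pos[t]):
                return None                    # no neighbor makes progress
            path.append(nxt)
        return path

The stretch is the ratio between the length of the path found this way and that of a shortest path.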


Analysis of a practical algorithm for treewidth

Treewidth and algorithms on tree decompositions are a cornerstone of parameterized algorithms. Still, while fundamental algorithmic results for computing treewidth in FPT time or \(O(2^n)\) time have been known for decades, algorithms that perform well in practice were lacking until recently.
This changed with the Parameterized Algorithms and Computational Experiments Challenge (PACE) 2017, in which contestants tried to implement practically good algorithms. One of the submitted algorithms, by Hisao Tamaki, was particularly successful and was honored with a Best Paper Award at ESA 2017.
In this work-in-progress talk (master thesis edition), I want to give an intuitive view of the algorithm and present my previous work and current plans for analyzing its runtime on different graph classes, including hyperbolic random graphs.

13.02. 13:30 | A-1.1 | Stefan Neubert, Martin Schirneck

Fine-grained complexity has gained traction in the TCS community in recent years. The field aims at pinpointing the "true" complexity of a computational problem. In this survey talk we present four excerpts highlighting prominent tools in the area. We first introduce the concept of fine-grained reductions and use them to transfer the (conjectured) exponential complexity of CNF-Satisfiability to the polynomial Orthogonal Vectors problem. We stay in the polynomial domain to discuss an equivalence class of the All-Pairs Shortest Path problem. Finally, we show how algorithms for the 3-Sum problem can be sped up using the fast Fourier transform.
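
To give a flavor of the last point: for integer inputs from a bounded range, the pair sums of two sets can be read off a convolution of their indicator vectors, which the FFT computes in time \(O(u \log u)\). A self-contained sketch of one standard 3-Sum variant:

    import numpy as np

    def three_sum(A, B, C, u):
        # Is there a in A, b in B, c in C with a + b = c?
        # All values lie in {0, ..., u}. Entry s of the convolution
        # of the indicator vectors counts pairs (a, b) with a + b = s.
        fa = np.zeros(2 * u + 1); fa[list(A)] = 1
        fb = np.zeros(2 * u + 1); fb[list(B)] = 1
        conv = np.fft.irfft(np.fft.rfft(fa) * np.fft.rfft(fb), 2 * u + 1)
        return any(conv[c] > 0.5 for c in C)   # 0.5 guards against float noise

    print(three_sum({1, 5}, {2, 4}, {9, 100}, u=100))   # True, since 5 + 4 = 9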

19.02. 11:00 | A-1.1 | Katerina Niklanovits (National University of Athens)

We call a graph G an expander if every relatively small subset of its vertices has relatively many neighbors in G. Due to their high connectivity, expanders are closely related to another notion of graph theory, balanced separators. We say that a subset S of the vertices of a graph G is a balanced separator of G if its removal partitions G into two components of roughly equal size.
In this talk, we will briefly see some results on balanced separators and then focus on the properties and substructures one can find in expanders. In particular, we will see in detail a result by Krivelevich and Sudakov about the existence of high average degree minors in expanders.

27.02. 11:00 | A-1.1 | Maximilian Katzmann

As one of the classical NP-complete graph problems, Vertex Cover is proven to be computationally hard. In contrast, recent experiments suggest that on many real-world networks Vertex Cover can be solved very efficiently. We link these observations to two properties that are observed in many real-world networks, namely a heterogeneous degree distribution and high clustering. To formalize these properties and explain the observed behavior, we analyze how a branch-and-reduce algorithm performs on hyperbolic random graphs, which have become increasingly popular for modeling real-world networks. In fact, we are able to show that the Vertex Cover problem on hyperbolic random graphs can be solved in polynomial time, with high probability.
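
To fix ideas, here is the skeleton of a branch-and-reduce algorithm for Vertex Cover (a generic textbook sketch, not necessarily the exact algorithm analyzed in the talk):

    def min_vc_size(adj):
        # adj: {v: set of neighbors}. Isolated vertices are dropped,
        # degree-1 vertices are reduced away, and otherwise we branch
        # on a maximum-degree vertex v: either v or all of N(v) is
        # taken into the cover.
        g = {v: set(ns) for v, ns in adj.items() if ns}
        if not g:
            return 0
        drop = lambda h, vs: {u: ns - vs for u, ns in h.items() if u not in vs}
        for v, ns in g.items():
            if len(ns) == 1:                       # reduction: take the neighbor
                return 1 + min_vc_size(drop(g, set(ns)))
        v = max(g, key=lambda u: len(g[u]))
        return min(1 + min_vc_size(drop(g, {v})),                    # v in cover
                   len(g[v]) + min_vc_size(drop(g, set(g[v]))))      # N(v) in cover

    # Path 0-1-2-3: the optimal cover {1, 2} has size 2.
    print(min_vc_size({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))   # 2

The result of the talk is that such branching, combined with reductions, runs in polynomial time on hyperbolic random graphs with high probability.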

04.03. 11:00 | A-1.2 | Anna Melnichenko

Networks play central roles in a wide variety of social, economic and political interactions. Therefore, understanding the structure of real-world networks and the process of their formation is essential to science. Research efforts in the fields of Combinatorial Optimization and Operations Research have led to structural and algorithmic insights about networks. However, all of them assume the existence of a central authority designing the network. In practice, many important infrastructure networks like the physical Internet, the road network and the electricity network are the outcomes of a distributed and decentralized design process by many interacting agents. This observation kindled the study of game-theoretic models for network formation by selfish agents. In these models the network is formed by selfish agents who can locally modify the network structure to maximize their utility.

In this talk, I will give a brief overview of the history and development of Network Creation Games. We will see which questions Game Theory was able to answer and what progress it has made towards an understanding of the problem of network formation.

10.03. 11:00 | D-E.10 | Warut Suksompong

We study the allocation of indivisible goods that form an undirected graph and quantify the loss of fairness when we impose a constraint that each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We introduce the price of connectivity to capture the largest gap between the graph-specific and the unconstrained maximin share, and derive bounds on this quantity which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretic tools and concepts to fair division problems.

Joint work with Xiaohui Bei, Ayumi Igarashi, and Xinhang Lu.

17.03. 11:00 | A-1.1 | Philipp Fischbeck

In an attempt to better understand network structures, several network models have been introduced in recent decades. However, the relevance and usefulness of such models greatly depend on how closely the generated networks resemble their real-world counterparts with respect to some structural properties. In this talk, I present challenges and insights from using the technique of "posterior predictive checks". We fit the parameters of a network model to match real-world networks, generate networks with the fitted parameters, and then look for systematic discrepancies between the two resulting network classes (real and generated). This allows for finding weaknesses and possible improvements of the model.
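
A minimal sketch of this workflow with an Erdős-Rényi model as a stand-in (my toy example; the talk concerns more realistic models): fit the model parameter to the real network, regenerate, and compare a statistic the fit did not target.

    import networkx as nx

    def posterior_predictive_check(real, samples=50):
        # Fit G(n, p) by matching the density of the real graph, then
        # compare a statistic that was not fitted (average clustering)
        # between the real graph and graphs generated from the fit.
        n = real.number_of_nodes()
        p = nx.density(real)                          # fitted parameter
        observed = nx.average_clustering(real)        # held-out statistic
        generated = [nx.average_clustering(nx.gnp_random_graph(n, p))
                     for _ in range(samples)]
        return observed, sum(generated) / samples

A real-world graph typically shows far higher clustering than its density-matched G(n, p), exposing a systematic discrepancy of that model.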

26.03. 11:00 | A-2.2 | Marcus Pappik

Abstract polymer models are combinatorial structures that consist of a set of weighted objects, called polymers, together with a reflexive and symmetric incompatibility relation that describes which polymers cannot occur together.
In this thesis we present the first Markov chain approach for sampling from the Gibbs distribution of abstract polymer models. Known Markov chains for polymer models from vertex and edge spin systems can be seen as special cases of this polymer Markov chain. We introduce the concept of polymer cliques and propose a generalized polymer mixing condition as a way to guarantee rapid mixing of our chain. The form of this condition is similar to conditions from cluster expansion approaches, such as the Kotecký-Preiss condition and the Fernández-Procacci condition, but it is less restrictive.
To obtain an efficient sampling scheme for the Gibbs distribution from our polymer Markov chain, we prove that it suffices to draw each step of the chain approximately according to its transition probabilities. As one way to approximate each transition of the chain, we suggest truncating each polymer clique based on some notion of size. We introduce the clique truncation condition as a general tool to determine the maximum size of polymers that we have to consider for the steps of the chain.
We prove that our sampling scheme can be used to construct an FPRAS for the partition function. By this, we obtain the first Markov chain Monte Carlo method that works for abstract polymer models in a similar regime as cluster expansion approaches and beyond, while avoiding their complicated analytical assumptions and arguments. Further, we illustrate how our approach can be applied to algorithmic problems like the hard-core model on bipartite expander graphs and the perfect matching polynomial to obtain new trade-offs between runtime and weight bounds. We emphasize that similar results can be obtained for a variety of other applications.

09.04. 14:00 | virtual | Martin Krejca

In the world of nature-inspired heuristic optimization, two general approaches exist: evolutionary algorithms (EAs) maintain a set of solutions, which is modified iteratively through variation operators, such as mutation. In contrast, estimation-of-distribution algorithms (EDAs) maintain a probabilistic model of the entire search space, which is iteratively adjusted with respect to samples drawn from that model. Until recently, theoretical results comparing these two approaches showed that EDAs usually perform as well as EAs, and sometimes outperform them drastically. Then, at FOGA '19, Lehre and Nguyen introduced the benchmark function DeceptiveLeadingBlocks (DLB), which is mildly deceptive and uses dependencies among its variables (known as epistasis). They proved that the EDA named univariate marginal distribution algorithm (UMDA) has an exponential run time on DLB, whereas all common EAs have a polynomial (cubic) run time.

In this talk, we show that the exponential run time of the UMDA is an artifact of a suboptimal parameter choice. In fact, when choosing the parameters adequately, the UMDA has a quasi-squared run time on DLB, thus making it even more efficient than the EAs (although, so far, only upper bounds have been proven). We discover why DLB is easy for the UMDA, and we discuss how to choose your parameters in general for EDAs if you don't know what your landscape looks like.
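
For reference, a minimal UMDA sketch (a generic textbook version with the standard frequency margins; the parameters whose choice the talk discusses are \(\lambda\) and \(\mu\)):

    import random

    def umda(f, n, lam, mu, iterations):
        # Maximize a pseudo-Boolean f on {0,1}^n: each bit has an
        # independent frequency p[i], re-estimated in every iteration
        # from the mu best of lam sampled bit strings.
        p = [0.5] * n
        best = None
        for _ in range(iterations):
            pop = sorted(([int(random.random() < p[i]) for i in range(n)]
                          for _ in range(lam)), key=f, reverse=True)
            if best is None or f(pop[0]) > f(best):
                best = pop[0]
            for i in range(n):
                freq = sum(x[i] for x in pop[:mu]) / mu
                p[i] = min(max(freq, 1 / n), 1 - 1 / n)   # standard margins
        return best

    # OneMax (count the ones) is a classic easy benchmark for the UMDA.
    print(umda(sum, n=20, lam=100, mu=20, iterations=50))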