
Bläsius, Thomas; Friedrich, Tobias; Krohmer, Anton; Laue, Sören Efficient Embedding of Scale-Free Graphs in the Hyperbolic Plane. Transactions on Networking 2018
Hyperbolic geometry appears to be intrinsic to many large real networks. We construct and implement a new maximum likelihood estimation algorithm that embeds scale-free graphs in the hyperbolic space. All previous approaches of similar embedding algorithms require at least a quadratic runtime. Our algorithm achieves quasilinear runtime, which makes it the first algorithm that can embed networks with hundreds of thousands of nodes in less than one hour. We demonstrate the performance of our algorithm on artificial and real networks. In all typical metrics, like log-likelihood and greedy routing, our algorithm discovers embeddings that are very close to the ground truth.

Doerr, Benjamin; Fischbeck, Philipp; Frahnow, Clemens; Friedrich, Tobias; Kötzing, Timo; Schirneck, Martin Island Models Meet Rumor Spreading. Algorithmica 2018
Island models in evolutionary computation solve problems by a careful interplay of independently running evolutionary algorithms on the islands and an exchange of good solutions between the islands. In this work, we conduct rigorous run time analyses for such island models, trying to simultaneously obtain good run times and low communication effort. We improve the existing upper bounds for both measures (i) by improving the run time bounds via a careful analysis, (ii) by balancing individual computation and communication in a more appropriate manner, and (iii) by replacing the usual communicate-with-all approach with randomized rumor spreading. In the latter, each island contacts a randomly chosen neighbor. This epidemic communication paradigm is known to lead to very fast and robust information dissemination in many applications. Our results concern island models running simple (1+1) evolutionary algorithms to optimize the classic test functions OneMax and LeadingOnes. We investigate binary trees, d-dimensional tori, and complete graphs as communication topologies.
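The push-based rumor-spreading primitive used here can be sketched in a few lines. The following is a simplified illustration (not the paper's island model itself): each informed node pushes the rumor to one uniformly random neighbor per round, and we count rounds until everyone is informed. The graph encoding as a neighbor dictionary is an assumption for this sketch.

```python
import random

def push_rumor_rounds(neighbors, start, rng=random):
    """Count rounds of push-based rumor spreading until all nodes are
    informed: in every round, each informed node sends the rumor to one
    neighbor chosen uniformly at random (assumes a connected graph)."""
    informed = {start}
    rounds = 0
    while len(informed) < len(neighbors):
        pushes = {rng.choice(neighbors[v]) for v in informed}
        informed |= pushes
        rounds += 1
    return rounds
```

On the complete graph, this informs all islands in a logarithmic number of rounds with high probability, which is the behavior exploited to reduce communication effort compared to communicate-with-all.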

Bringmann, Karl; Friedrich, Tobias; Krohmer, Anton Deanonymization of Heterogeneous Random Graphs in Quasilinear Time. Algorithmica 2018: 1-31
There are hundreds of online social networks with altogether billions of users. Many such networks publicly release structural information, with all personal information removed. Empirical studies have shown, however, that this provides a false sense of privacy: it is possible to identify almost all users that appear in two such anonymized networks as long as a few initial mappings are known. We analyze this problem theoretically by reconciling two versions of an artificial power-law network arising from independent subsampling of vertices and edges. We present a new algorithm that identifies most vertices and makes no wrong identifications with high probability. The number of vertices matched is shown to be asymptotically optimal. For an n-vertex graph, our algorithm uses \(n^\varepsilon\) seed nodes (for an arbitrarily small \(\varepsilon\)) and runs in quasilinear time. This improves previous theoretical results which need \(\Theta(n)\) seed nodes and have runtimes of order \(n^{1+\Omega(1)}\). Additionally, the applicability of our algorithm is studied experimentally on different networks.

Bläsius, Thomas; Karrer, Annette; Rutter, Ignaz Simultaneous Embedding: Edge Orderings, Relative Positions, Cutvertices. Algorithmica 2018: 1214-1277
A simultaneous embedding (with fixed edges) of two graphs G1 and G2 with common graph G = G1 ∩ G2 is a pair of planar drawings of G1 and G2 that coincide on G. It is an open question whether there is a polynomial-time algorithm that decides whether two graphs admit a simultaneous embedding (problem SEFE). In this paper, we present two results. First, a set of three linear-time preprocessing algorithms that remove certain substructures from a given SEFE instance, producing a set of equivalent SEFE instances without such substructures. The structures we can remove are (1) cutvertices of the union graph G∪ = G1 ∪ G2, (2) most separating pairs of G∪, and (3) connected components of G that are biconnected but not a cycle. Second, we give an O(n³)-time algorithm solving SEFE for instances with the following restriction. Let u be a pole of a P-node µ in the SPQR-tree of a block of G1 or G2. Then at most three virtual edges of µ may contain common edges incident to u. All algorithms extend to the sunflower case, i.e., to the case of more than two graphs pairwise intersecting in the same common graph.

Kötzing, Timo; Sudholt, Dirk Preface to the Special Issue on Theory of Genetic and Evolutionary Computation. Algorithmica 2018: 1575-1578

Doerr, Benjamin; Doerr, Carola; Kötzing, Timo Static and Self-Adjusting Mutation Strengths for Multivalued Decision Variables. Algorithmica 2018: 1732-1768
The most common representation in evolutionary computation is the bit string. With very little theoretical work existing on how to use evolutionary algorithms for decision variables taking more than two values, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over \(\Omega=\{0,1,\dots,r-1\}^n\). We observe a crucial difference in how we extend the one-bit-flip and standard-bit mutation operators to the multivalued domain. While it is natural to modify a random position of the string or to select each position of the solution vector for modification independently with probability \(1/n\), there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of \(\Theta(n r \log n)\). If we change each selected position by \(+1\) or \(-1\) (random choice), the optimization time reduces to \(\Theta(n r + n \log n)\). If we use a random mutation strength \(i \in \{0,1,\dots,r-1\}\) with probability inversely proportional to \(i\) and change the selected position by \(+i\) or \(-i\) (random choice), then the optimization time becomes \(\Theta(n \log(r)(\log n + \log r))\), which is asymptotically faster than the previous bounds if \(r = \omega(\log(n)\log\log(n))\). Interestingly, a better expected performance can be achieved with a self-adjusting mutation strength that is based on the success of previous iterations. For the mutation operator that modifies a randomly chosen position, we show that the self-adjusting mutation strength yields an expected optimization time of \(\Theta(n(\log n + \log r))\), which is best possible among all dynamic mutation strengths. In our proofs, we use a new multiplicative drift theorem for computing lower bounds, which is not restricted to processes that move only towards the target.

Bläsius, Thomas; Stumpf, Peter; Ueckerdt, Torsten Local and Union Boxicity. Discrete Mathematics 2018: 1307-1315
The boxicity box(H) of a graph H is the smallest integer d such that H is the intersection of d interval graphs, or equivalently, such that H is the intersection graph of axis-aligned boxes in R^d. These intersection representations can be interpreted as covering representations of the complement H^c of H with co-interval graphs, that is, complements of interval graphs. We follow the recent framework of global, local and folded covering numbers (Knauer and Ueckerdt, 2016) to define two new parameters: the local boxicity box_l(H) and the union boxicity box_u(H) of H. The union boxicity of H is the smallest d such that H^c can be covered with d vertex-disjoint unions of co-interval graphs, while the local boxicity of H is the smallest d such that H^c can be covered with co-interval graphs, at most d at every vertex. We show that for every graph H we have box_l(H) ≤ box_u(H) ≤ box(H) and that each of these inequalities can be arbitrarily far from tight. Moreover, we show that local and union boxicity are also characterized by intersection representations of appropriate axis-aligned boxes in R^d. We demonstrate with a few striking examples that, in a sense, the local boxicity is a better indication of the complexity of a graph than the classical boxicity.

Dang, Duc-Cuong; Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Lehre, Per Kristian; Oliveto, Pietro S.; Sudholt, Dirk; Sutton, Andrew M. Escaping Local Optima Using Crossover with Emergent Diversity. IEEE Transactions on Evolutionary Computation 2018
Population diversity is essential for the effective use of any crossover operator. We compare seven commonly used diversity mechanisms and prove rigorous run time bounds for the \((\mu+1)\) GA using uniform crossover on the fitness function \(Jump_k\). All previous results in this context only hold for unrealistically low crossover probability \(p_c=O(k/n)\), while we give analyses for the setting of constant \(p_c < 1\) in all but one case. Our bounds show a dependence on the problem size \(n\), the jump length \(k\), the population size \(\mu\), and the crossover probability \(p_c\). For the typical case of constant \(k > 2\) and constant \(p_c\), we can compare the resulting expected optimisation times for different diversity mechanisms assuming an optimal choice of \(\mu\): \(O(n^{k-1})\) for duplicate elimination/minimisation, \(O(n^2 \log n)\) for maximising the convex hull, \(O(n \log n)\) for deterministic crowding (assuming \(p_c = k/n\)), \(O(n \log n)\) for maximising the Hamming distance, \(O(n \log n)\) for fitness sharing, \(O(n \log n)\) for the single-receiver island model. This proves a sizeable advantage of all variants of the \((\mu+1)\) GA compared to the (1+1) EA, which requires \(\Theta(n^k)\). In a short empirical study we confirm that the asymptotic differences can also be observed experimentally.

Baum, Moritz; Bläsius, Thomas; Gemsa, Andreas; Rutter, Ignaz; Wegner, Franziska Scalable Exact Visualization of Isocontours in Road Networks via Minimum-Link Paths. Journal of Computational Geometry 2018: 24-70
Isocontours in road networks represent the area that is reachable from a source within a given resource limit. We study the problem of computing accurate isocontours in realistic, large-scale networks. We propose isocontours represented by polygons with a minimum number of segments that separate reachable and unreachable components of the network. Since the resulting problem is not known to be solvable in polynomial time, we introduce several heuristics that run in (almost) linear time and are simple enough to be implemented in practice. A key ingredient is a new practical linear-time algorithm for minimum-link paths in simple polygons. Experiments in a challenging realistic setting show excellent performance of our algorithms in practice, computing near-optimal solutions in a few milliseconds on average, even for long ranges.

Bläsius, Thomas; Friedrich, Tobias; Katzmann, Maximilian; Krohmer, Anton Hyperbolic Embeddings for Near-Optimal Greedy Routing. Algorithm Engineering and Experiments (ALENEX) 2018: 199-208
Greedy routing computes paths between nodes in a network by successively moving to the neighbor closest to the target with respect to coordinates given by an embedding into some metric space. Its advantage is that only local information is used for routing decisions. We present different algorithms for generating graph embeddings into the hyperbolic plane that are well suited for greedy routing. In particular, our embeddings guarantee that greedy routing always succeeds in reaching the target, and we try to minimize the lengths of the resulting greedy paths. We evaluate our algorithms on multiple generated and real-world networks. For networks that are generally assumed to have a hidden underlying hyperbolic geometry, such as the Internet graph, we achieve near-optimal results, i.e., the resulting greedy paths are only slightly longer than the corresponding shortest paths. In the case of the Internet graph, they are only \(6\%\) longer when using our best algorithm, which greatly improves upon the previous best known embedding, whose creation required substantial manual intervention.
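The greedy routing step itself is simple once coordinates are given. The sketch below uses the standard distance formula for polar coordinates in the native representation of the hyperbolic plane; the dictionary-based coordinate and adjacency structures are illustrative assumptions, not the paper's data structures.

```python
import math

def hyperbolic_distance(p, q):
    """Distance between two points given as (radius, angle) pairs in the
    native (polar-coordinate) representation of the hyperbolic plane."""
    (r1, t1), (r2, t2) = p, q
    # angular difference, wrapped into [0, pi]
    dt = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
    arg = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(dt)
    return math.acosh(max(arg, 1.0))  # clamp guards against rounding below 1

def greedy_step(node, target, neighbors, coords):
    """One greedy routing step: move to the neighbor whose embedded
    position is closest to the target's position."""
    return min(neighbors[node],
               key=lambda v: hyperbolic_distance(coords[v], coords[target]))
```

Repeating `greedy_step` until the target is reached yields the greedy path; the embeddings in the paper are constructed so that this loop never gets stuck.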

Doerr, Benjamin; Krejca, Martin S. Significance-based Estimation-of-Distribution Algorithms. Genetic and Evolutionary Computation Conference (GECCO) 2018
Estimation-of-distribution algorithms (EDAs) are randomized search heuristics that maintain a stochastic model of the solution space. This model is updated from iteration to iteration based on the quality of the solutions sampled according to the model. As previous works show, this short-term perspective can lead to erratic updates of the model, in particular, to bit frequencies approaching a random boundary value. This can lead to significant performance losses. In order to overcome this problem, we propose a new EDA that takes into account a longer history of samples and updates its model only with respect to information which it classifies as statistically significant. We prove that this significance-based compact genetic algorithm (sig-cGA) optimizes the common benchmark functions OneMax and LeadingOnes both in \(O(n\log n)\) time, a result shown for no other EDA or evolutionary algorithm so far. For the recently proposed scGA – an EDA that tries to prevent erratic model updates by imposing a bias to the uniformly distributed model – we prove that it optimizes OneMax only in a time exponential in the hypothetical population size \(1/\rho\).

Friedrich, Tobias; Quinzan, Francesco; Wagner, Markus Escaping Large Deceptive Basins of Attraction with Heavy Mutation Operators. Genetic and Evolutionary Computation Conference (GECCO) 2018
In many Evolutionary Algorithms (EAs), a parameter that needs to be tuned is the mutation rate, which determines the probability for each decision variable to be mutated. Typically, this rate is set to 1/n for the duration of the optimization, where n is the number of decision variables. This setting has the appeal that the expected number of mutated variables per iteration is one. In a recent theoretical study, it was proposed to sample the number of mutated variables from a power-law distribution. This results in a significantly higher probability of larger numbers of mutations, so that escaping local optima becomes more probable. In this paper, we propose another class of non-uniform mutation rates. We study the benefits of this operator in terms of average-case black-box complexity analysis and experimental comparison. We consider both pseudo-Boolean artificial landscapes and combinatorial problems (the Minimum Vertex Cover and the Maximum Cut). We observe that our non-uniform mutation rates significantly outperform the standard choices when dealing with landscapes that exhibit large deceptive basins of attraction.
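As an illustration of such a heavy-tailed operator, the sketch below samples the number of flipped bits from a power-law distribution instead of flipping each bit independently with rate 1/n. The exponent value 1.5 and the cap at n/2 are assumptions for this sketch (conventions from the fast-GA line of work mentioned above), not parameters fixed by this paper.

```python
import random

def sample_powerlaw(upper, beta=1.5, rng=random):
    """Sample k in {1, ..., upper} with P(k) proportional to k^(-beta)."""
    ks = range(1, upper + 1)
    weights = [k ** -beta for k in ks]
    return rng.choices(ks, weights=weights)[0]

def heavy_tailed_mutation(x, beta=1.5, rng=random):
    """Flip a power-law-distributed number of distinct, uniformly chosen
    bits of the bit string x (list of 0/1 values)."""
    n = len(x)
    k = sample_powerlaw(max(1, n // 2), beta, rng)
    y = list(x)
    for i in rng.sample(range(n), k):
        y[i] = 1 - y[i]
    return y
```

Because the distribution has a heavy tail, large jumps (many simultaneous flips) occur with polynomially rather than exponentially small probability, which is what helps escape wide deceptive basins.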

Friedrich, Tobias; Kötzing, Timo; Quinzan, Francesco; Sutton, Andrew M. Improving the Run Time of the (1+1) Evolutionary Algorithm with Luby Sequences. Genetic and Evolutionary Computation Conference (GECCO) 2018
In the context of black-box optimization, the most common way to handle deceptive attractors is to periodically restart the algorithm. In this paper, we explore the benefits of combining the simple \((1+1)\) Evolutionary Algorithm (EA) with the Luby Universal Strategy, yielding the \((1+1)~EA_{\mathcal{U}}\), a meta-heuristic that does not require parameter tuning. We first consider two artificial pseudo-Boolean landscapes on which the \((1+1)~EA\) exhibits exponential run time. We prove that the \((1+1)~EA_{\mathcal{U}}\) has polynomial run time on both instances. We then consider the Minimum Vertex Cover on two classes of graphs. Again, the \((1+1)~EA\) yields exponential run time on those instances, while the \((1+1)~EA_{\mathcal{U}}\) finds the global optimum in polynomial time. We conclude by studying the Makespan Scheduling problem. We consider an instance on which the \((1+1)~EA\) does not find a \((4/3-\epsilon)\)-approximation in polynomial time, and we show that the \((1+1)~EA_{\mathcal{U}}\) reaches a \((4/3-\epsilon)\)-approximation in polynomial time. We then prove that the \((1+1)~EA_{\mathcal{U}}\) serves as an Efficient Polynomial-time Approximation Scheme (EPTAS) for the Partition Problem, for a \((1+\epsilon)\)-approximation with \(\epsilon > 4/n\).
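The Luby universal strategy rests on the restart-length sequence 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, ...; a minimal generator, using the standard recursive characterization of the sequence:

```python
def luby_term(i):
    """i-th term (1-indexed) of the Luby sequence 1,1,2,1,1,2,4,...:
    t_i = 2^(k-1) if i = 2^k - 1, and otherwise t_i = t_{i - 2^(k-1) + 1},
    where k is minimal with i <= 2^k - 1."""
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if i == (1 << k) - 1:
        return 1 << (k - 1)
    return luby_term(i - (1 << (k - 1)) + 1)
```

A restart scheme then runs the (1+1) EA from a fresh random solution for luby_term(1)·b, luby_term(2)·b, ... iterations for some base budget b; no problem-dependent parameter has to be tuned, which is the point of the universal strategy.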

Gao, Wanru; Friedrich, Tobias; Neumann, Frank; Hercher, Christian Randomized Greedy Algorithms for Covering Problems. Genetic and Evolutionary Computation Conference (GECCO) 2018
Greedy algorithms provide a fast and often also effective solution to many combinatorial optimization problems. However, it is well known that they sometimes lead to low-quality solutions on certain instances. In this paper, we explore the use of randomness in greedy algorithms for the minimum vertex cover and dominating set problems and compare the resulting performance against their deterministic counterparts. Our algorithms are based on a parameter \(\gamma\) which allows us to explore the spectrum between uniform and deterministic greedy selection in the steps of the algorithm. Our theoretical and experimental investigations point out the benefits of incorporating randomness into greedy algorithms for the two considered combinatorial optimization problems.
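One plausible way such a parameter \(\gamma\) can interpolate between uniform and deterministic greedy selection is sketched below for vertex cover: with probability \(\gamma\) take the deterministic max-degree step, otherwise pick a uniformly random vertex that still covers an edge. This concrete selection rule is a simplified stand-in for illustration, not necessarily the exact rule analysed in the paper.

```python
import random

def gamma_greedy_vertex_cover(edges, gamma, rng=random):
    """Illustrative gamma-interpolated greedy for vertex cover:
    gamma = 1.0 recovers the deterministic max-degree greedy,
    gamma = 0.0 picks uniformly among vertices still covering an edge."""
    uncovered = set(map(frozenset, edges))
    cover = set()
    while uncovered:
        degree = {}
        for e in uncovered:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        if rng.random() < gamma:
            v = max(degree, key=degree.get)   # deterministic greedy step
        else:
            v = rng.choice(sorted(degree))    # uniform random step
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover
```

By construction the returned set is always a valid cover; \(\gamma\) only controls how greedily the vertices are chosen.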

Bläsius, Thomas; Freiberger, Cedric; Friedrich, Tobias; Katzmann, Maximilian; Montenegro-Retana, Felix; Thieffry, Marianne Efficient Shortest Paths in Scale-Free Networks with Underlying Hyperbolic Geometry. International Colloquium on Automata, Languages, and Programming (ICALP) 2018
A common way to accelerate shortest path algorithms on graphs is the use of a bidirectional search, which simultaneously explores the graph from the start and the destination. It has been observed recently that this strategy performs particularly well on scale-free real-world networks. Such networks typically have a heterogeneous degree distribution (e.g., a power-law distribution) and high clustering (i.e., vertices with a common neighbor are likely to be connected themselves). These two properties can be obtained by assuming an underlying hyperbolic geometry. To explain the observed behavior of the bidirectional search, we analyze its running time on hyperbolic random graphs and prove that it is \(O(n^{2 - 1/\alpha} + n^{1/(2\alpha)} + \delta_{\max})\) with high probability, where \(\alpha \in (0.5, 1)\) controls the power-law exponent of the degree distribution, and \(\delta_{\max}\) is the maximum degree. This bound is sublinear, improving the obvious worst-case linear bound. Although our analysis depends on the underlying geometry, the algorithm itself is oblivious to it.
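A bidirectional search of this kind can be sketched as a layer-alternating BFS that always expands the currently smaller frontier, which is the source of the speed-up on heterogeneous networks (a generic sketch; the neighbor-dictionary graph encoding is an assumption of this illustration):

```python
def bidirectional_bfs(neighbors, s, t):
    """Length of a shortest s-t path via bidirectional BFS: two searches
    expand layer by layer, always growing the smaller frontier, and stop
    as soon as they meet."""
    if s == t:
        return 0
    dist_s, dist_t = {s: 0}, {t: 0}
    frontier_s, frontier_t = [s], [t]
    while frontier_s and frontier_t:
        # expand the currently smaller frontier
        if len(frontier_s) <= len(frontier_t):
            frontier, dist, other = frontier_s, dist_s, dist_t
        else:
            frontier, dist, other = frontier_t, dist_t, dist_s
        nxt = []
        for u in frontier:
            for v in neighbors[u]:
                if v in other:                  # the two searches meet
                    return dist[u] + 1 + other[v]
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        if frontier is frontier_s:
            frontier_s = nxt
        else:
            frontier_t = nxt
    return None  # s and t lie in different components
```

Note that the routine uses no geometric information at all, matching the statement that the algorithm is oblivious to the underlying geometry.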

Friedrich, Tobias; Rothenberger, Ralf Sharpness of the Satisfiability Threshold for Non-Uniform Random k-SAT. The 21st International Conference on Theory and Applications of Satisfiability Testing (SAT) 2018
We study non-uniform random k-SAT on n variables with an arbitrary probability distribution p on the variable occurrences. The number \(t = t(n)\) of randomly drawn clauses at which random formulas go from asymptotically almost surely (a.a.s.) satisfiable to a.a.s. unsatisfiable is called the satisfiability threshold. Such a threshold is called sharp if it approaches a step function as n increases. We show that a threshold t(n) for random k-SAT with an ensemble \((p_n)_{n\in\mathbb{N}}\) of arbitrary probability distributions on the variable occurrences is sharp if \(\|p\|_2^2 = O_n(t^{-2/k})\) and \(\|p\|_\infty = o_n(t^{-k/(2k-1)} \log^{-(k-1)/(2k-1)}(t))\). This result generalizes Friedgut's sharpness result from uniform to non-uniform random k-SAT and implies sharpness for thresholds of a wide range of random k-SAT models with heterogeneous probability distributions, for example models where the variable probabilities follow a power-law distribution.

Bilò, Davide; Lenzner, Pascal On the Tree Conjecture for the Network Creation Game. Symposium on Theoretical Aspects of Computer Science (STACS) 2018: 14:1-14:15

Fischbeck, Philipp On the Effectiveness of Data Reduction for Covering Problems in Real-World Transit Networks. 2018
Given a graph and a set of paths, we want to find a minimum set of vertices such that each path is covered by at least one chosen vertex. Although this problem is NP-hard, real-world instances can be solved almost completely by a set of simple reduction rules. We examine this behavior from a theoretical and empirical perspective. First, we show that the problem is easy to solve for forests and cycle graphs. However, the problem is NP-hard for graphs with a feedback vertex number of 2 and a treewidth of 3. This indicates that the explanation for the effectiveness does not lie in the graph representation of problem instances. Thus, we examine the Hitting Set problem that arises when ignoring the graph representation and interpreting a path as a mere set of vertices. Through this relation, we show that the problem remains NP-hard even under very strong restrictions. Hitting Set instances that have a representation as a path graph can be recognized as such in polynomial time. However, finding the graph representation with the fewest edges is NP-hard. Based on the analysis of publicly available transit datasets, we show that the real-world instances are clustered and have heterogeneous stations, with the number of lines per station distributed according to a power law. We describe a model to generate random problem instances with adjustable clustering and heterogeneity. We use this model to show that while the heterogeneity does positively influence the effectiveness of the reduction rules, the largest effect comes from the clustering. Lastly, we show a strong relation between the reduction rules for the Hitting Set problem and reduction rules for the Independent Set problem on the intersection graph of the family of sets. We prove that the size of any independent set is a lower bound on the size of a minimum hitting set and show that the two bounds are very close for real-world instances. We show that the reduction rules need to be effective for Independent Set in order for them to be effective for Hitting Set.

Göbel, Andreas; Lagodzinski, J. A. Gregor; Seidel, Karen Counting Homomorphisms to Trees Modulo a Prime. arXiv 2018

Bläsius, Thomas; Friedrich, Tobias; Krohmer, Anton Cliques in Hyperbolic Random Graphs. Algorithmica 2017: 1-21
Most complex real world networks display scale-free features. This characteristic motivated the study of numerous random graph models with a power-law degree distribution. There is, however, no established and simple model which also has a high clustering of vertices as typically observed in real data. Hyperbolic random graphs bridge this gap. This natural model has recently been introduced by Krioukov et al. and has been shown theoretically and empirically to fulfill all typical properties of real world networks, including power-law degree distribution and high clustering. We study cliques in hyperbolic random graphs \(G\) and present new results on the expected number of \(k\)-cliques \(E(K_k)\) and the size of the largest clique \(\omega(G)\). We observe that there is a phase transition at power-law exponent \(\beta = 3\). More precisely, for \(\beta \in (2,3)\) we prove \(E(K_k)=n^{k(3-\beta)/2}\,\Theta(k)^{-k}\) and \(\omega(G)=\Theta(n^{(3-\beta)/2})\), while for \(\beta\geq3\) we prove \(E(K_k)=n\,\Theta(k)^{-k}\) and \(\omega(G)=\Theta(\log(n)/ \log\log n)\). Furthermore, we show that for \(\beta \geq 3\), cliques in hyperbolic random graphs can be computed in time \(O(n)\). If the underlying geometry is known, cliques can be found with worst-case runtime \(O(m \cdot n^{2.5})\) for all values of \(\beta\).

Anand, S.; Bringmann, Karl; Friedrich, Tobias; Garg, Naveen; Kumar, Amit Minimizing Maximum (Weighted) Flow-Time on Related and Unrelated Machines. Algorithmica 2017: 515-536
In this paper we initiate the study of job scheduling on related and unrelated machines so as to minimize the maximum flow time or the maximum weighted flow time (when each job has an associated weight). Previous work for these metrics considered only the setting of parallel machines, while previous work for scheduling on unrelated machines only considered \(L_p, p < \infty\) norms. Our main results are: (1) We give an \(O(\epsilon^{-3})\)-competitive algorithm to minimize maximum weighted flow time on related machines, where we assume that the machines of the online algorithm can process \(1+\epsilon\) units of a job in one time unit (\(\epsilon\) speed augmentation). (2) For the objective of minimizing maximum flow time on unrelated machines, we give a simple \(2/\epsilon\)-competitive algorithm when we augment the speed by \(\epsilon\). For \(m\) machines, we show a lower bound of \(\Omega(m)\) on the competitive ratio if speed augmentation is not permitted. Our algorithm does not assign jobs to machines as soon as they arrive. To justify this "drawback", we show a lower bound of \(\Omega(\log m)\) on the competitive ratio of immediate dispatch algorithms. In both these lower bound constructions, we use jobs whose processing times are in \(\{1,\infty\}\), and hence they apply to the more restrictive subset parallel setting. (3) For the objective of minimizing maximum weighted flow time on unrelated machines, we establish a lower bound of \(\Omega(\log m)\) on the competitive ratio of any online algorithm which is permitted to use \(s = O(1)\) speed machines. In our lower bound construction, job \(j\) has a processing time of \(p_j\) on a subset of machines and infinity on others, and has a weight \(1/p_j\). Hence this lower bound applies to the subset parallel setting for the special case of minimizing maximum stretch.

Doerr, Benjamin; Neumann, Frank; Sutton, Andrew M. Time Complexity Analysis of Evolutionary Algorithms on Random Satisfiable k-CNF Formulas. Algorithmica 2017: 561-586
We contribute to the theoretical understanding of randomized search heuristics by investigating their optimization behavior on satisfiable random k-satisfiability instances both in the planted solution model and the uniform model conditional on satisfiability. Denoting the number of variables by n, our main technical result is that the simple (1+1) evolutionary algorithm with high probability finds a satisfying assignment in time \(O(n \log n)\) when the clause-variable density is at least logarithmic. For low-density instances, evolutionary algorithms seem to be less effective, and all we can show is a subexponential upper bound on the runtime for densities below \(1/(k(k-1))\). We complement these mathematical results with numerical experiments on a broader density spectrum. They indicate that, indeed, the (1+1) EA is less efficient on lower densities. Our experiments also suggest that the implicit constants hidden in our main runtime guarantee are low. Our main result extends and considerably improves the result obtained by Sutton and Neumann (Lect Notes Comput Sci 8672:942–951, 2014) in terms of runtime, minimum density, and clause length. These improvements are made possible by establishing a close fitness-distance correlation in certain parts of the search space. This approach might be of independent interest and could be useful for other average-case analyses of randomized search heuristics. While the notion of a fitness-distance correlation has been around for a long time, to the best of our knowledge, this is the first time that fitness-distance correlation is explicitly used to rigorously prove a performance statement for an evolutionary algorithm.

Bampas, Evangelos; Göbel, Andreas-Nikolas; Pagourtzis, Aris; Tentes, Aris On the connection between interval size functions and path counting. Computational Complexity 2017: 421-467
We investigate the complexity of hard (#P-complete) counting problems that have an easy decision version. By 'easy decision' we mean that deciding whether the result of counting is nonzero is in P. This property is shared by several well-known problems, such as counting the number of perfect matchings in a given graph or counting the number of satisfying assignments of a given DNF formula. We focus on classes of such hard-to-count, easy-to-decide problems which emerged through two seemingly disparate approaches: one taken by Hemaspaandra et al. (SIAM J Comput 36(5):1264–1300, 2007), who defined classes of functions that count the size of intervals of ordered strings, and one followed by Kiayias et al. (Lect Notes Comput Sci 2563:453–463, 2001), who defined the class TotP, consisting of functions that count the total number of paths of NP computations. We provide inclusion and separation relations between TotP and interval size counting classes, by means of new classes that we define in this work. Our results imply that many known #P-complete problems with easy decision are contained in the classes defined by Hemaspaandra et al., but are unlikely to be complete for these classes under reductions under which these classes are downward closed, e.g., parsimonious reductions. This, applied to the #MONSAT problem, partially answers an open question of Hemaspaandra et al. We also define a new class of interval size functions which strictly contains FP and is strictly contained in TotP under reasonable complexity-theoretic assumptions. We show that this new class contains hard counting problems.

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. The Compact Genetic Algorithm is Efficient under Extreme Gaussian Noise. IEEE Transactions on Evolutionary Computation 2017: 477-490
Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, randomized search heuristics such as evolutionary algorithms are a popular choice because they are often assumed to exhibit some kind of resistance to noise. Empirical evidence suggests that some algorithms, such as estimation-of-distribution algorithms (EDAs), are robust against a scaling of the noise intensity, even without resorting to explicit noise-handling techniques such as resampling. In this paper, we want to support such claims with mathematical rigor. We introduce the concept of graceful scaling, in which the run time of an algorithm scales polynomially with noise intensity. We study a monotone fitness function over binary strings with additive noise taken from a Gaussian distribution. We show that myopic heuristics cannot efficiently optimize the function under arbitrarily intense noise without any explicit noise-handling. Furthermore, we prove that using a population does not help. Finally, we show that a simple EDA called the compact Genetic Algorithm can overcome the short-sightedness of mutation-only heuristics and scale gracefully with noise. We conjecture that recombinative genetic algorithms also have this property.

Galanis, Andreas; Göbel, Andreas; Goldberg, Leslie Ann; Lapinskas, John; Richerby, David Amplifiers for the Moran Process. Journal of the ACM 2017: 5:1-5:90
The Moran process, as studied by Lieberman, Hauert, and Nowak, is a randomised algorithm modelling the spread of genetic mutations in populations. The algorithm runs on an underlying graph where individuals correspond to vertices. Initially, one vertex (chosen uniformly at random) possesses a mutation, with fitness \(r > 1\). All other individuals have fitness 1. During each step of the algorithm, an individual is chosen with probability proportional to its fitness, and its state (mutant or nonmutant) is passed on to an outneighbour which is chosen uniformly at random. If the underlying graph is strongly connected, then the algorithm will eventually reach fixation, in which all individuals are mutants, or extinction, in which no individuals are mutants. An infinite family of directed graphs is said to be strongly amplifying if, for every \(r > 1\), the extinction probability tends to 0 as the number of vertices increases. A formal definition is provided in the article. Strong amplification is a rather surprising propertyit means that in such graphs, the fixation probability of a uniformly placed initial mutant tends to 1 even though the initial mutant only has a fixed selective advantage of \(r > 1\) (independently of \(n\)). The name "strongly amplifying" comes from the fact that this selective advantage is "amplified." Strong amplifiers have received quite a bit of attention, and Lieberman et al. proposed two potentially strongly amplifying familiessuperstars and metafunnels. Heuristic arguments have been published, arguing that there are infinite families of superstars that are strongly amplifying. The same has been claimed for metafunnels. In this article, we give the first rigorous proof that there is an infinite family of directed graphs that is strongly amplifying. We call the graphs in the family "megastars." 
When the algorithm is run on an \(n\)-vertex graph in this family, starting with a uniformly chosen mutant, the extinction probability is roughly \(n^{-1/2}\) (up to logarithmic factors). We prove that all infinite families of superstars and metafunnels have larger extinction probabilities (as a function of \(n\)). Finally, we prove that our analysis of megastars is fairly tight: there is no infinite family of megastars such that the Moran algorithm gives a smaller extinction probability (up to logarithmic factors). Also, we provide a counterexample which clarifies the literature concerning the isothermal theorem of Lieberman et al.
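The process described above is straightforward to simulate. The following minimal sketch (names and interface are ours, not from the article) runs the Moran process on a directed graph given as an adjacency list of out-neighbours:

```python
import random

def moran_step(graph, state, r):
    """One step: pick an individual with probability proportional to its
    fitness (r for mutants, 1 otherwise) and copy its state to a
    uniformly random out-neighbour."""
    n = len(graph)
    weights = [r if state[v] else 1.0 for v in range(n)]
    v = random.choices(range(n), weights=weights)[0]
    u = random.choice(graph[v])            # uniform out-neighbour of v
    state[u] = state[v]

def moran_process(graph, r):
    """Run until fixation (True) or extinction (False).
    graph: adjacency list of a strongly connected directed graph."""
    n = len(graph)
    state = [False] * n
    state[random.randrange(n)] = True      # uniformly placed initial mutant
    while 0 < sum(state) < n:
        moran_step(graph, state, r)
    return all(state)
```

On a strongly connected graph the loop terminates with probability 1; for large \(r\) on a complete graph, fixation is the typical outcome.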

Boockmeyer, Arne; Fischbeck, Philipp; Neubert, Stefan Fit fürs Studium – Informatik. Rheinwerk Computing, 2017.

Friedrich, Tobias; Kötzing, Timo; Wagner, Markus A Generic Bet-and-Run Strategy for Speeding Up Stochastic Local Search. Association for the Advancement of Artificial Intelligence (AAAI) 2017: 801–807
A common strategy for improving optimization algorithms is to restart the algorithm when it is believed to be trapped in an inferior part of the search space. However, while specific restart strategies have been developed for specific problems (and specific algorithms), restarts are typically not regarded as a general tool to speed up an optimization algorithm. In fact, many optimization algorithms do not employ restarts at all. Recently, "bet-and-run" was introduced in the context of mixed-integer programming, where first a number of short runs with randomized initial conditions is made, and then the most promising run of these is continued. In this article, we consider two classical NP-complete combinatorial optimization problems, traveling salesperson and minimum vertex cover, and study the effectiveness of different bet-and-run strategies. In particular, our restart strategies do not take any problem knowledge into account, nor are they tailored to the optimization algorithm. Therefore, they can be used off-the-shelf. We observe that state-of-the-art solvers for these problems can benefit significantly from restarts on standard benchmark instances.

Katzmann, Maximilian; Komusiewicz, Christian Systematic Exploration of Larger Local Search Neighborhoods for the Minimum Vertex Cover Problem. Association for the Advancement of Artificial Intelligence (AAAI) 2017: 846–852
We investigate the potential of exhaustively exploring larger neighborhoods in local search algorithms for Minimum Vertex Cover. More precisely, we study whether, for moderate values of \(k\), it is feasible and worthwhile to determine, given a graph \(G\) with vertex cover \(C\), if there is a \(k\)-swap \(S\) such that \((C \setminus S) \cup (S \setminus C)\) is a smaller vertex cover of \(G\). First, we describe an algorithm running in \(\Delta^{O(k)} \cdot n\) time for searching the \(k\)-swap neighborhood on \(n\)-vertex graphs with maximum degree \(\Delta\). Then, we demonstrate that, by devising additional pruning rules that decrease the size of the search space, this algorithm can be implemented so that it solves the problem quickly for \(k \approx 20\). Finally, we show that it is worthwhile to consider moderately-sized \(k\)-swap neighborhoods. For our benchmark data set, we show that when combining our algorithm with a hill-climbing approach, the solution quality improves quickly with the radius \(k\) of the local search neighborhood and that in most cases optimal solutions can be found by setting \(k = 21\).
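To make the \(k\)-swap neighborhood concrete, here is a naive brute-force search over all swaps \(S\) of size at most \(k\). It lacks the pruning rules that make the authors' algorithm practical, so it is only feasible for tiny graphs:

```python
from itertools import combinations

def is_vertex_cover(edges, cover):
    """Check that every edge has at least one endpoint in the cover."""
    return all(u in cover or v in cover for u, v in edges)

def find_improving_k_swap(vertices, edges, cover, k):
    """Search for a swap S with |S| <= k such that the symmetric
    difference C ^ S is a smaller vertex cover; return None if no
    such swap exists."""
    for size in range(1, k + 1):
        for swap in combinations(vertices, size):
            candidate = cover.symmetric_difference(swap)
            if len(candidate) < len(cover) and is_vertex_cover(edges, candidate):
                return candidate
    return None
```

For example, on the path 0–1–2 with the non-optimal cover \(\{0, 2\}\), the swap \(\{0, 1, 2\}\) yields the smaller cover \(\{1\}\).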

Friedrich, Tobias; Krohmer, Anton; Rothenberger, Ralf; Sutton, Andrew M. Phase Transitions for Scale-Free SAT Formulas. Association for the Advancement of Artificial Intelligence (AAAI) 2017: 3893–3899
Recently, a number of non-uniform random satisfiability models have been proposed that are closer to practical satisfiability problems in some characteristics. In contrast to uniform random Boolean formulas, scale-free formulas have a variable occurrence distribution that follows a power law. It has been conjectured that such a distribution is a more accurate model for some industrial instances than the uniform random model. Although there already seems to be an awareness of a threshold phenomenon in such models, a complete picture is still lacking. In contrast to the uniform model, the critical density threshold does not lie at a single point, but instead exhibits a functional dependency on the power-law exponent. For scale-free formulas with clauses of length \(k = 2\), we give a lower bound on the phase transition threshold as a function of the scaling parameter. We also perform computational studies that suggest our bound is tight and investigate the critical density for formulas with higher clause lengths. Similar to the uniform model, on formulas with \(k \geq 3\), we find that the phase transition regime corresponds to a set of formulas that are difficult to solve by backtracking search.

Friedrich, Tobias; Neumann, Frank What's Hot in Evolutionary Computation. Association for the Advancement of Artificial Intelligence (AAAI) 2017: 5064–5066
We provide a brief overview on some hot topics in the area of evolutionary computation. Our main focus is on recent developments in the areas of combinatorial optimization and real-world applications. Furthermore, we highlight recent progress on the theoretical understanding of evolutionary computing methods.

Hölzl, Rupert; Jain, Sanjay; Schlicht, Philipp; Seidel, Karen; Stephan, Frank Automatic Learning from Repetitive Texts. International Conference on Algorithmic Learning Theory (ALT) 2017: 129–150
We study the connections between the learnability of automatic families of languages and the types of text used to present them to a learner. More precisely, we study how restrictions on the number of times that a correct datum appears in a text influence what classes of languages are automatically learnable. We show that an automatic family of languages is automatically learnable from fat text iff it is automatically learnable from thick text iff it is verifiable from balanced text iff it satisfies Angluin's tell-tale condition. Furthermore, many automatic families are automatically learnable from exponential text. We also study the relationship between automatic learnability and verifiability and show that all automatic families are automatically partially verifiable from exponential text and automatically learnable from thick text.

Kötzing, Timo; Schirneck, Martin; Seidel, Karen Normal Forms in Semantic Language Identification. International Conference on Algorithmic Learning Theory (ALT) 2017: 493–516
We consider language learning in the limit from text where all learning restrictions are semantic, that is, where any conjecture may be replaced by a semantically equivalent conjecture. For different such learning criteria, starting with the well-known TxtGBc-learning, we consider three different normal forms: strongly locking learning, consistent learning and (partially) set-driven learning. These normal forms support and simplify proofs and give insight into what behaviors are necessary for successful learning (for example, when consistency in conservative learning implies cautiousness and strong decisiveness). We show that strongly locking learning can be assumed for partially set-driven learners, even when learning restrictions apply. We give a very general proof relying only on a natural property of the learning restriction, namely, allowing for simulation on equivalent text. Furthermore, when no restrictions apply, the converse is also true: every strongly locking learner can be made partially set-driven. For several semantic learning criteria we show that learning can be done consistently. Finally, we deduce for which learning restrictions partial set-drivenness and set-drivenness coincide, including a general statement about classes of infinite languages. The latter again relies on a simulation argument.

Gao, Wanru; Friedrich, Tobias; Kötzing, Timo; Neumann, Frank Scaling up Local Search for Minimum Vertex Cover in Large Graphs by Parallel Kernelization. Australasian Conference on Artificial Intelligence (AUSAI) 2017: 131–143
We investigate how well-performing local search algorithms for small or medium size instances can be scaled up to perform well on large inputs. We introduce a parallel kernelization technique that is motivated by the assumption that medium- to large-scale graphs are composed of components which are easy for state-of-the-art solvers on their own, but hard to solve when hidden in large graphs. To show the effectiveness of our kernelization technique, we consider the well-known minimum vertex cover problem and two state-of-the-art solvers called NuMVC and FastVC. Our kernelization approach reduces existing large problem instances significantly and produces better quality results on a wide range of benchmark instances and real-world graphs.

Wagner, Markus; Friedrich, Tobias; Lindauer, Marius Improving local search in a minimum vertex cover solver for classes of networks. Congress on Evolutionary Computation (CEC) 2017
For the minimum vertex cover problem, a wide range of solvers has been proposed over the years. Most classical exact approaches encounter run time issues on the massive graphs that are considered nowadays. A straightforward alternative is then to use heuristics, which make assumptions about the structure of the studied graphs. These assumptions are typically hard-coded and are hoped to work well for a wide range of networks, which is in conflict with the nature of broad benchmark sets. With this article, we contribute in two ways. First, we identify a component in an existing solver whose influence on performance depends on the class of graphs, and we then customize instances of this solver for different classes of graphs. Second, we create the first algorithm portfolio for the minimum vertex cover problem to further improve on the performance of a single integrated approach.

Kovacs, Robert; Seufert, Anna; Wall, Ludwig; Chen, Hsiang-Ting; Meinel, Florian; Müller, Willi; You, Sijing; Brehm, Maximilian; Striebel, Jonathan; Kommana, Yannis; Popiak, Alexander; Bläsius, Thomas; Baudisch, Patrick TrussFab: Fabricating Sturdy Large-Scale Structures on Desktop 3D Printers. Human Factors in Computing Systems (CHI) 2017: 2606–2616
We present TrussFab, an integrated end-to-end system that allows users to fabricate large-scale structures that are sturdy enough to carry human weight. TrussFab achieves the large scale by complementing 3D print with plastic bottles. It does not use these bottles as "bricks" though, but as beams that form structurally sound node-link structures, also known as trusses, allowing it to handle the forces resulting from scale and load. TrussFab embodies the required engineering knowledge, allowing non-engineers to design such structures and to validate their design using integrated structural analysis. We have used TrussFab to design and fabricate tables and chairs, a 2.5 m long bridge strong enough to carry a human, a functional boat that seats two, and a 5 m diameter dome.

Friedrich, Tobias; Krohmer, Anton; Rothenberger, Ralf; Sauerwald, Thomas; Sutton, Andrew M. Bounds on the Satisfiability Threshold for Power Law Distributed Random SAT. European Symposium on Algorithms (ESA) 2017: 37:1–37:15
Propositional satisfiability (SAT) is one of the most fundamental problems in computer science. The worst-case hardness of SAT lies at the core of computational complexity theory. The average-case analysis of SAT has triggered the development of sophisticated rigorous and non-rigorous techniques for analyzing random structures. Despite a long line of research and substantial progress, nearly all theoretical work on random SAT assumes a uniform distribution on the variables. In contrast, real-world instances often exhibit large fluctuations in variable occurrence. This can be modeled by a scale-free distribution of the variables, which results in distributions closer to industrial SAT instances. We study random \(k\)-SAT on \(n\) variables, \(m=\Theta(n)\) clauses, and a power law distribution on the variable occurrences with exponent \(\beta\). We observe a satisfiability threshold at \(\beta=(2k-1)/(k-1)\). This threshold is tight in the sense that instances with \(\beta < (2k-1)/(k-1)-\varepsilon\) for any constant \(\varepsilon>0\) are unsatisfiable with high probability (w.h.p.). For \(\beta\ge(2k-1)/(k-1)+\varepsilon\), the picture is reminiscent of the uniform case: instances are satisfiable w.h.p. for sufficiently small constant clause-variable ratios \(m/n\); they are unsatisfiable above a ratio \(m/n\) that depends on \(\beta\).
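A sketch of sampling such a formula. We assume, as one common construction for power-law variable occurrences, that variable \(i\) is drawn with weight proportional to \(i^{-1/(\beta-1)}\); the paper's exact model may differ in details, and the helper name below is ours:

```python
import random

def powerlaw_ksat(n, m, k, beta, rng=random):
    """Sample m clauses over n variables, each with k distinct literals.
    Variable i (1-based) is drawn with weight i**(-1/(beta-1)), so
    low-index variables occur far more often (scale-free style)."""
    weights = [i ** (-1.0 / (beta - 1.0)) for i in range(1, n + 1)]
    formula = []
    for _ in range(m):
        clause_vars = set()
        while len(clause_vars) < k:       # resample until k distinct variables
            clause_vars.add(rng.choices(range(1, n + 1), weights=weights)[0])
        # negate each literal independently with probability 1/2
        formula.append([v if rng.random() < 0.5 else -v for v in clause_vars])
    return formula
```

Varying \(\beta\) and the ratio \(m/n\) in such a sampler is how the threshold behavior described above can be explored empirically.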

Friedrich, Tobias; Kötzing, Timo; Quinzan, Francesco; Sutton, Andrew Michael Resampling vs Recombination: a Statistical Run Time Estimation. Foundations of Genetic Algorithms (FOGA) 2017: 25–35
Noise is pervasive in real-world optimization, but there is still little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques, such as statistical resampling. In this paper, we report on several statistical models and theoretical results that help to clarify this reciprocal relationship for a collection of randomized search heuristics on noisy functions. We consider the optimization of pseudo-Boolean functions under additive posterior Gaussian noise and explore the trade-off between noise reduction and the computational cost of resampling. We first perform experiments to find the optimal parameters at a given noise intensity for a mutation-only evolutionary algorithm, a genetic algorithm employing recombination, an estimation of distribution algorithm (EDA), and an ant colony optimization algorithm. We then observe how the optimal parameter depends on the noise intensity for the different algorithms. Finally, we locate the point where statistical resampling costs more than it is worth in terms of run time. We find that the EA requires the highest number of resamples to obtain the best speed-up, whereas crossover reduces both the run time and the number of resamples required. Most surprisingly, we find that EDA-like algorithms require no resampling, and can handle noise implicitly.

Pourhassan, Mojgan; Friedrich, Tobias; Neumann, Frank On the Use of the Dual Formulation for Minimum Weighted Vertex Cover in Evolutionary Algorithms. Foundations of Genetic Algorithms (FOGA) 2017: 37–44
We consider the weighted minimum vertex cover problem and investigate how its dual formulation can be exploited to design evolutionary algorithms that provably obtain a 2-approximation. Investigating multi-valued representations, we show that variants of randomized local search and the (1+1) EA achieve this goal in expected pseudo-polynomial time. In order to speed up the process, we consider the use of step size adaptation in both algorithms and show that RLS obtains a 2-approximation in expected polynomial time while the (1+1) EA still encounters a pseudo-polynomial lower bound.

Friedrich, Tobias; Kötzing, Timo; Lagodzinski, J. A. Gregor; Neumann, Frank; Schirneck, Martin Analysis of the (1+1) EA on Subclasses of Linear Functions under Uniform and Linear Constraints. Foundations of Genetic Algorithms (FOGA) 2017: 45–54
Linear functions have gained a lot of attention in the area of run time analysis of evolutionary computation methods and the corresponding analyses have provided many effective tools for analyzing more complex problems. In this paper, we consider the behavior of the classical (1+1) Evolutionary Algorithm for linear functions under a linear constraint. We show tight bounds in the case where both the objective function and the constraint are given by the OneMax function and present upper bounds as well as lower bounds for the general case. Furthermore, we also consider the LeadingOnes fitness function.

Krejca, Martin S.; Witt, Carsten Lower Bounds on the Run Time of the Univariate Marginal Distribution Algorithm on OneMax. Foundations of Genetic Algorithms (FOGA) 2017: 65–79
The Univariate Marginal Distribution Algorithm (UMDA), a popular estimation of distribution algorithm, is studied from a run time perspective. On the classical OneMax benchmark function, a lower bound of \(\Omega(\mu \sqrt{n} + n \log n)\), where \(\mu\) is the population size, on its expected run time is proved. This is the first direct lower bound on the run time of the UMDA. It is stronger than the bounds that follow from general black-box complexity theory and is matched by the run time of many evolutionary algorithms. The results are obtained through advanced analyses of the stochastic change of the frequencies of bit values maintained by the algorithm, including carefully designed potential functions. These techniques may prove useful in advancing the field of run time analysis for estimation of distribution algorithms in general.
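The frequency mechanism analyzed here can be sketched as follows. Parameter names are ours; clamping the frequencies to \([1/n, 1-1/n]\) is the usual UMDA border rule:

```python
import random

def umda_onemax(n, lam, mu, generations, rng=random):
    """UMDA on OneMax: maintain a frequency vector p, sample lam bit
    strings from it, select the mu best by OneMax value, and set p to
    the marginals of the selected individuals (clamped to borders)."""
    p = [0.5] * n
    best = 0
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=sum, reverse=True)       # OneMax fitness = number of ones
        best = max(best, sum(pop[0]))
        selected = pop[:mu]
        p = [min(1 - 1 / n, max(1 / n, sum(x[i] for x in selected) / mu))
             for i in range(n)]
    return best, p
```

The lower bound above concerns exactly the stochastic drift of these per-bit frequencies toward the borders.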

Chauhan, Ankit; Friedrich, Tobias; Quinzan, Francesco Approximating Optimization Problems using EAs on Scale-Free Networks. Genetic and Evolutionary Computation Conference (GECCO) 2017: 235–242
It has been experimentally observed that real-world networks follow certain topological properties, such as small-world and power-law. To study these networks, many random graph models, such as Preferential Attachment, have been proposed. In this paper, we consider deterministic properties which capture power-law degree distribution and degeneracy. Networks with these properties are known as scale-free networks in the literature. Many interesting problems remain NP-hard on scale-free networks. We study the relationship between scale-free properties and the approximation ratio of some commonly used evolutionary algorithms. For Vertex Cover, we observe experimentally that the \((1+1)\) EA always gives a better result than a greedy local search, even when it runs for only \(O(n \log n)\) steps. We give the construction of a scale-free network in which a multi-objective algorithm and a greedy algorithm obtain optimal solutions, while the \((1+1)\) EA obtains the worst possible solution with constant probability. We prove that for Dominating Set, Vertex Cover, Connected Dominating Set and Independent Set, the \((1+1)\) EA obtains constant-factor approximations in expected run time \(O(n \log n)\) and \(O(n^4)\), respectively. GSEMO gives an even better approximation than the \((1+1)\) EA in expected run time \(O(n^3)\) for Dominating Set, Vertex Cover and Connected Dominating Set on such networks.

Friedrich, Tobias; Kötzing, Timo; Melnichenko, Anna Analyzing Search Heuristics with Differential Equations. Genetic and Evolutionary Computation Conference (GECCO) 2017: 313–314
Drift Theory is currently the most common technique for the analysis of randomized search heuristics because of its broad applicability and the resulting tight first hitting time bounds. The biggest problem when applying a drift theorem is to find a suitable potential function which maps a complex space into a single number, capturing the essence of the state of the search in just one value. We discuss another method for the analysis of randomized search heuristics based on the Theory of Differential Equations. This method considers the deterministic counterpart of the randomized process by replacing probabilistic outcomes by their expectation, and then bounding the error with good probability. We illustrate this by analyzing an Ant Colony Optimization algorithm (ACO) for the Minimum Spanning Tree problem (MST).

Doerr, Benjamin; Kötzing, Timo; Lagodzinski, J. A. Gregor; Lengler, Johannes Bounding Bloat in Genetic Programming. Genetic and Evolutionary Computation Conference (GECCO) 2017: 921–928
While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations. A naturally occurring problem is that of bloat (unnecessary growth of solutions) slowing down optimization. Theoretical analyses could so far not bound bloat and required explicit assumptions on the magnitude of bloat. In this paper we analyze bloat in mutation-based genetic programming for the two test functions ORDER and MAJORITY. We overcome previous assumptions on the magnitude of bloat and give matching or close-to-matching upper and lower bounds for the expected optimization time. In particular, we show that the \((1+1)\) GP takes (i) \(\Theta(T_{\text{init}} + n \log n)\) iterations with bloat control on ORDER as well as MAJORITY; and (ii) \(O(T_{\text{init}} \log T_{\text{init}} + n (\log n)^3)\) and \(\Omega(T_{\text{init}} + n \log n)\) (and \(\Omega(T_{\text{init}} \log T_{\text{init}})\) for \(n = 1\)) iterations without bloat control on MAJORITY.

Doerr, Benjamin; Fischbeck, Philipp; Frahnow, Clemens; Friedrich, Tobias; Kötzing, Timo; Schirneck, Martin Island Models Meet Rumor Spreading. Genetic and Evolutionary Computation Conference (GECCO) 2017: 1359–1366
Island models in evolutionary computation solve problems by a careful interplay of independently running evolutionary algorithms on the islands and an exchange of good solutions between the islands. In this work, we conduct rigorous run time analyses for such island models trying to simultaneously obtain good run times and low communication effort. We improve the existing upper bounds for the communication effort (i) by improving the run time bounds via a careful analysis, (ii) by setting the balance between individual computation and communication in a more appropriate manner, and (iii) by replacing the usual communicate-with-all-neighbors approach with randomized rumor spreading, where each island contacts a randomly chosen neighbor. This epidemic communication paradigm is known to lead to very fast and robust information dissemination in many applications. Our results concern islands running simple (1+1) evolutionary algorithms, we regard \(d\)-dimensional tori and complete graphs as communication topologies, and optimize the classic test functions OneMax and LeadingOnes.

Doerr, Benjamin; Doerr, Carola; Kötzing, Timo Unknown Solution Length Problems With No Asymptotically Optimal Run Time. Genetic and Evolutionary Computation Conference (GECCO) 2017: 1367–1374
We revisit the problem of optimizing a fitness function of unknown dimension; that is, we face a function defined over bit strings of large length \(N\), but only \(n \ll N\) of them have an influence on the fitness. Neither the position of these relevant bits nor their number is known. In previous work, variants of the \((1+1)\) evolutionary algorithm (EA) have been developed that solve, for arbitrary \(s \in \mathbb{N}\), such OneMax and LeadingOnes instances, simultaneously for all \(n \in \mathbb{N}\), in expected time \(O(n (\log n)^2 \log\log n \cdots \log^{(s-1)}(n) (\log^{(s)}(n))^{1+\epsilon})\) and \(O(n^2 \log n \log\log n \cdots \log^{(s-1)}(n) (\log^{(s)}(n))^{1+\epsilon})\), respectively; that is, in almost the same time as if \(n\) and the relevant bit positions were known. In this work, we prove the first, almost matching, lower bounds for this setting. For LeadingOnes, we show that, for every \(s \in \mathbb{N}\), the \((1+1)\) EA with any mutation operator treating zeros and ones equally has an expected run time of \(\omega(n^2 \log n \log\log n \cdots \log^{(s)}(n))\) when facing problem size \(n\). Aiming at closing the small remaining gap, we realize that, quite surprisingly, there is no asymptotically best performance. For any algorithm solving, for all \(n\), all instances of size \(n\) in expected time at most \(T(n)\), there is an algorithm doing the same in time \(T'(n)\) with \(T' = o(T)\). For OneMax we show results of similar flavor.

Shi, Feng; Schirneck, Martin; Friedrich, Tobias; Kötzing, Timo; Neumann, Frank Reoptimization Times of Evolutionary Algorithms on Linear Functions Under Dynamic Uniform Constraints. Genetic and Evolutionary Computation Conference (GECCO) 2017: 1407–1414
The investigation of linear pseudo-Boolean functions plays a central role in the area of run time analysis of evolutionary computing techniques. Having an additional linear constraint on a linear function is equivalent to the NP-hard knapsack problem and special problem classes thereof have been investigated in recent works. In this paper, we extend these studies to problems with dynamic constraints and investigate the run time of different evolutionary algorithms to recompute an optimal solution when the constraint bound changes by a certain amount. We study the classical \((1+1)\) EA and population-based algorithms and show that they recompute an optimal solution very efficiently. Furthermore, we show that a variant of the \((1+(\lambda, \lambda))\) GA can recompute the optimal solution more efficiently in some cases.

Chauhan, Ankit; Lenzner, Pascal; Melnichenko, Anna; Molitor, Louise Selfish Network Creation with Non-Uniform Edge Cost. Symposium on Algorithmic Game Theory (SAGT) 2017: 160–172
Network creation games investigate complex networks from a game-theoretic point of view. Based on the original model by Fabrikant et al. [PODC'03] many variants have been introduced. However, almost all versions have the drawback that edges are treated uniformly, i.e. every edge has the same cost, and that this common parameter heavily influences the outcomes and the analysis of these games. We propose and analyze simple and natural parameter-free network creation games with non-uniform edge cost. Our models are inspired by social networks where the cost of forming a link is proportional to the popularity of the targeted node. Besides results on the complexity of computing a best response and on various properties of the sequential versions, we show that the most general version of our model has constant Price of Anarchy. To the best of our knowledge, this is the first proof of a constant Price of Anarchy for any network creation game.

Friedrich, Tobias; Ihde, Sven; Keßler, Christoph; Lenzner, Pascal; Neubert, Stefan; Schumann, David Efficient Best Response Computation for Strategic Network Formation under Attack. Symposium on Algorithmic Game Theory (SAGT) 2017: 199–211
Inspired by real-world examples, e.g. the Internet, researchers have introduced an abundance of strategic games to study natural phenomena in networks. Unfortunately, almost all of these games have the conceptual drawback of being computationally intractable, i.e. computing a best response strategy or checking if an equilibrium is reached is NP-hard. Thus, a main challenge in the field is to find tractable realistic network formation models. We address this challenge by investigating a very recently introduced model by Goyal et al. [WINE'16] which focuses on robust networks in the presence of a strong adversary who attacks (and kills) nodes in the network and lets this attack spread virus-like to neighboring nodes and their neighbors. Our main result is to establish that this natural model is one of the few exceptions which are both realistic and computationally tractable. In particular, we answer an open question of Goyal et al. by providing an efficient algorithm for computing a best response strategy, which implies that deciding whether the game has reached a Nash equilibrium can be done efficiently as well. Our algorithm essentially solves the problem of computing a minimal connection to a network which maximizes the reachability while hedging against severe attacks on the network infrastructure and may thus be of independent interest.

Bläsius, Thomas; Radermacher, Marcel; Rutter, Ignaz How to Draw a Planarization. Software Seminar (SOFSEM) 2017: 295–308
We study the problem of computing straight-line drawings of non-planar graphs with few crossings. We assume that a crossing-minimization algorithm is applied first, yielding a planarization, i.e., a planar graph with a dummy vertex for each crossing, that fixes the topology of the resulting drawing. We present and evaluate two different approaches for drawing a planarization in such a way that the edges of the input graph are as straight as possible. The first approach is based on the planarity-preserving force-directed algorithm ImPrEd; the second approach, which we call Geometric Planarization Drawing, iteratively moves vertices to their locally optimal positions in the given initial drawing.

Friedrich, Tobias; Ihde, Sven; Keßler, Christoph; Lenzner, Pascal; Neubert, Stefan; Schumann, David Brief Announcement: Efficient Best Response Computation for Strategic Network Formation under Attack. Symposium on Parallelism in Algorithms and Architectures (SPAA) 2017: 321–323
Inspired by real-world examples, e.g. the Internet, researchers have introduced an abundance of strategic games to study natural phenomena in networks. Unfortunately, almost all of these games have the conceptual drawback of being computationally intractable, i.e. computing a best response strategy or checking if an equilibrium is reached is NP-hard. Thus, a main challenge in the field is to find tractable realistic network formation models. We address this challenge by establishing that the recently introduced model by Goyal et al. [WINE'16], which focuses on robust networks in the presence of a strong adversary, is a rare exception which is both realistic and computationally tractable. In particular, we sketch an efficient algorithm for computing a best response strategy, which implies that deciding whether the game has reached a Nash equilibrium can be done efficiently as well. Our algorithm essentially solves the problem of computing a minimal connection to a network which maximizes the reachability while hedging against severe attacks on the network infrastructure.

Göbel, Andreas Counting, Modular Counting and Graph Homomorphisms. 2017

Gießen, Christian; Kötzing, Timo Robustness of Populations in Stochastic Environments. Algorithmica 2016: 462–489
We consider stochastic versions of OneMax and LeadingOnes and analyze the performance of evolutionary algorithms with and without populations on these problems. It is known that the (1+1) EA on OneMax performs well in the presence of very small noise, but poorly for higher noise levels. We extend these results to LeadingOnes and to many different noise models, showing how the application of drift theory can significantly simplify and generalize previous analyses. Most surprisingly, even small populations (of size \(\Theta(\log n)\)) can make evolutionary algorithms perform well for high noise levels, well outside the abilities of the (1+1) EA. Larger population sizes are even more beneficial; we consider both parent and offspring populations. In this sense, populations are robust in these stochastic settings.

Kötzing, Timo Concentration of First Hitting Times Under Additive Drift. Algorithmica 2016: 490–506
Recent advances in drift analysis have given us better and better tools for understanding random processes, including the run time of randomized search heuristics. In the setting of multiplicative drift we do not only have excellent bounds on the expected run time, but also more general results showing the concentration of the run time. In this paper we investigate the setting of additive drift under the assumption of strong concentration of the "step size" of the process. Under sufficiently strong drift towards the goal we show a strong concentration of the hitting time. In contrast to this, we show that in the presence of small drift a Gambler's-Ruin-like behavior of the process overrides the influence of the drift. Finally, in the presence of sufficiently strong negative drift the hitting time is superpolynomial with high probability; this corresponds to the so-called negative drift theorem, for which we give new variants.
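For context, the expectation bound that such concentration results refine is the classical additive drift theorem of He and Yao: for a non-negative stochastic process \(X_0, X_1, \dots\) with first hitting time \(T = \min\{t : X_t = 0\}\), a uniform drift bound implies a bound on the expected hitting time,
\[
\operatorname{E}[X_t - X_{t+1} \mid X_t > 0] \ge \delta > 0
\quad\Longrightarrow\quad
\operatorname{E}[T \mid X_0] \le \frac{X_0}{\delta}.
\]
The paper's results concern how tightly \(T\) concentrates around such expectations, depending on the strength of the drift and of the step-size concentration.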

Sutton, Andrew M. Superpolynomial Lower Bounds for the (1+1) EA on Some Easy Combinatorial Problems. Algorithmica 2016: 507–528
The (1+1) EA is a simple evolutionary algorithm that is known to be efficient on linear functions and on some combinatorial optimization problems. In this paper, we rigorously study its behavior on two easy combinatorial problems: finding the 2-coloring of a class of bipartite graphs, and constructing satisfying assignments for a class of satisfiable 2-CNF Boolean formulas. We prove that it is inefficient on both problems in the sense that the number of iterations the algorithm needs to minimize the cost functions is superpolynomial with high probability. Our motivation is to better understand the influence of problem instance structure on the run time character of a simple evolutionary algorithm. We are interested in what kind of structural features give rise to so-called metastable states at which, with probability \(1 - o(1)\), the (1+1) EA becomes trapped and subsequently has difficulty leaving. Finally, we show how to modify the (1+1) EA slightly in order to obtain a polynomial-time performance guarantee on both problems.

Bläsius, Thomas; Lehmann, Sebastian; Rutter, Ignaz Orthogonal Graph Drawing with Inflexible Edges. Computational Geometry 2016: 26–40
We consider the problem of creating plane orthogonal drawings of 4-planar graphs (planar graphs with maximum degree 4) with constraints on the number of bends per edge. More precisely, we have a flexibility function assigning to each edge \(e\) a natural number \(\mathrm{flex}(e)\), its flexibility. The problem FlexDraw asks whether there exists an orthogonal drawing such that each edge \(e\) has at most \(\mathrm{flex}(e)\) bends. It is known that FlexDraw is NP-hard if \(\mathrm{flex}(e)=0\) for every edge \(e\). On the other hand, FlexDraw can be solved efficiently if \(\mathrm{flex}(e) \ge 1\) and is trivial if \(\mathrm{flex}(e) \ge 2\) for every edge \(e\). To close the gap between the NP-hardness for \(\mathrm{flex}(e)=0\) and the efficient algorithm for \(\mathrm{flex}(e) \ge 1\), we investigate the computational complexity of FlexDraw in case only few edges are inflexible (i.e., have flexibility 0). We show that for any \(\epsilon > 0\) FlexDraw is NP-complete for instances with \(O(n^\epsilon)\) inflexible edges with pairwise distance \(\Omega(n^{1-\epsilon})\) (including the case where they induce a matching). On the other hand, we give an FPT algorithm with running time \(O(2^k \cdot n \cdot T_{\mathrm{flow}}(n))\), where \(T_{\mathrm{flow}}(n)\) is the time necessary to compute a maximum flow in a planar flow network with multiple sources and sinks, and \(k\) is the number of inflexible edges having at least one endpoint of degree 4.

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. Robustness of Ant Colony Optimization to Noise. Evolutionary Computation 2016: 237–254
Recently, Ant Colony Optimization (ACO) algorithms have been proven to be efficient in uncertain environments, such as noisy or dynamically changing fitness functions. Most of these analyses focus on combinatorial problems, such as path finding. We analyze an ACO algorithm in a setting where we try to optimize the simple OneMax test function, but with additive posterior noise sampled from a Gaussian distribution. Without noise the classical \((\mu+1)\)-EA outperforms any ACO algorithm, with smaller \(\mu\) being better; however, with large noise, the \((\mu+1)\)-EA fails, even for high values of \(\mu\) (which are known to help against small noise). In this paper we show that ACO is able to deal with arbitrarily large noise in a graceful manner: as long as the evaporation factor \(p\) is small enough, depending on the parameter \(\delta^2\) of the noise and the dimension \(n\) of the search space (\(p = o(1/(n(n + \delta \log n)^2 \log n))\)), optimization will be successful.

Case, John; Kötzing, Timo Strongly non-U-shaped language learning results by general techniques. Information and Computation 2016: 1–15
In learning, a semantic or behavioral U-shape occurs when a learner first learns, then unlearns, and, finally, relearns, some target concept. This paper introduces two general techniques and applies them especially to syntactic U-shapes in learning: one technique to show when they are necessary and one to show when they are unnecessary. The technique for the former is very general and applicable to a much wider range of learning criteria. It employs so-called self-learning classes of languages, which are shown to completely characterize when one criterion can learn more than another. We apply these techniques to show that, for set-driven and rearrangement-independent learning, any kind of U-shape is unnecessary. Furthermore, we show that U-shapes are necessary in a strong way for iterative learning, contrasting with an earlier result by Case and Moelius that semantic U-shapes are unnecessary for iterative learning.

Jain, Sanjay; Kötzing, Timo; Ma, Junqi; Stephan, Frank On the Role of Update Constraints and Text-Types in Iterative Learning. Information and Computation 2016: 152–168
The present work investigates the relationship of iterative learning with other learning criteria, such as decisiveness, caution, reliability, non-U-shapedness, monotonicity, strong monotonicity and conservativeness. Building on the result of Case and Moelius that iterative learners can be made non-U-shaped, we show that they can also be made cautious and decisive. Furthermore, we obtain various special results with respect to one-one texts, fat texts and one-one hypothesis spaces.

Jain, Sanjay; Kötzing, Timo; Stephan, Frank Enlarging learnable classes. Information and Computation 2016: 194–207
We study which classes of recursive functions have the property that their union with any other explanatorily learnable class of recursive functions is again explanatorily learnable. We provide sufficient criteria for classes of recursive functions to satisfy this property and also investigate its effective variants. Furthermore, we study the question of which learners can be effectively extended to learn a larger class of functions. We solve an open problem by showing that there is no effective procedure that performs this task on all learners that do not learn a dense class of recursive functions. However, we show that there are two effective extension procedures such that each learner is extended by one of them.

Kötzing, Timo; Palenta, Raphaela A map of update constraints in inductive inference. Theoretical Computer Science 2016: 4–24
We investigate how different learning restrictions reduce learning power and how the different restrictions relate to one another. We give a complete map for nine different restrictions, both for the case of complete information learning and for set-driven learning. This completes the picture for these well-studied delayable learning restrictions. A further insight is gained by different characterizations of conservative learning in terms of variants of cautious learning. Our analyses greatly benefit from general theorems we give, for example showing that learners with exclusively delayable restrictions can always be assumed total.

Case, John; Kötzing, Timo Topological Separations in Inductive Inference. Theoretical Computer Science 2016: 33–45
Regarding learning in the limit from positive data, a major concern is which classes of languages are learnable with respect to a given learning criterion. We are particularly interested herein in the reasons for a class of languages to be unlearnable. We consider two types of reasons. One type is called topological, where it does not help if the learners are allowed to be uncomputable (an example of Gold's is that no class containing an infinite language and all its finite sublanguages is learnable, even by an uncomputable learner). The other reason is called computational (where the learners are required to be algorithmic). In particular, two learning criteria might allow for learning different classes of languages from one another, depending on whether the unlearnability is of topological or computational type. In this paper we formalize the idea of two learning criteria separating topologically in learning power. This allows us to study more closely why two learning criteria separate in learning power. For a variety of learning criteria, concerning vacillatory, monotone, (several kinds of) iterative and feedback learning, we show that certain learning criteria separate topologically, and certain others, which are known to separate, are shown not to separate topologically. Showing that learning criteria do not separate topologically implies that any known separation must necessarily exploit the algorithmicity of the learner.

Bläsius, Thomas; Rutter, Ignaz A new perspective on clustered planarity as a combinatorial embedding problem. Theoretical Computer Science 2016: 306–315
The clustered planarity problem (c-planarity) asks whether a hierarchically clustered graph admits a planar drawing such that the clusters can be nicely represented by regions. We introduce the cd-tree data structure and give a new characterization of c-planarity. It leads to efficient algorithms for c-planarity testing in the following cases. (i) Every cluster and every co-cluster (complement of a cluster) has at most two connected components. (ii) Every cluster has at most five outgoing edges. Moreover, the cd-tree reveals interesting connections between c-planarity and planarity with constraints on the order of edges around vertices. On the one hand, this gives rise to a number of new open problems related to c-planarity; on the other hand, it provides a new perspective on previous results.

Bläsius, Thomas; Rutter, Ignaz Simultaneous PQ-Ordering with Applications to Constrained Embedding Problems. Transactions on Algorithms 2016: 16
In this article, we define and study the new problem of Simultaneous PQ-Ordering. Its input consists of a set of PQ-trees, which represent sets of circular orders of their leaves, together with a set of child-parent relations between these PQ-trees, such that the leaves of the child form a subset of the leaves of the parent. Simultaneous PQ-Ordering asks whether orders of the leaves of each of the trees can be chosen simultaneously; that is, for every child-parent relation, the order chosen for the parent is an extension of the order chosen for the child. We show that Simultaneous PQ-Ordering is NP-complete in general, and we identify a family of instances that can be solved efficiently, the 2-fixed instances. We show that this result serves as a framework for several other problems that can be formulated as instances of Simultaneous PQ-Ordering. In particular, we give linear-time algorithms for recognizing simultaneous interval graphs and extending partial interval representations. Moreover, we obtain a linear-time algorithm for Partially PQ-Constrained Planarity for biconnected graphs, which asks for a planar embedding in the presence of PQ-trees that restrict the possible orderings of edges around vertices, and a quadratic-time algorithm for Simultaneous Embedding with Fixed Edges for biconnected graphs with a connected intersection. Both results can be extended to the case where the input graphs are not necessarily biconnected but have the property that each cutvertex is contained in at most two nontrivial blocks. This includes, for example, the case where both graphs have a maximum degree of 5.

Bläsius, Thomas; Rutter, Ignaz; Wagner, Dorothea Optimal Orthogonal Graph Drawing with Convex Bend Costs. Transactions on Algorithms 2016: 33
Traditionally, the quality of orthogonal planar drawings is quantified by the total number of bends or the maximum number of bends per edge. However, this neglects that, in typical applications, edges have varying importance. We consider the problem OptimalFlexDraw, which is defined as follows. Given a planar graph \(G\) on \(n\) vertices with maximum degree 4 (4-planar graph) and for each edge \(e\) a cost function \(\mathrm{cost}_e \colon \mathbb{N}_0 \rightarrow \mathbb{R}\) defining costs depending on the number of bends \(e\) has, compute a planar orthogonal drawing of \(G\) of minimum cost. In this generality OptimalFlexDraw is NP-hard. We show that it can be solved efficiently if (1) the cost function of each edge is convex and (2) the first bend on each edge does not cause any cost. Our algorithm takes time \(O(n \cdot T_{\mathrm{flow}}(n))\) and \(O(n^2 \cdot T_{\mathrm{flow}}(n))\) for biconnected and connected graphs, respectively, where \(T_{\mathrm{flow}}(n)\) denotes the time to compute a minimum-cost flow in a planar network with multiple sources and sinks. Our result is the first polynomial-time bend-optimization algorithm for general 4-planar graphs optimizing over all embeddings. Previous work considers restricted graph classes and unit costs.

Göbel, Andreas; Goldberg, Leslie Ann; Richerby, David Counting Homomorphisms to Square-Free Graphs, Modulo 2. Transactions on Computation Theory 2016: 12:1–12:29
We study the problem \(\oplus\)HomsToH of counting, modulo 2, the homomorphisms from an input graph to a fixed undirected graph \(H\). A characteristic feature of modular counting is that cancellations make wider classes of instances tractable than is the case for exact (non-modular) counting; thus, subtle dichotomy theorems can arise. We show the following dichotomy: for any \(H\) that contains no 4-cycles, \(\oplus\)HomsToH is either in polynomial time or is \(\oplus\)P-complete. This partially confirms a conjecture of Faben and Jerrum that was previously only known to hold for trees and for a restricted class of treewidth-2 graphs called cactus graphs. We confirm the conjecture for a rich class of graphs, including graphs of unbounded treewidth. In particular, we focus on square-free graphs, which are graphs without 4-cycles. These graphs arise frequently in combinatorics, for example, in connection with the strong perfect graph theorem and in certain graph algorithms. Previous dichotomy theorems required the graph to be tree-like so that tree-like decompositions could be exploited in the proof. We prove the conjecture for a much richer class of graphs by adopting a much more general approach.

Arndt, Tobias; Hafner, Danijar; Kellermeier, Thomas; Krogmann, Simon; Razmjou, Armin; Krejca, Martin S.; Rothenberger, Ralf; Friedrich, Tobias Probabilistic Routing for On-Street Parking Search. European Symposium on Algorithms (ESA) 2016: 6:1–6:13
An estimated \(30\%\) of urban traffic is caused by the search for parking spots. Traffic could be reduced by suggesting effective routes leading along potential parking spots. In this paper, we formalize parking search as a probabilistic problem on a road graph and show that it is NP-complete. We explore heuristics that optimize for the driving duration and the walking distance to the destination. Routes are constrained to reach a certain probability threshold of finding a spot. Empirically estimated probabilities of successful parking attempts are provided by TomTom on a per-street basis. We release these probabilities as a dataset of about 80,000 roads covering the Berlin area. This allows parking search algorithms to be evaluated on a real road network with realistic probabilities for the first time. However, for many other areas, parking probabilities are not openly available. Because they are costly to collect, we propose an algorithm that relies on conventional road attributes only. Our experiments show that this algorithm comes within a factor of 1.3 of the baseline in our cost measure. This leads to the conclusion that conventional road attributes may be sufficient to compute reasonably good parking search routes.
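The probability threshold constraint mentioned above is easy to state if one assumes, as a simplification, that per-street parking probabilities are independent; the helper below is our own sketch under that assumption, not code from the paper.

```python
def route_success_probability(street_probs):
    """Probability of finding at least one free spot along a route:
    1 minus the probability of failing on every street (assuming
    independent per-street success probabilities)."""
    fail = 1.0
    for p in street_probs:
        fail *= 1.0 - p
    return 1.0 - fail

def meets_threshold(street_probs, threshold):
    """Constraint check: does the route reach the success threshold?"""
    return route_success_probability(street_probs) >= threshold
```

For example, a route along two streets with a 50% chance each gives an overall success probability of 0.75.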

Baum, Moritz; Bläsius, Thomas; Gemsa, Andreas; Rutter, Ignaz; Wegner, Franziska Scalable Exact Visualization of Isocontours in Road Networks via Minimum-Link Paths. European Symposium on Algorithms (ESA) 2016: 7:1–7:18
Isocontours in road networks represent the area that is reachable from a source within a given resource limit. We study the problem of computing accurate isocontours in realistic, large-scale networks. We propose isocontours represented by polygons with a minimum number of segments that separate reachable and unreachable components of the network. Since the resulting problem is not known to be solvable in polynomial time, we introduce several heuristics that run in (almost) linear time and are simple enough to be implemented in practice. A key ingredient is a new practical linear-time algorithm for minimum-link paths in simple polygons. Experiments in a challenging realistic setting show excellent performance of our algorithms in practice, computing near-optimal solutions in a few milliseconds on average, even for long ranges.

Bläsius, Thomas; Friedrich, Tobias; Krohmer, Anton Hyperbolic Random Graphs: Separators and Treewidth. European Symposium on Algorithms (ESA) 2016: 15:1–15:16
When designing and analyzing algorithms, one can obtain better and more realistic results for practical instances by assuming a certain probability distribution on the input. The worst-case runtime is then replaced by the expected runtime or by bounds that hold with high probability (whp), i.e., with probability \(1 - O(1/n)\), on the random input. Hyperbolic random graphs can be used to model complex real-world networks, as they share many important properties such as a small diameter, a large clustering coefficient, and a power-law degree distribution. Divide and conquer is an important algorithmic design principle that works particularly well if the instance admits small separators. We show that hyperbolic random graphs in fact have comparatively small separators. More precisely, we show that a hyperbolic random graph can be expected to have a balanced separator hierarchy with separators of size \(O(\sqrt{n^{3-\beta}})\), \(O(\log n)\), and \(O(1)\) if \(2 < \beta < 3\), \(\beta = 3\), and \(3 < \beta\), respectively (\(\beta\) is the power-law exponent). We infer that these graphs have whp a treewidth of \(O(\sqrt{n^{3-\beta}})\), \(O(\log^{2}n)\), and \(O(\log n)\), respectively. For \(2 < \beta < 3\), this matches a known lower bound. For the more realistic (but harder to analyze) binomial model, we still prove a sublinear bound on the treewidth. To demonstrate the usefulness of our results, we apply them to obtain fast matching algorithms and an approximation scheme for Independent Set.

Bläsius, Thomas; Friedrich, Tobias; Krohmer, Anton; Laue, Sören Efficient Embedding of Scale-Free Graphs in the Hyperbolic Plane. European Symposium on Algorithms (ESA) 2016: 16:1–16:18
EATCS Best Paper Award
Hyperbolic geometry appears to be intrinsic in many large real networks. We construct and implement a new maximum likelihood estimation algorithm that embeds scale-free graphs in the hyperbolic space. All previous similar embedding algorithms require a runtime of \(\Omega(n^{2})\). Our algorithm achieves quasilinear runtime, which makes it the first algorithm that can embed networks with hundreds of thousands of nodes in less than one hour. We demonstrate the performance of our algorithm on artificial and real networks. In all typical metrics, like log-likelihood and greedy routing, our algorithm discovers embeddings that are very close to the ground truth.

Chauhan, Ankit; Friedrich, Tobias; Rothenberger, Ralf Greed is Good for Deterministic Scale-Free Networks. Foundations of Software Technology and Theoretical Computer Science (FSTTCS) 2016: 33:1–33:15
Large real-world networks typically follow a power-law degree distribution. To study such networks, numerous random graph models have been proposed. However, real-world networks are not drawn at random. Therefore, Brach, Cygan, Lacki, and Sankowski [SODA 2016] introduced two natural deterministic conditions: (1) a power-law upper bound on the degree distribution (PLB-U) and (2) power-law neighborhoods, that is, the degree distribution of the neighbors of each vertex is also upper bounded by a power law (PLB-N). They showed that many real-world networks satisfy both deterministic properties and exploited them to design faster algorithms for a number of classical graph problems. We complement the work of Brach et al. by showing that some well-studied random graph models exhibit both of the mentioned PLB properties and additionally also a power-law lower bound on the degree distribution (PLB-L). All three properties hold with high probability for Chung-Lu Random Graphs and Geometric Inhomogeneous Random Graphs and almost surely for Hyperbolic Random Graphs. As a consequence, all results of Brach et al. also hold with high probability or almost surely for those random graph classes. In the second part of this work we study three classical NP-hard combinatorial optimization problems on PLB networks. It is known that on general graphs with maximum degree \(\Delta\), a greedy algorithm, which chooses nodes in the order of their degree, only achieves an \(\Omega(\ln \Delta)\)-approximation for Minimum Vertex Cover and Minimum Dominating Set, and an \(\Omega(\Delta)\)-approximation for Maximum Independent Set. We prove that the PLB-U property suffices for the greedy approach to achieve a constant-factor approximation for all three problems. We also show that all three combinatorial optimization problems are APX-complete even if all PLB properties hold; hence, a PTAS cannot be expected unless P = NP.
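The degree-based greedy approach analyzed above can be sketched in a few lines. The code below is our own max-degree variant for Minimum Vertex Cover, for illustration only; the paper analyzes greedy strategies based on degree order, not this exact implementation.

```python
def greedy_vertex_cover(adj):
    """Greedy Minimum Vertex Cover: repeatedly pick a vertex of maximum
    current degree, add it to the cover, and remove its incident edges.
    On general graphs this greedy is only an Omega(ln Delta)-approximation;
    under a power-law upper bound on the degrees (PLB-U) it achieves a
    constant factor."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    cover = []
    while any(adj.values()):
        v = max(adj, key=lambda u: len(adj[u]))  # max current degree
        cover.append(v)
        for u in adj[v]:
            adj[u].discard(v)                    # remove incident edges
        adj[v] = set()
    return cover

# On a star, greedy picks the center and covers everything at once.
star = {"c": {"a", "b", "d"}, "a": {"c"}, "b": {"c"}, "d": {"c"}}
cover = greedy_vertex_cover(star)
```

This quadratic-time sketch favors clarity; a bucket queue over degrees would make it near-linear.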

Friedrich, Tobias; Kötzing, Timo; Quinzan, Francesco; Sutton, Andrew M. Ant Colony Optimization Beats Resampling on Noisy Functions. Genetic and Evolutionary Computation Conference (GECCO) 2016: 3–4
Despite the pervasiveness of noise in real-world optimization, there is little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques such as statistical resampling. Ant Colony Optimization (ACO) algorithms are claimed to be particularly well-suited to dynamic and noisy problems, even without explicit noise-handling techniques. In this work, we empirically investigate the trade-offs between resampling and the noise-handling abilities of ACO algorithms. Our main focus is to locate the point where resampling costs more than it is worth.
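Statistical resampling, the explicit noise-handling baseline in this comparison, simply averages repeated noisy evaluations: for additive noise of variance \(\sigma^2\), averaging \(k\) samples cuts the variance to \(\sigma^2/k\) at \(k\) times the evaluation cost. The toy sketch below, with a hypothetical noisy OneMax, is our own illustration of that trade-off.

```python
import random

def resampled_fitness(noisy_eval, x, k, rng):
    """Average k independent noisy evaluations of x; for additive noise
    of variance sigma^2 the estimator has variance sigma^2 / k."""
    return sum(noisy_eval(x, rng) for _ in range(k)) / k

def noisy_onemax(x, rng, sigma=2.0):
    """Hypothetical noisy fitness: OneMax plus Gaussian posterior noise."""
    return sum(x) + rng.gauss(0.0, sigma)

rng = random.Random(42)
# True fitness of the all-ones string of length 10 is 10; with k = 100
# resamples the estimate's standard deviation drops from 2.0 to 0.2.
estimate = resampled_fitness(noisy_onemax, [1] * 10, 100, rng)
```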

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. The Benefit of Recombination in Noisy Evolutionary Search. Genetic and Evolutionary Computation Conference (GECCO) 2016: 161–162
Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, randomized search heuristics such as evolutionary algorithms are a popular choice because they are often assumed to exhibit some kind of resistance to noise. Empirical evidence suggests that some algorithms, such as estimation of distribution algorithms (EDAs), are robust against a scaling of the noise intensity, even without resorting to explicit noise-handling techniques such as resampling. In this paper, we want to support such claims with mathematical rigor. We introduce the concept of graceful scaling, in which the run time of an algorithm scales polynomially with the noise intensity. We study a monotone fitness function over binary strings with additive noise taken from a Gaussian distribution. We show that myopic heuristics cannot efficiently optimize the function under arbitrarily intense noise without any explicit noise-handling. Furthermore, we prove that using a population does not help. Finally, we show that a simple EDA called the Compact Genetic Algorithm can overcome the short-sightedness of mutation-only heuristics and scale gracefully with noise. We conjecture that recombinative genetic algorithms also have this property.

Dang, Duc-Cuong; Friedrich, Tobias; Krejca, Martin S.; Kötzing, Timo; Lehre, Per Kristian; Oliveto, Pietro S.; Sudholt, Dirk; Sutton, Andrew Michael Escaping Local Optima with Diversity Mechanisms and Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 645–652
Population diversity is essential for the effective use of any crossover operator. We compare seven commonly used diversity mechanisms and prove rigorous run time bounds for the \((\mu+1)\) GA using uniform crossover on the fitness function \(Jump_k\). All previous results in this context only hold for an unrealistically low crossover probability \(p_c=O(k/n)\), while we give analyses for the setting of constant \(p_c < 1\) in all but one case. Our bounds show a dependence on the problem size \(n\), the jump length \(k\), the population size \(\mu\), and the crossover probability \(p_c\). For the typical case of constant \(k > 2\) and constant \(p_c\), we can compare the resulting expected optimisation times for the different diversity mechanisms, assuming an optimal choice of \(\mu\): \(O(n^{k-1})\) for duplicate elimination/minimisation, \(O(n^2 \log n)\) for maximising the convex hull, \(O(n \log n)\) for deterministic crowding (assuming \(p_c = k/n\)), \(O(n \log n)\) for maximising the Hamming distance, \(O(n \log n)\) for fitness sharing, and \(O(n \log n)\) for the single-receiver island model. This proves a sizeable advantage of all variants of the \((\mu+1)\) GA compared to the (1+1) EA, which requires \(\Theta(n^k)\). In a short empirical study we confirm that the asymptotic differences can also be observed experimentally.
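For reference, \(Jump_k\) is a standard benchmark with a fitness gap of size \(k\) just below the optimum; a minimal definition (our own transcription of the usual formulation) reads:

```python
def jump(x, k):
    """Jump_k on a bit string x: OneMax shifted by k, except in the
    "gap" of k - 1 levels just below the optimum, where fitness
    decreases towards the all-ones string."""
    n, ones = len(x), sum(x)
    if ones == n or ones <= n - k:
        return k + ones
    return n - ones  # the gap that mutation alone must cross in one jump
```

Elitist mutation-only algorithms must flip \(k\) specific bits at once to cross the gap, which is why the (1+1) EA needs \(\Theta(n^k)\) steps while crossover-based variants are much faster.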

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Nallaperuma, Samadhi; Neumann, Frank; Schirneck, Martin Fast Building Block Assembly by Majority Vote Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 661–668
Different works have shown how crossover can help with building block assembly. Typically, crossover might get lucky and select good building blocks from each parent, but these lucky choices are usually rare. In this work we consider a crossover operator which works on three parent individuals. In each component, the offspring inherits the value present in the majority of the parents; thus, we call this crossover operator majority vote. We show that, if good components are sufficiently prevalent in the individuals, majority vote creates an optimal individual with high probability. Furthermore, we show that this process can be amplified: as long as components are good independently and with probability at least \(1/2+\delta\), we require only \(O(\log(1/\delta) + \log \log n)\) successive stages of majority vote to create an optimal individual with high probability! We show how this applies in two scenarios. The first scenario is the Jump test function. With sufficient diversity, we get an optimization time of \(O(n \log n)\) even for jump sizes as large as \(O(n^{1/2-\epsilon})\). Our second scenario is a family of vertex cover instances. Majority vote optimizes this family efficiently, while local searches fail and only highly specialized two-parent crossovers are successful.
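The three-parent operator is simple to state in code; the sketch below (binary components, our own illustration) shows the component-wise majority rule.

```python
def majority_vote(p1, p2, p3):
    """Three-parent majority vote crossover: each offspring component
    takes the value held by at least two of the three (binary) parents."""
    return [1 if a + b + c >= 2 else 0 for a, b, c in zip(p1, p2, p3)]

# A component is inherited as 1 exactly when at least two parents carry a 1.
child = majority_vote([1, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 1])
```

If each parent independently carries the good value in a component with probability at least \(1/2+\delta\), the child carries it with strictly higher probability, which is the amplification effect the paper iterates.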

Doerr, Benjamin; Doerr, Carola; Kötzing, Timo The Right Mutation Strength for Multi-Valued Decision Variables. Genetic and Evolutionary Computation Conference (GECCO) 2016: 1115–1122
The most common representation in evolutionary computation is bit strings. This is ideal to model binary decision variables, but less useful for variables taking more values. With very little theoretical work existing on how to use evolutionary algorithms for such optimization problems, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over \(\Omega=\{0,1,\dots,r-1\}^n\). More precisely, we regard a variety of problem classes requesting the componentwise minimization of the distance to an unknown target vector \(z \in \Omega\). For such problems we see a crucial difference in how we extend the standard bit mutation operator to these multi-valued domains. While it is natural to select each position of the solution vector to be changed independently with probability \(1/n\), there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of \(\Theta(nr\log n)\). If we change each selected position by either \(+1\) or \(-1\) (random choice), the optimization time reduces to \(\Theta(nr+n\log n)\). If we use a random mutation strength \(i \in \{0,1,\dots,r-1\}\) with probability inversely proportional to \(i\) and change the selected position by either \(+i\) or \(-i\) (random choice), then the optimization time becomes \(\Theta(n\log(r)(\log(n)+\log(r)))\), bringing down the dependence on \(r\) from linear to polylogarithmic. One of our results depends on a new variant of the lower bounding multiplicative drift theorem.
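The first two mutation variants compared above can be sketched directly; this is our own illustration, and the boundary handling (clamping steps to the domain) in the \(\pm 1\) variant is our assumption, not taken from the paper.

```python
import random

def mutate_uniform_reset(x, r, rng):
    """Select each position with probability 1/n; set it to a uniformly
    random value different from the current one.
    (Variant with expected run time Theta(n r log n).)"""
    n = len(x)
    y = list(x)
    for i in range(n):
        if rng.random() < 1 / n:
            y[i] = rng.choice([v for v in range(r) if v != x[i]])
    return y

def mutate_pm1(x, r, rng):
    """Select each position with probability 1/n; change it by +1 or -1
    (random choice), clamped to {0, ..., r-1} -- the clamping is our
    assumption. (Variant with expected run time Theta(nr + n log n).)"""
    n = len(x)
    y = list(x)
    for i in range(n):
        if rng.random() < 1 / n:
            step = rng.choice([-1, 1])
            y[i] = min(r - 1, max(0, x[i] + step))
    return y

rng = random.Random(1)
x = [2] * 10
stepped = mutate_pm1(x, 5, rng)
reset = mutate_uniform_reset(x, 5, rng)
```

The contrast is visible already in the code: the reset variant can overshoot the target arbitrarily, while the \(\pm 1\) variant approaches it slowly but surely; the harmonic mutation strength described last interpolates between the two.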

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S. EDAs cannot be Balanced and Stable. Genetic and Evolutionary Computation Conference (GECCO) 2016: 1139–1146
Estimation of Distribution Algorithms (EDAs) work by iteratively updating a distribution over the search space with the help of samples from each iteration. Up to now, theoretical analyses of EDAs are scarce and present run time results for specific EDAs. We propose a new framework for EDAs that captures the idea of several known optimizers, including PBIL, UMDA, \(\lambda\)-MMAS-IB, cGA, and \((1,\lambda)\)-EA. Our focus is on analyzing two core features of EDAs: a balanced EDA is sensitive to signals in the fitness; a stable EDA remains uncommitted under a biasless fitness function. We prove that no EDA can be both balanced and stable. The LeadingOnes function is a prime example where, at the beginning of the optimization, the fitness function shows no bias for many bits. Since many well-known EDAs are balanced and thus not stable, they are not well-suited to optimize LeadingOnes. We give a stable EDA which optimizes LeadingOnes within a time of \(O(n\,\log n)\).
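One of the EDAs covered by the framework, the cGA, illustrates the iterative distribution update concretely. The sketch below is a common textbook formulation (our own, with frequency borders at \(1/n\) and \(1-1/n\)), shown on OneMax; it is not the paper's framework itself.

```python
import random

def cga_onemax(n, K, iters=5000, seed=3):
    """Compact Genetic Algorithm (cGA) on OneMax: maintain one frequency
    per bit, sample two solutions per iteration, and shift each frequency
    by 1/K towards the better sample, capped at the borders 1/n and 1-1/n."""
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(iters):
        a = [1 if rng.random() < pi else 0 for pi in p]
        b = [1 if rng.random() < pi else 0 for pi in p]
        if sum(b) > sum(a):
            a, b = b, a                      # a is now the better sample
        for i in range(n):
            p[i] = min(1 - 1 / n, max(1 / n, p[i] + (a[i] - b[i]) / K))
    return p

frequencies = cga_onemax(10, 25)
```

The update only moves a frequency when the two samples differ in that bit, which is exactly the kind of sensitivity to fitness signals that the balanced/stable distinction formalizes.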

Galanis, Andreas; Göbel, Andreas; Goldberg, Leslie Ann; Lapinskas, John; Richerby, David Amplifiers for the Moran Process. International Colloquium on Automata, Languages and Programming (ICALP) 2016: 62:1–62:13
The Moran process, as studied by Lieberman, Hauert and Nowak, is a randomised algorithm modelling the spread of genetic mutations in populations. The algorithm runs on an underlying graph where individuals correspond to vertices. Initially, one vertex (chosen uniformly at random) possesses a mutation, with fitness \(r > 1\). All other individuals have fitness 1. During each step of the algorithm, an individual is chosen with probability proportional to its fitness, and its state (mutant or non-mutant) is passed on to an out-neighbour which is chosen uniformly at random. If the underlying graph is strongly connected then the algorithm will eventually reach fixation, in which all individuals are mutants, or extinction, in which no individuals are mutants. An infinite family of directed graphs is said to be strongly amplifying if, for every \(r > 1\), the extinction probability tends to 0 as the number of vertices increases. Strong amplification is a rather surprising property: it means that in such graphs, the fixation probability of a uniformly placed initial mutant tends to 1 even though the initial mutant only has a fixed selective advantage of \(r > 1\) (independent of \(n\)). The name "strongly amplifying" comes from the fact that this selective advantage is "amplified". Strong amplifiers have received quite a bit of attention, and Lieberman et al. proposed two potentially strongly amplifying families: superstars and metafunnels. Heuristic arguments have been published, arguing that there are infinite families of superstars that are strongly amplifying. The same has been claimed for metafunnels. We give the first rigorous proof that there is an infinite family of directed graphs that is strongly amplifying. We call the graphs in the family "megastars". When the algorithm is run on an \(n\)-vertex graph in this family, starting with a uniformly chosen mutant, the extinction probability is roughly \(n^{-1/2}\) (up to logarithmic factors).
We prove that all infinite families of superstars and metafunnels have larger extinction probabilities (as a function of \(n\)). Finally, we prove that our analysis of megastars is fairly tight: there is no infinite family of megastars such that the Moran algorithm gives a smaller extinction probability (up to logarithmic factors). Also, we provide a counterexample which clarifies the literature concerning the isothermal theorem of Lieberman et al. A full version [Galanis/Göbel/Goldberg/Lapinskas/Richerby, Preprint] containing detailed proofs is available at http://arxiv.org/abs/1512.05632. Theorem numbering here matches the full version.
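A direct simulation makes the fixation/extinction dynamics of the process concrete. The code below is our own toy sketch (graph given as an out-neighbour dictionary), not the megastar construction analyzed in the paper.

```python
import random

def moran_extinction_fraction(adj, r, trials=200, seed=7):
    """Simulate the Moran process on a directed graph `adj` (vertex ->
    list of out-neighbours) with mutant fitness r > 1, and estimate the
    probability that the mutation goes extinct."""
    rng = random.Random(seed)
    vertices = list(adj)
    extinct = 0
    for _ in range(trials):
        mutant = {rng.choice(vertices)}       # uniformly placed mutant
        while 0 < len(mutant) < len(vertices):
            # Choose a reproducing individual with probability
            # proportional to fitness (r for mutants, 1 otherwise).
            weights = [r if v in mutant else 1 for v in vertices]
            v = rng.choices(vertices, weights=weights)[0]
            u = rng.choice(adj[v])            # offspring replaces an out-neighbour
            if v in mutant:
                mutant.add(u)
            else:
                mutant.discard(u)
        if not mutant:
            extinct += 1
    return extinct / trials

# On the complete graph K3 with a large advantage r, extinction is rare:
# the well-mixed fixation probability is (1 - 1/r) / (1 - 1/r^n).
k3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
extinction = moran_extinction_fraction(k3, r=10.0)
```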

Bläsius, Thomas; Friedrich, Tobias; Schirneck, Martin The Parameterized Complexity of Dependency Detection in Relational Databases. International Symposium on Parameterized and Exact Computation (IPEC) 2016: 6:1–6:13
We study the parameterized complexity of classical problems that arise in the profiling of relational data. Namely, we characterize the complexity of detecting unique column combinations (candidate keys), functional dependencies, and inclusion dependencies with the solution size as parameter. While the discovery of uniques and functional dependencies, respectively, turns out to be W[2]-complete, the detection of inclusion dependencies is one of the first natural problems proven to be complete for the class W[3]. As a side effect, our reductions give insights into the complexity of enumerating all minimal unique column combinations or functional dependencies.

Friedrich, Tobias Scale-Free Networks, Hyperbolic Geometry, and Efficient Algorithms. Symposium on Mathematical Foundations of Computer Science (MFCS) 2016: 4:1-4:3
Invited Talk
The node degrees of large real-world networks often follow a power-law distribution. Such scale-free networks can be social networks, internet topologies, the web graph, power grids, or many other networks from literally hundreds of domains. The talk will introduce several mathematical models of scale-free networks (e.g. preferential attachment graphs, Chung-Lu graphs, hyperbolic random graphs) and analyze some of their properties (e.g. diameter, average distance, clustering). We then present several algorithms and distributed processes on and for these network models (e.g. rumor spreading, load balancing, de-anonymization, embedding) and discuss a number of open problems. The talk assumes no prior knowledge about scale-free networks, distributed computing or hyperbolic geometry.

Gao, Wanru; Friedrich, Tobias; Neumann, Frank Fixed-Parameter Single Objective Search Heuristics for Minimum Vertex Cover. Parallel Problem Solving From Nature (PPSN) 2016: 740-750
We consider how well-known branching approaches for the classical minimum vertex cover problem can be turned into randomized initialization strategies with provable performance guarantees, and we evaluate them experimentally. Furthermore, we show how these techniques can be built into local search components and analyze a basic local search variant that is similar to a state-of-the-art approach called NuMVC. Our experimental results for the two local search approaches show that making use of more complex branching strategies in the local search component can lead to better results on various benchmark graphs.

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. Graceful Scaling on Uniform versus Steep-Tailed Noise. Parallel Problem Solving From Nature (PPSN) 2016: 761-770
Recently, different evolutionary algorithms (EAs) have been analyzed in noisy environments. The most frequently used noise model for this was additive posterior noise (noise added after the fitness evaluation) taken from a Gaussian distribution. In particular, for this setting it was shown that the \((\mu+1)\) EA on OneMax does not scale gracefully (higher noise cannot efficiently be compensated for by higher \(\mu\)). In this paper we want to understand whether there is anything special about the Gaussian distribution which makes the \((\mu+1)\) EA not scale gracefully. We keep the setting of posterior noise, but we look at other distributions. We see that for exponential tails the \((\mu+1)\) EA on OneMax also does not scale gracefully, for similar reasons as in the case of Gaussian noise. On the other hand, for uniform distributions (as well as other, similar distributions) we see that the \((\mu+1)\) EA on OneMax does scale gracefully, indicating the importance of the noise model.
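To make the posterior-noise setting concrete, here is a minimal \((\mu+1)\) EA on OneMax where every fitness evaluation adds freshly sampled noise. The noise amplitudes, the re-evaluation scheme, and all names are assumptions for this sketch, not the paper's exact formalization.

```python
import random

def noisy_fitness(x, noise):
    """OneMax with additive posterior noise: noise() is sampled fresh on
    every evaluation, after the true fitness has been computed."""
    return sum(x) + noise()

def mu_plus_one_ea(n, mu, noise, max_evals=100000):
    """Minimal (mu+1) EA: pick a uniform parent, flip each bit with
    probability 1/n, then drop the individual with the worst (noisy)
    fitness. Returns True once the true optimum (all ones) is sampled."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for _ in range(max_evals):
        parent = random.choice(pop)
        child = [b ^ (random.random() < 1.0 / n) for b in parent]
        if sum(child) == n:
            return True
        pop.append(child)
        # re-evaluate everyone under fresh noise, remove the worst
        pop.sort(key=lambda x: noisy_fitness(x, noise))
        pop.pop(0)
    return False

uniform_noise = lambda: random.uniform(-2, 2)  # bounded (steep) tail
gaussian_noise = lambda: random.gauss(0, 2)    # unbounded tail
```

Swapping `uniform_noise` for `gaussian_noise` changes only the tail of the noise, which is exactly the distinction the paper shows to matter.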

Friedrich, Tobias; Kötzing, Timo; Sutton, Andrew M. On the Robustness of Evolving Populations. Parallel Problem Solving From Nature (PPSN) 2016: 771-781
Most theoretical work that studies the benefit of recombination focuses on the ability of crossover to speed up optimization time on specific search problems. In this paper, we take a slightly different perspective and investigate recombination in the context of evolving solutions that exhibit mutational robustness, i.e., they display insensitivity to small perturbations. Various models in population genetics have demonstrated that increasing the effective recombination rate promotes the evolution of robustness. We show that this result also holds in the context of evolutionary computation by proving that crossover promotes the evolution of robust solutions in the standard \((\mu+1)\) GA. Surprisingly, our results show that the effect is present even when robust solutions are at a selective disadvantage due to lower fitness values.

Doerr, Benjamin; Doerr, Carola; Kötzing, Timo Provably Optimal Self-Adjusting Step Sizes for Multi-Valued Decision Variables. Parallel Problem Solving From Nature (PPSN) 2016: 782-791
We regard the problem of maximizing a OneMax-like function defined over an alphabet of size \(r\). In previous work [GECCO 2016] we have investigated how three different mutation operators influence the performance of Randomized Local Search (RLS) and the (1+1) Evolutionary Algorithm. This work revealed that among these natural mutation operators none is superior to the other two for any choice of \(r\). We have also given in [GECCO 2016] some indication that the best achievable run time for large \(r\) is \(\Theta(n \log r(\log n + \log r))\), regardless of how the mutation operator is chosen, as long as it is a static choice (i.e., the distribution used for variation of the current individual does not change over time). In this work we show that we can achieve a better performance if we allow for adaptive mutation operators. More precisely, we analyze the performance of RLS using a self-adjusting mutation strength. In this algorithm the size of the steps taken in each iteration depends on the success of previous iterations. That is, the mutation strength is increased after a successful iteration and it is decreased otherwise. We show that this idea yields an expected optimization time of \(\Theta(n(\log n + \log r))\), which is optimal among all comparison-based search heuristics. This is the first time that self-adjusting parameter choices are shown to outperform static choices on a discrete multi-valued optimization problem.
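A hedged sketch of the self-adjusting idea on an \(r\)-valued OneMax-type function: the step size is doubled after a successful iteration and halved otherwise. The doubling/halving constants, the distance-based fitness, and the function names are illustrative choices made here; the paper's precise update rule and analysis are more refined.

```python
import random

def rls_self_adjusting(target, r, max_iters=200000):
    """Randomized Local Search on an r-valued OneMax-type function
    f(x) = -sum(|x_i - target_i|), with a self-adjusting step size:
    double it after a success, halve it (not below 1) after a failure.
    Returns the iteration at which the optimum was found."""
    n = len(target)
    x = [random.randrange(r) for _ in range(n)]
    f = lambda y: -sum(abs(a - b) for a, b in zip(y, target))
    step = 1
    for it in range(max_iters):
        if x == list(target):
            return it
        y = x[:]
        i = random.randrange(n)
        direction = random.choice((-1, 1))
        y[i] = min(r - 1, max(0, y[i] + direction * step))
        if f(y) > f(x):                      # success: accept, grow step
            x, step = y, min(step * 2, r - 1)
        else:                                # failure: shrink step
            step = max(1, step // 2)
    return max_iters
```

Large steps cover long distances quickly while the repeated halving lets the search home in once it is close, which is the intuition behind the \(\Theta(n(\log n + \log r))\) bound.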

Dang, Duc-Cuong; Lehre, Per Kristian; Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Oliveto, Pietro S.; Sudholt, Dirk; Sutton, Andrew M. Emergence of Diversity and its Benefits for Crossover in Genetic Algorithms. Parallel Problem Solving From Nature (PPSN) 2016: 890-900
Population diversity is essential for avoiding premature convergence in Genetic Algorithms (GAs) and for the effective use of crossover. Yet the dynamics of how diversity emerges in populations are not well understood. We use rigorous runtime analysis to gain insight into population dynamics and GA performance for a standard \((\mu+1)\) GA and the \(Jump_k\) test function. By studying the stochastic process underlying the size of the largest collection of identical genotypes we show that the interplay of crossover followed by mutation may serve as a catalyst leading to a sudden burst of diversity. This leads to improvements of the expected optimisation time of order \(\Omega(n/ \log n)\) compared to mutationonly algorithms like the \((1+1)\) EA.

Chauhan, Ankit; Lenzner, Pascal; Melnichenko, Anna; Münn, Martin On Selfish Creation of Robust Networks. Symposium on Algorithmic Game Theory (SAGT) 2016: 141-152
Robustness is one of the key properties of today's networks. However, robustness cannot be simply enforced by design or regulation, since many important networks, most prominently the Internet, are not created and controlled by a central authority. Instead, Internet-like networks emerge from strategic decisions of many selfish agents. Interestingly, although lacking a coordinating authority, such naturally grown networks are surprisingly robust while at the same time having desirable properties like a small diameter. To investigate this phenomenon we present the first simple model for selfish network creation which explicitly incorporates agents striving for a central position in the network while at the same time protecting themselves against random edge failure. We show that networks in our model are diverse, and we prove the versatility of our model by adapting various properties and techniques from the non-robust versions, which we then use for establishing bounds on the Price of Anarchy. Moreover, we analyze the computational hardness of finding best possible strategies and investigate the game dynamics of our model.

Kötzing, Timo; Schirneck, Martin Towards an Atlas of Computational Learning Theory. Symposium on Theoretical Aspects of Computer Science (STACS) 2016: 47:1-47:13
A major part of our knowledge about Computational Learning stems from comparisons of the learning power of different learning criteria. These comparisons inform about trade-offs between learning restrictions and, more generally, learning settings; furthermore, they inform about what restrictions can be observed without losing learning power. With this paper we propose that one main focus of future research in Computational Learning should be on a structured approach to determine the relations of different learning criteria. In particular, we propose that, for small sets of learning criteria, all pairwise relations should be determined; these relations can then be easily depicted as a map, a diagram detailing the relations. Once we have maps for many relevant sets of learning criteria, the collection of these maps is an Atlas of Computational Learning Theory, informing at a glance about the landscape of computational learning just as a geographical atlas informs about the earth. In this paper we work toward this goal by providing three example maps, one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning. These maps can serve as blueprints for future maps of similar base structure.

Friedrich, Tobias; Wagner, Markus Seeding the initial population of multi-objective evolutionary algorithms: A computational study. Applied Soft Computing 2015: 223-230
Most experimental studies initialize the population of evolutionary algorithms with random genotypes. In practice, however, optimizers are typically seeded with good candidate solutions either previously known or created according to some problem-specific method. This seeding has been studied extensively for single-objective problems. For multi-objective problems, however, very little literature is available on the approaches to seeding and their individual benefits and disadvantages. In this article, we aim to narrow this gap via a comprehensive computational study on common real-valued test functions. We investigate the effect of two seeding techniques for five algorithms on 48 optimization problems with 2, 3, 4, 6, and 8 objectives. We observe that some functions (e.g., DTLZ4 and the LZ family) benefit significantly from seeding, while others (e.g., WFG) profit less. The advantage of seeding also depends on the examined algorithm.

Bläsius, Thomas; Rutter, Ignaz Disconnectivity and relative positions in simultaneous embeddings. Computational Geometry 2015: 459-478
For two planar graphs \(G^1 = (V^1, E^1)\) and \(G^2 = (V^2, E^2)\) sharing a common subgraph \(G = G^1 \cap G^2\), the problem Simultaneous Embedding with Fixed Edges (SEFE) asks whether they admit planar drawings such that the common graph is drawn the same. Previous algorithms only work for cases where \(G\) is connected, and hence do not need to handle relative positions of connected components. We consider the problem where \(G\), \(G^1\) and \(G^2\) are not necessarily connected. First, we show that a general instance of SEFE can be reduced in linear time to an equivalent instance where \(V^1 = V^2\) and \(G^1\) and \(G^2\) are connected. Second, for the case where \(G\) consists of disjoint cycles, we introduce the CC-tree, which represents all embeddings of \(G\) that extend to planar embeddings of \(G^1\). We show that CC-trees can be computed in linear time, and that their intersection is again a CC-tree. This yields a linear-time algorithm for SEFE if all \(k\) input graphs (possibly \(k > 2\)) pairwise share the same set of disjoint cycles. These results, including the CC-tree, extend to the case where \(G\) consists of arbitrary connected components, each with a fixed planar embedding on the sphere. Then the running time is \(O(n^2)\).

Friedrich, Tobias; Krohmer, Anton Parameterized clique on inhomogeneous random graphs. Discrete Applied Mathematics 2015: 130-138
Finding cliques in graphs is a classical problem which is in general NP-hard and parameterized intractable. In typical applications like social networks or biological networks, however, the considered graphs are scale-free, i.e., their degree sequence follows a power law. Their specific structure can be algorithmically exploited and makes it possible to solve clique much more efficiently. We prove that on inhomogeneous random graphs with \(n\) nodes and power-law exponent \(\beta\), cliques of size \(k\) can be found in time \(O(n)\) for \(\beta \ge 3\) and in time \(O(ne^{k^4})\) for \(2 < \beta < 3\).

Wagner, Markus; Bringmann, Karl; Friedrich, Tobias; Neumann, Frank Efficient optimization of many objectives by approximation-guided evolution. European Journal of Operational Research 2015: 465-479
Multi-objective optimization problems arise frequently in applications, but can often only be solved approximately by heuristic approaches. Evolutionary algorithms have been widely used to tackle multi-objective problems. These algorithms use different measures to ensure diversity in the objective space but are not guided by a formal notion of approximation. We present a framework for evolutionary multi-objective optimization that allows one to work with a formal notion of approximation. This approximation-guided evolutionary algorithm (AGE) has a worst-case runtime linear in the number of objectives and works with an archive that is an approximation of the non-dominated objective vectors seen during the run of the algorithm. Our experimental results show that AGE finds competitive or better solutions not only regarding the achieved approximation, but also regarding the total hypervolume. For all considered test problems, even for many (i.e., more than ten) dimensions, AGE discovers a good approximation of the Pareto front. This is not the case for established algorithms such as NSGA-II, SPEA2, and SMS-EMOA. In this paper we compare AGE with two additional algorithms that use very fast hypervolume approximations to guide their search. This significantly speeds up the runtime of the hypervolume-based algorithms, which now allows a comparison of the underlying selection schemes.

Friedrich, Tobias; Neumann, Frank; Thyssen, Christian Multiplicative Approximations, Optimal Hypervolume Distributions, and the Choice of the Reference Point. Evolutionary Computation 2015: 131-159
Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems, as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing \(\mu\) points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations.

Chicano, Francisco; Sutton, Andrew M.; Whitley, L. Darrell; Alba, Enrique Fitness Probability Distribution of Bit-Flip Mutation. Evolutionary Computation 2015: 217-248
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in \(p\), the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem, OneMax, and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
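For the OneMax case, the distribution can also be computed directly: starting from a string with \(j\) ones, the numbers of ones flipped to zero and zeros flipped to one are independent binomials, so every probability is visibly a polynomial in \(p\), matching the paper's general claim. This enumeration (names and interface chosen here) is a numerical cross-check, not the Krawtchouk-polynomial derivation itself.

```python
from math import comb

def onemax_mutation_distribution(n, j, p):
    """Exact distribution of the OneMax value after uniform bit-flip
    mutation (each bit flips independently with probability p), starting
    from a string of length n with j ones.
    Returns a dict {fitness: probability}."""
    dist = {}
    for a in range(j + 1):           # a ones flipped to zero
        pa = comb(j, a) * p**a * (1 - p)**(j - a)
        for b in range(n - j + 1):   # b zeros flipped to one
            pb = comb(n - j, b) * p**b * (1 - p)**(n - j - b)
            k = j - a + b            # resulting number of ones
            dist[k] = dist.get(k, 0.0) + pa * pb
    return dist
```

A quick sanity check: the probabilities sum to 1 and the mean equals \(j(1-p) + (n-j)p\).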

Friedrich, Tobias; Neumann, Frank Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms. Evolutionary Computation 2015: 543-558
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function \(f\) under a given set of constraints. In this paper, we investigate the runtime of a simple single-objective evolutionary algorithm called (1+1) EA and a multi-objective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints we show that the GSEMO achieves a \((1-1/e)\)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of \(k \ge 2\) matroids, we show that the (1+1) EA achieves a \((1 + k/\delta)\)-approximation in expected polynomial time for any constant \(\delta > 0\). Turning to non-monotone symmetric submodular functions with \(k \ge 1\) matroid intersection constraints, we show that the GSEMO achieves a \((1/((k+2)(1+\epsilon)))\)-approximation in expected time \(O(n^{k+6}\log(n)/\epsilon)\).

Doerr, Benjamin; Doerr, Carola; Kötzing, Timo Unbiased Black-Box Complexities of Jump Functions. Evolutionary Computation 2015: 641-670
We analyze the unbiased black-box complexity of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution. Among other results, we show that when the jump size is \((1/2 - \epsilon)n\), that is, only a small constant fraction of the fitness values is visible, then the unbiased black-box complexities for arities 3 and higher are of the same order as those for the simple OneMax function. Even for the extreme jump function, in which all but the two fitness values \(n/2\) and \(n\) are blanked out, polynomial-time mutation-based (i.e., unary unbiased) black-box optimization algorithms exist. This is quite surprising given that for the extreme jump function almost the whole search space (all but a \(\Theta(n^{-1/2})\) fraction) is a plateau of constant fitness. To prove these results, we introduce new tools for the analysis of unbiased black-box complexities, for example, selecting the new parent individual not only by comparing the fitnesses of the competing search points but also by taking into account the (empirical) expected fitnesses of their offspring.

Berenbrink, Petra; Cooper, Colin; Friedetzky, Tom; Friedrich, Tobias; Sauerwald, Thomas Randomized diffusion for indivisible loads. Journal of Computer and System Sciences 2015: 159-185
We present a new randomized diffusion-based algorithm for balancing indivisible tasks (tokens) on a network. Our aim is to minimize the discrepancy between the maximum and minimum load. The algorithm works as follows. Every vertex distributes its tokens as evenly as possible among its neighbors and itself. If this is not possible without splitting some tokens, the vertex redistributes its excess tokens among all its neighbors randomly (without replacement). In this paper we prove several upper bounds on the load discrepancy for general networks. These bounds depend on some expansion properties of the network, that is, the second largest eigenvalue, and a novel measure which we refer to as refined local divergence. We then apply these general bounds to obtain results for some specific networks. For constant-degree expanders and torus graphs, these yield exponential improvements on the discrepancy bounds compared to the algorithm of Rabani, Sinclair, and Wanka. For hypercubes we obtain a polynomial improvement. In contrast to previous papers, our algorithm is vertex-based and not edge-based. This means excess tokens are assigned to vertices instead of to edges, and the vertex reallocates all of its excess tokens by itself. This approach avoids nodes having "negative loads", but causes additional dependencies for the analysis.
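The round described above ("spread as evenly as possible, then hand out the excess randomly without replacement") can be sketched as follows. The dict-based graph representation and the function name are choices made here, and tie-breaking details may differ from the paper's formal model.

```python
import random

def diffusion_round(graph, load):
    """One round of vertex-based randomized diffusion: every vertex
    splits its tokens as evenly as possible among its neighbours and
    itself, then hands each excess token to a distinct neighbour chosen
    uniformly at random (without replacement)."""
    new_load = {v: 0 for v in graph}
    for v, tokens in load.items():
        targets = graph[v] + [v]
        share, excess = divmod(tokens, len(targets))
        for u in targets:
            new_load[u] += share
        # excess < deg(v) + 1, so sampling without replacement is valid
        for u in random.sample(graph[v], excess):
            new_load[u] += 1
    return new_load
```

Tokens are conserved in every round, and on well-connected graphs the discrepancy between maximum and minimum load shrinks quickly to a small constant, which is the behaviour the paper quantifies.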

Friedrich, Tobias; Hercher, Christian On the kernel size of clique cover reductions for random intersection graphs. Journal of Discrete Algorithms 2015: 128-136
Covering all edges of a graph by a minimum number of cliques is a well-known NP-hard problem. For the parameter \(k\) being the maximal number of cliques to be used, the problem becomes fixed-parameter tractable. However, assuming the Exponential Time Hypothesis, there is no kernel of subexponential size in the worst case. We study the average kernel size for random intersection graphs with \(n\) vertices, edge probability \(p\), and clique covers of size \(k\). We consider the well-known set of reduction rules of Gramm, Guo, Hüffner, and Niedermeier (2009) and show that with high probability they reduce the graph completely if \(p\) is bounded away from 1 and \(k < c \log n\) for some constant \(c > 0\). This shows that for large probabilistic graph classes like random intersection graphs the expected kernel size can be substantially smaller than the known exponential worst-case bounds.

Paixão, Tiago; Badkobeh, Golnaz; Barton, Nick H.; Çörüş, Doğan; Dang, Duc-Cuong; Friedrich, Tobias; Lehre, Per Kristian; Sudholt, Dirk; Sutton, Andrew; Trubenová, Barbora Toward a unifying framework for evolutionary processes. Journal of Theoretical Biology 2015: 28-43
The theory of population genetics and evolutionary computation have been evolving separately for nearly 30 years. Many results have been independently obtained in both fields, and many others are unique to their respective fields. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its several components in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify candidates with the most potential for the translation of results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results for both fields.

Göbel, Andreas; Goldberg, Leslie Ann; McQuillan, Colin; Richerby, David; Yamakami, Tomoyuki Counting List Matrix Partitions of Graphs. SIAM Journal on Computing 2015: 1089-1118
Given a symmetric \(D\times D\) matrix \(M\) over \(\{0, 1, *\}\), a list \(M\)-partition of a graph \(G\) is a partition of the vertices of \(G\) into \(D\) parts which are associated with the rows of \(M\). The part of each vertex is chosen from a given list in such a way that no edge of \(G\) is mapped to a 0 in \(M\) and no non-edge of \(G\) is mapped to a 1 in \(M\). Many important graph-theoretic structures can be represented as list \(M\)-partitions, including graph colorings, split graphs, and homogeneous sets and pairs, which arise in the proofs of the weak and strong perfect graph conjectures. Thus, there has been quite a bit of work on determining for which matrices \(M\) computations involving list \(M\)-partitions are tractable. This paper focuses on the problem of counting list \(M\)-partitions, given a graph \(G\) and given a list for each vertex of \(G\). We identify a certain set of "tractable" matrices \(M\). We give an algorithm that counts list \(M\)-partitions in polynomial time for every (fixed) matrix \(M\) in this set. The algorithm relies on data structures such as sparse-dense partitions and subcube decompositions to reduce each problem instance to a sequence of problem instances in which the lists have a certain useful structure that restricts access to portions of \(M\) in which the interactions of 0s and 1s are controlled. We show how to solve the resulting restricted instances by converting them into particular counting constraint satisfaction problems (\({\#CSP}\)s), which we show how to solve using a constraint satisfaction technique known as arc-consistency. For every matrix \(M\) for which our algorithm fails, we show that the problem of counting list \(M\)-partitions is \({\#P}\)-complete. Furthermore, we give an explicit characterization of the dichotomy theorem: counting list \(M\)-partitions is tractable (in \({FP}\)) if the matrix \(M\) does not have a structure called a derectangularizing sequence. If \(M\) has a derectangularizing sequence, we show that counting list \(M\)-partitions is \({\#P}\)-hard. We show that the meta-problem of determining whether a given matrix has a derectangularizing sequence is \({NP}\)-complete. Finally, we show that list \(M\)-partitions can be used to encode cardinality restrictions in \(M\)-partitions problems, and we use this to give a polynomial-time algorithm for counting homogeneous pairs in graphs.
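A brute-force counter makes the definition concrete: assign each vertex a part from its list and check every vertex pair against \(M\). This is exponential in the number of vertices (the paper's subject is precisely when polynomial-time counting is possible instead); the names and the `0`/`1`/`'*'` encoding are choices made here.

```python
from itertools import combinations, product

def count_list_M_partitions(M, edges, lists):
    """Count list M-partitions by exhaustive enumeration: every edge
    must map to a 1 or * entry of M, every non-edge to a 0 or * entry,
    and each vertex may only use the parts on its list."""
    vertices = sorted(lists)
    edge_set = {frozenset(e) for e in edges}

    def valid(part):
        for u, w in combinations(vertices, 2):
            entry = M[part[u]][part[w]]
            if frozenset((u, w)) in edge_set:
                if entry == 0:       # edges may not map to 0
                    return False
            elif entry == 1:         # non-edges may not map to 1
                return False
        return True

    return sum(
        valid(dict(zip(vertices, choice)))
        for choice in product(*(lists[v] for v in vertices))
    )
```

With `M = [[0, "*"], ["*", 0]]` and full lists, valid partitions are exactly the proper 2-colorings, since edges may not map to the 0 diagonal.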

Friedrich, Tobias; He, Jun; Jansen, Thomas; Moraglio, Alberto Genetic and Evolutionary Computation. Theoretical Computer Science 2015: 1-2
Evolutionary computation and other nature-inspired search heuristics have been known and applied on a daily basis for 50 years. They remain an important tool in situations where difficult problems need to be solved and no good problem-specific solution is available. Thanks to continuous efforts directed at a theoretical foundation of this broad and complex set of heuristics, we have a much improved understanding of their properties, strengths and limitations. This special issue contains a collection of theoretical analyses of quite different nature-inspired heuristics, all presented in preliminary form either at the field’s largest conference, the Genetic and Evolutionary Computation Conference (GECCO 2013), or at a small and specialized workshop on Theory of Randomized Search Heuristics (ThRaSH 2013) that has, since 2007, provided an opportunity for theoreticians in the field to meet and discuss their work. All articles presented here have been reworked and significantly expanded beyond their initial presentation, and they all witness that theory is now developed well beyond the understanding of very simple evolutionary algorithms on simple example functions.

Fountoulakis, Nikolaos; Friedrich, Tobias; Hermelin, Danny On the average-case complexity of parameterized clique. Theoretical Computer Science 2015: 18-29
The \(k\)-Clique problem is a fundamental combinatorial problem that plays a prominent role in classical as well as in parameterized complexity theory. It is among the most well-known NP-complete and W[1]-complete problems. Moreover, its average-case complexity analysis has created a long thread of research already since the 1970s. Here, we continue this line of research by studying the dependence of the average-case complexity of the \(k\)-Clique problem on the parameter \(k\). To this end, we define two natural parameterized analogues of efficient average-case algorithms. We then show that \(k\)-Clique admits both analogues for Erdős-Rényi random graphs of arbitrary density. We also show that \(k\)-Clique is unlikely to admit either of these analogues for some specific computable input distribution.

Nguyen, Anh Quang; Sutton, Andrew M.; Neumann, Frank Population size matters: Rigorous runtime results for maximizing the hypervolume indicator. Theoretical Computer Science 2015: 24-36
Using the hypervolume indicator to guide the search of evolutionary multi-objective algorithms has become very popular in recent years. We contribute to the theoretical understanding of these algorithms by carrying out rigorous runtime analyses. We consider multi-objective variants of the problems OneMax and LeadingOnes called OMM and LOTZ, respectively, and investigate hypervolume-based algorithms with population sizes that do not allow coverage of the entire Pareto front. Our results show that LOTZ is easier to optimize than OMM for hypervolume-based evolutionary multi-objective algorithms, which is contrary to the results on their single-objective variants and the well-studied (1+1) EA.

Freydenberger, Dominik D.; Kötzing, Timo Fast Learning of Restricted Regular Expressions and DTDs. Theory of Computing Systems 2015: 1114-1158
We study the problem of generalizing from a finite sample to a language taken from a predefined language class. The two language classes we consider are subsets of the regular languages and have significance in the specification of XML documents (the classes corresponding to so-called chain regular expressions, Chares, and to single occurrence regular expressions, Sores). The previous literature gave a number of algorithms for generalizing to Sores, providing a trade-off between quality of the solution and speed. Furthermore, a fast but non-optimal algorithm for generalizing to Chares is known. For each of the two language classes we give an efficient algorithm returning a minimal generalization from the given finite sample to an element of the fixed language class; such generalizations are called descriptive. In this sense, both our algorithms are optimal.

Chauhan, Ankit; Rao, B. V. Raghavendra Parameterized Analogues of Probabilistic Computation. Conference on Algorithms and Discrete Applied Mathematics (CALDAM) 2015: 181-192
We study structural aspects of randomized parameterized computation. We introduce a new class W[P]-PFPT as a natural parameterized analogue of PP. Our definition uses the machine-based characterization of the parameterized complexity class W[P] obtained by Chen et al. [TCS 2005]. We translate most of the structural properties and characterizations of the class PP to the new class W[P]-PFPT. We study a parameterization of the polynomial identity testing problem based on the degree of the polynomial computed by the arithmetic circuit. We obtain a parameterized analogue of the well-known Schwartz-Zippel lemma [Schwartz, JACM 80 and Zippel, EUROSAM 79]. Additionally, we introduce a parameterized variant of the permanent and prove its #W[1]-completeness.

Bringmann, Karl; Friedrich, Tobias; Klitzke, Patrick Efficient computation of two-dimensional solution sets maximizing the epsilon-indicator. Congress on Evolutionary Computation (CEC) 2015: 970-977
The majority of empirical comparisons of multi-objective evolutionary algorithms (MOEAs) are performed on synthetic benchmark functions. One of the advantages of synthetic test functions is the a priori knowledge of the optimal Pareto front. This allows measuring the proximity to the optimal front for the solution sets returned by the different MOEAs. Such a comparison is only meaningful if the cardinality of all solution sets is bounded by some fixed \(k\). In order to compare MOEAs to the theoretical optimum achievable with \(k\) solutions, we determine the best possible \(\epsilon\)-indicator values achievable with solution sets of size \(k\), up to an error of \(\delta\). We present a new algorithm with runtime \(O(k \cdot \log^2(\delta^{-1}))\), which is an exponential improvement regarding the dependence on the error \(\delta\) compared to all previous work. We show mathematical correctness of our algorithm and determine optimal solution sets of cardinality \(k \in \{2, 3, 4, 5, 10, 20, 50, 100, 1000\}\) for the well-known test suites DTLZ, ZDT, WFG and LZ09, up to error \(\delta = 10^{-25}\).

Bläsius, Thomas; Lehmann, Sebastian; Rutter, Ignaz Orthogonal Graph Drawing with Inflexible Edges. Conference on Algorithms and Complexity (CIAC) 2015: 61-73