
Tobias Friedrich, Frank Neumann: What's Hot in Evolutionary Computation. Association for the Advancement of Artificial Intelligence (AAAI) 2017: 5064–5066
We provide a brief overview of some hot topics in the area of evolutionary computation. Our main focus is on recent developments in the areas of combinatorial optimization and real-world applications. Furthermore, we highlight recent progress on the theoretical understanding of evolutionary computing methods.

Benjamin Doerr, Carola Doerr, Timo Kötzing: Unknown Solution Length Problems With No Asymptotically Optimal Run Time. Genetic and Evolutionary Computation Conference (GECCO) 2017: 1367–1374
We revisit the problem of optimizing a fitness function of unknown dimension; that is, we face a function defined over bit strings of large length \(N\), but only \(n \ll N\) of them have an influence on the fitness. Neither the position of these relevant bits nor their number is known. In previous work, variants of the \((1+1)\) evolutionary algorithm (EA) have been developed that solve, for arbitrary \(s \in \mathbb{N}\), such OneMax and LeadingOnes instances, simultaneously for all \(n \in \mathbb{N}\), in expected time \(O(n (\log n)^2 \log\log n \cdots \log^{(s-1)}(n) (\log^{(s)}(n))^{1+\epsilon})\) and \(O(n^2 \log n \log\log n \cdots \log^{(s-1)}(n) (\log^{(s)}(n))^{1+\epsilon})\), respectively; that is, in almost the same time as if \(n\) and the relevant bit positions were known. In this work, we prove the first, almost matching, lower bounds for this setting. For LeadingOnes, we show that, for every \(s \in \mathbb{N}\), the \((1+1)\) EA with any mutation operator treating zeros and ones equally has an expected run time of \(\omega(n^2 \log n \log\log n \cdots \log^{(s)}(n))\) when facing problem size \(n\). Aiming at closing the small remaining gap, we realize that, quite surprisingly, there is no asymptotically best performance: for any algorithm solving, for all \(n\), all instances of size \(n\) in expected time at most \(T(n)\), there is an algorithm doing the same in time \(T'(n)\) with \(T' = o(T)\). For OneMax we show results of similar flavor.

Robert Kovacs, Anna Seufert, Ludwig Wall, Hsiang-Ting Chen, Florian Meinel, Willi Müller, Sijing You, Maximilian Brehm, Jonathan Striebel, Yannis Kommana, Alexander Popiak, Thomas Bläsius, Patrick Baudisch: TrussFab: Fabricating Sturdy Large-Scale Structures on Desktop 3D Printers. Human Factors in Computing Systems (CHI) 2017: 2606–2616
We present TrussFab, an integrated end-to-end system that allows users to fabricate large-scale structures that are sturdy enough to carry human weight. TrussFab achieves the large scale by complementing 3D printing with plastic bottles. It does not use these bottles as "bricks", though, but as beams that form structurally sound node-link structures, also known as trusses, allowing it to handle the forces resulting from scale and load. TrussFab embodies the required engineering knowledge, allowing non-engineers to design such structures and to validate their designs using integrated structural analysis. We have used TrussFab to design and fabricate tables and chairs, a 2.5 m long bridge strong enough to carry a human, a functional boat that seats two, and a 5 m diameter dome.

Benjamin Doerr, Frank Neumann, Andrew M. Sutton: Time Complexity Analysis of Evolutionary Algorithms on Random Satisfiable k-CNF Formulas. Algorithmica 2017: 561–586
We contribute to the theoretical understanding of randomized search heuristics by investigating their optimization behavior on satisfiable random k-satisfiability instances, both in the planted solution model and in the uniform model conditional on satisfiability. Denoting the number of variables by \(n\), our main technical result is that the simple (1+1) evolutionary algorithm with high probability finds a satisfying assignment in time \(O(n \log n)\) when the clause-variable density is at least logarithmic. For low-density instances, evolutionary algorithms seem to be less effective, and all we can show is a subexponential upper bound on the runtime for densities below \(1/(k(k-1))\). We complement these mathematical results with numerical experiments on a broader density spectrum. They indicate that, indeed, the (1+1) EA is less efficient on lower densities. Our experiments also suggest that the implicit constants hidden in our main runtime guarantee are low. Our main result extends and considerably improves the result obtained by Sutton and Neumann (Lect Notes Comput Sci 8672:942–951, 2014) in terms of runtime, minimum density, and clause length. These improvements are made possible by establishing a close fitness-distance correlation in certain parts of the search space. This approach might be of independent interest and could be useful for other average-case analyses of randomized search heuristics. While the notion of a fitness-distance correlation has been around for a long time, to the best of our knowledge, this is the first time that fitness-distance correlation is explicitly used to rigorously prove a performance statement for an evolutionary algorithm.
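To make the algorithm under analysis concrete, the following is a minimal sketch of a (1+1) EA maximizing the number of satisfied clauses of a CNF formula. The clause encoding, parameter choices, and iteration budget are illustrative and not taken from the paper:

```python
import random

def count_satisfied(clauses, assignment):
    # A clause is a list of literals; literal v > 0 means x_v, v < 0 means NOT x_v.
    return sum(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

def one_plus_one_ea(clauses, n, max_iters=100000, seed=0):
    """(1+1) EA with standard bit mutation and elitist selection."""
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(n)]  # random initial assignment
    fx = count_satisfied(clauses, x)
    for _ in range(max_iters):
        if fx == len(clauses):  # satisfying assignment found
            break
        # Flip each bit independently with probability 1/n.
        y = [b != (rng.random() < 1.0 / n) for b in x]
        fy = count_satisfied(clauses, y)
        if fy >= fx:  # accept offspring if it is not worse
            x, fx = y, fy
    return x, fx
```

The `fy >= fx` acceptance rule is the classical elitist choice; the paper's density and runtime statements concern exactly this kind of mutation-plus-selection loop.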

Tobias Friedrich, Timo Kötzing, Martin S. Krejca, Andrew M. Sutton: The Compact Genetic Algorithm is Efficient under Extreme Gaussian Noise. IEEE Transactions on Evolutionary Computation 2017: 477–490
Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, randomized search heuristics such as evolutionary algorithms are a popular choice because they are often assumed to exhibit some kind of resistance to noise. Empirical evidence suggests that some algorithms, such as estimation of distribution algorithms (EDAs), are robust against a scaling of the noise intensity, even without resorting to explicit noise-handling techniques such as resampling. In this paper, we want to support such claims with mathematical rigor. We introduce the concept of graceful scaling, in which the run time of an algorithm scales polynomially with the noise intensity. We study a monotone fitness function over binary strings with additive noise taken from a Gaussian distribution. We show that myopic heuristics cannot efficiently optimize the function under arbitrarily intense noise without any explicit noise handling. Furthermore, we prove that using a population does not help. Finally, we show that a simple EDA called the compact Genetic Algorithm can overcome the short-sightedness of mutation-only heuristics to scale gracefully with noise. We conjecture that recombinative genetic algorithms also have this property.
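For intuition, here is a sketch of the compact Genetic Algorithm on OneMax with additive Gaussian noise. The parameter names (`K` for the hypothetical population size, `noise_sigma` for the noise intensity) and the stopping rule are illustrative, not the exact setup analyzed in the paper:

```python
import random

def cga_onemax(n, K, noise_sigma=0.0, max_iters=200000, seed=0):
    """Compact GA: keeps one frequency per bit instead of a population.

    Frequencies move in steps of 1/K toward the bit values of the better
    of two sampled offspring; the fitness is OneMax plus Gaussian noise.
    """
    rng = random.Random(seed)
    p = [0.5] * n  # marginal probability of a 1 at each position

    def sample():
        return [rng.random() < pi for pi in p]

    def noisy_fitness(x):
        return sum(x) + rng.gauss(0.0, noise_sigma)

    for _ in range(max_iters):
        x, y = sample(), sample()
        if noisy_fitness(x) < noisy_fitness(y):
            x, y = y, x  # x is now the (apparently) better sample
        for i in range(n):
            if x[i] != y[i]:  # shift frequency toward the winner's bit
                step = 1.0 / K if x[i] else -1.0 / K
                p[i] = min(1.0, max(0.0, p[i] + step))
        if all(pi in (0.0, 1.0) for pi in p):
            break  # all frequencies fixed
    return p
```

The point of the paper is that this frequency-based update aggregates noisy comparisons over many iterations, which is why the cGA can scale gracefully with the noise intensity while myopic mutation-only heuristics cannot.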

Maximilian Katzmann, Christian Komusiewicz: Systematic Exploration of Larger Local Search Neighborhoods for the Minimum Vertex Cover Problem. Association for the Advancement of Artificial Intelligence (AAAI) 2017: 846–852
We investigate the potential of exhaustively exploring larger neighborhoods in local search algorithms for Minimum Vertex Cover. More precisely, we study whether, for moderate values of \(k\), it is feasible and worthwhile to determine, given a graph \(G\) with vertex cover \(C\), if there is a \(k\)-swap \(S\) such that \((C \setminus S) \cup (S \setminus C)\) is a smaller vertex cover of \(G\). First, we describe an algorithm running in \(\Delta^{O(k)} \cdot n\) time for searching the \(k\)-swap neighborhood on \(n\)-vertex graphs with maximum degree \(\Delta\). Then, we demonstrate that, by devising additional pruning rules that decrease the size of the search space, this algorithm can be implemented so that it solves the problem quickly for \(k \approx 20\). Finally, we show that it is worthwhile to consider moderately sized \(k\)-swap neighborhoods. For our benchmark data set, we show that when combining our algorithm with a hill-climbing approach, the solution quality improves quickly with the radius \(k\) of the local search neighborhood and that in most cases optimal solutions can be found by setting \(k = 21\).
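The \(k\)-swap neighborhood itself is easy to state. The following naive sketch enumerates all swaps of size at most \(k\) by brute force; the paper's algorithm instead exploits the graph structure and pruning rules to reach \(k \approx 20\), whereas this version is exponential and only illustrates the definition:

```python
from itertools import combinations

def is_vertex_cover(edges, cover):
    # Every edge must have at least one endpoint in the cover.
    return all(u in cover or v in cover for u, v in edges)

def improve_by_k_swap(edges, vertices, cover, k):
    """Naive search of the k-swap neighborhood of a vertex cover.

    Tries every vertex set S with |S| <= k and checks whether the symmetric
    difference (C \\ S) u (S \\ C) is a *smaller* vertex cover.
    """
    for size in range(1, k + 1):
        for s in combinations(vertices, size):
            candidate = cover.symmetric_difference(s)
            if len(candidate) < len(cover) and is_vertex_cover(edges, candidate):
                return candidate
    return None  # cover is locally optimal w.r.t. k-swaps
```

For example, on the path 1–2–3–4 with cover `{1, 3, 4}`, a 1-swap already yields the smaller cover `{1, 3}`.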

Ankit Chauhan, Pascal Lenzner, Anna Melnichenko, Louise Molitor: Selfish Network Creation with Non-Uniform Edge Cost. Symposium on Algorithmic Game Theory (SAGT) 2017

Wanru Gao, Tobias Friedrich, Timo Kötzing, Frank Neumann: Scaling up Local Search for Minimum Vertex Cover in Large Graphs by Parallel Kernelization. Australasian Conference on Artificial Intelligence (AUSAI) 2017: 131–143
We investigate how well-performing local search algorithms for small or medium-size instances can be scaled up to perform well for large inputs. We introduce a parallel kernelization technique that is motivated by the assumption that medium- to large-scale graphs are composed of components which are on their own easy for state-of-the-art solvers, but which are hard to solve when hidden in large graphs. To show the effectiveness of our kernelization technique, we consider the well-known minimum vertex cover problem and two state-of-the-art solvers called NuMVC and FastVC. Our kernelization approach reduces existing large problem instances significantly and produces better-quality results on a wide range of benchmark instances and real-world graphs.

Tobias Friedrich, Timo Kötzing, Francesco Quinzan, Andrew M. Sutton: Resampling vs. Recombination: A Statistical Run Time Estimation. Foundations of Genetic Algorithms (FOGA) 2017: 25–35
Noise is pervasive in real-world optimization, but there is still little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques, such as statistical resampling. In this paper, we report on several statistical models and theoretical results that help to clarify this reciprocal relationship for a collection of randomized search heuristics on noisy functions. We consider the optimization of pseudo-Boolean functions under additive posterior Gaussian noise and explore the trade-off between noise reduction and the computational cost of resampling. We first perform experiments to find the optimal parameters at a given noise intensity for a mutation-only evolutionary algorithm, a genetic algorithm employing recombination, an estimation of distribution algorithm (EDA), and an ant colony optimization algorithm. We then observe how the optimal parameter depends on the noise intensity for the different algorithms. Finally, we locate the point where statistical resampling costs more than it is worth in terms of run time. We find that the EA requires the highest number of resamples to obtain the best speed-up, whereas crossover reduces both the run time and the number of resamples required. Most surprisingly, we find that EDA-like algorithms require no resampling, and can handle noise implicitly.

Feng Shi, Martin Schirneck, Tobias Friedrich, Timo Kötzing, Frank Neumann: Reoptimization Times of Evolutionary Algorithms on Linear Functions Under Dynamic Uniform Constraints. Genetic and Evolutionary Computation Conference (GECCO) 2017: 1407–1414
The investigation of linear pseudo-Boolean functions plays a central role in the area of runtime analysis of evolutionary computing techniques. Optimizing a linear function under an additional linear constraint is equivalent to the NP-hard knapsack problem, and special problem classes thereof have been investigated in recent works. In this paper, we extend these studies to problems with dynamic constraints and investigate the runtime of different evolutionary algorithms to recompute an optimal solution when the constraint bound changes by a certain amount. We study the classical \((1+1)\) EA and population-based algorithms and show that they recompute an optimal solution very efficiently. Furthermore, we show that a variant of the \((1+(\lambda,\lambda))\) GA can recompute the optimal solution more efficiently in some cases.

Tobias Friedrich, Anton Krohmer, Ralf Rothenberger, Andrew M. Sutton: Phase Transitions for Scale-Free SAT Formulas. Association for the Advancement of Artificial Intelligence (AAAI) 2017: 3893–3899
Recently, a number of non-uniform random satisfiability models have been proposed that are closer to practical satisfiability problems in some characteristics. In contrast to uniform random Boolean formulas, scale-free formulas have a variable occurrence distribution that follows a power law. It has been conjectured that such a distribution is a more accurate model for some industrial instances than the uniform random model. Though there seems to be an awareness of a threshold phenomenon in such models, a complete picture is still lacking. In contrast to the uniform model, the critical density threshold does not lie at a single point, but instead exhibits a functional dependency on the power-law exponent. For scale-free formulas with clauses of length \(k = 2\), we give a lower bound on the phase transition threshold as a function of the scaling parameter. We also perform computational studies that suggest our bound is tight and investigate the critical density for formulas with higher clause lengths. Similar to the uniform model, on formulas with \(k \geq 3\), we find that the phase transition regime corresponds to a set of formulas that are difficult to solve by backtracking search.

Mojgan Pourhassan, Tobias Friedrich, Frank Neumann: On the Use of the Dual Formulation for Minimum Weighted Vertex Cover in Evolutionary Algorithms. Foundations of Genetic Algorithms (FOGA) 2017: 37–44
We consider the weighted minimum vertex cover problem and investigate how its dual formulation can be exploited to design evolutionary algorithms that provably obtain a 2-approximation. Investigating multi-valued representations, we show that variants of randomized local search and the (1+1) EA achieve this goal in expected pseudo-polynomial time. In order to speed up the process, we consider the use of step size adaptation in both algorithms and show that RLS obtains a 2-approximation in expected polynomial time, while the (1+1) EA still encounters a pseudo-polynomial lower bound.

Evangelos Bampas, Andreas-Nikolas Göbel, Aris Pagourtzis, Aris Tentes: On the connection between interval size functions and path counting. Computational Complexity 2017: 421–467

Timo Kötzing, Martin Schirneck, Karen Seidel: Normal Forms in Semantic Language Identification. Algorithmic Learning Theory (ALT) 2017
We consider language learning in the limit from text where all learning restrictions are semantic, that is, where any conjecture may be replaced by a semantically equivalent conjecture. For different such learning criteria, starting with the well-known TxtGBc-learning, we consider three different normal forms: strongly locking learning, consistent learning, and (partially) set-driven learning. These normal forms support and simplify proofs and give insight into which behaviors are necessary for successful learning (for example, when consistency in conservative learning implies cautiousness and strong decisiveness). We show that strongly locking learning can be assumed for partially set-driven learners, even when learning restrictions apply. We give a very general proof relying only on a natural property of the learning restriction, namely, allowing for simulation on equivalent text. Furthermore, when no restrictions apply, the converse is also true: every strongly locking learner can be made partially set-driven. For several semantic learning criteria we show that learning can be done consistently. Finally, we deduce for which learning restrictions partial set-drivenness and set-drivenness coincide, including a general statement about classes of infinite languages. The latter again relies on a simulation argument.

S. Anand, Karl Bringmann, Tobias Friedrich, Naveen Garg, Amit Kumar: Minimizing Maximum (Weighted) Flow-Time on Related and Unrelated Machines. Algorithmica 2017: 515–536
In this paper we initiate the study of job scheduling on related and unrelated machines so as to minimize the maximum flow time or the maximum weighted flow time (when each job has an associated weight). Previous work for these metrics considered only the setting of parallel machines, while previous work for scheduling on unrelated machines only considered \(L_p\), \(p < \infty\), norms. Our main results are: (1) We give an \(O(\epsilon^{-3})\)-competitive algorithm to minimize maximum weighted flow time on related machines, where we assume that the machines of the online algorithm can process \(1+\epsilon\) units of a job in one time unit (\(\epsilon\) speed augmentation). (2) For the objective of minimizing maximum flow time on unrelated machines, we give a simple \(2/\epsilon\)-competitive algorithm when we augment the speed by \(\epsilon\). For \(m\) machines we show a lower bound of \(\Omega(m)\) on the competitive ratio if speed augmentation is not permitted. Our algorithm does not assign jobs to machines as soon as they arrive. To justify this "drawback" we show a lower bound of \(\Omega(\log m)\) on the competitive ratio of immediate dispatch algorithms. In both these lower bound constructions we use jobs whose processing times are in \(\{1, \infty\}\), and hence they apply to the more restrictive subset parallel setting. (3) For the objective of minimizing maximum weighted flow time on unrelated machines, we establish a lower bound of \(\Omega(\log m)\) on the competitive ratio of any online algorithm which is permitted to use \(s = O(1)\)-speed machines. In our lower bound construction, job \(j\) has a processing time of \(p_j\) on a subset of machines and infinity on others, and has a weight \(1/p_j\). Hence this lower bound applies to the subset parallel setting for the special case of minimizing maximum stretch.

Martin S. Krejca, Carsten Witt: Lower Bounds on the Run Time of the Univariate Marginal Distribution Algorithm on OneMax. Foundations of Genetic Algorithms (FOGA) 2017: 65–79
The Univariate Marginal Distribution Algorithm (UMDA), a popular estimation of distribution algorithm, is studied from a run time perspective. On the classical OneMax benchmark function, a lower bound of \(\Omega(\mu\sqrt{n} + n \log n)\), where \(\mu\) is the population size, on its expected run time is proved. This is the first direct lower bound on the run time of the UMDA. It is stronger than the bounds that follow from general black-box complexity theory and is matched by the run time of many evolutionary algorithms. The results are obtained through advanced analyses of the stochastic change of the frequencies of bit values maintained by the algorithm, including carefully designed potential functions. These techniques may prove useful in advancing the field of run time analysis for estimation of distribution algorithms in general.

Benjamin Doerr, Philipp Fischbeck, Clemens Frahnow, Tobias Friedrich, Timo Kötzing, Martin Schirneck: Island Models Meet Rumor Spreading. Genetic and Evolutionary Computation Conference (GECCO) 2017: 1359–1366
Island models in evolutionary computation solve problems by a careful interplay of independently running evolutionary algorithms on the islands and an exchange of good solutions between the islands. In this work, we conduct rigorous run time analyses for such island models, trying to simultaneously obtain good run times and low communication effort. We improve the existing upper bounds for the communication effort (i) by improving the run time bounds via a careful analysis, (ii) by setting the balance between individual computation and communication in a more appropriate manner, and (iii) by replacing the usual communicate-with-all-neighbors approach with randomized rumor spreading, where each island contacts a randomly chosen neighbor. This epidemic communication paradigm is known to lead to very fast and robust information dissemination in many applications. Our results concern islands running simple (1+1) evolutionary algorithms; we regard \(d\)-dimensional tori and complete graphs as communication topologies and optimize the classic test functions OneMax and LeadingOnes.
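The epidemic communication paradigm mentioned above can be illustrated with the classical push protocol on the complete graph, here simulated in isolation (the evolutionary part of the island model is omitted; function and parameter names are illustrative):

```python
import random

def push_rumor_rounds(n, seed=0):
    """Push protocol on the complete graph.

    In each round, every informed node contacts one node chosen uniformly
    at random and informs it. Returns the number of rounds until all n
    nodes are informed; this is known to be Theta(log n) w.h.p.
    """
    rng = random.Random(seed)
    informed = {0}  # node 0 starts with the rumor (e.g., a good solution)
    rounds = 0
    while len(informed) < n:
        rounds += 1
        # snapshot the current informed set, then add one random contact each
        contacts = [rng.randrange(n) for _ in range(len(informed))]
        informed.update(contacts)
    return rounds
```

Since the informed set can at most double per round, at least \(\log_2 n\) rounds are necessary; the simulation typically finishes within a small constant factor of that, which is the efficiency the island model exploits.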

Markus Wagner, Tobias Friedrich, Marius Lindauer: Improving Local Search in a Minimum Vertex Cover Solver for Classes of Networks. Congress on Evolutionary Computation (CEC) 2017
For the minimum vertex cover problem, a wide range of solvers has been proposed over the years. Most classical exact approaches encounter run time issues on the massive graphs that are considered nowadays. A straightforward alternative is then to use heuristics, which make assumptions about the structure of the studied graphs. These assumptions are typically hard-coded and are hoped to work well for a wide range of networks, which is in conflict with the nature of broad benchmark sets. With this article, we contribute in two ways. First, we identify a component in an existing solver that influences its performance depending on the class of graphs, and we then customize instances of this solver for different classes of graphs. Second, we create the first algorithm portfolio for the minimum vertex cover problem, further improving on the performance of any single integrated approach.

Thomas Bläsius, Marcel Radermacher, Ignaz Rutter: How to Draw a Planarization. Software Seminar (SOFSEM) 2017: 295–308
We study the problem of computing straight-line drawings of non-planar graphs with few crossings. We assume that a crossing-minimization algorithm is applied first, yielding a planarization, i.e., a planar graph with a dummy vertex for each crossing, which fixes the topology of the resulting drawing. We present and evaluate two different approaches for drawing a planarization in such a way that the edges of the input graph are as straight as possible. The first approach is based on the planarity-preserving force-directed algorithm ImPrEd; the second approach, which we call Geometric Planarization Drawing, iteratively moves vertices to their locally optimal positions in the given initial drawing.

Duc-Cuong Dang, Tobias Friedrich, Timo Kötzing, Martin S. Krejca, Per Kristian Lehre, Pietro S. Oliveto, Dirk Sudholt, Andrew M. Sutton: Escaping Local Optima Using Crossover with Emergent Diversity. IEEE Transactions on Evolutionary Computation 2017
Population diversity is essential for the effective use of any crossover operator. We compare seven commonly used diversity mechanisms and prove rigorous run time bounds for the \((\mu+1)\) GA using uniform crossover on the fitness function \(\mathrm{Jump}_k\). All previous results in this context only hold for unrealistically low crossover probability \(p_c = O(k/n)\), while we give analyses for the setting of constant \(p_c < 1\) in all but one case. Our bounds show a dependence on the problem size \(n\), the jump length \(k\), the population size \(\mu\), and the crossover probability \(p_c\). For the typical case of constant \(k > 2\) and constant \(p_c\), we can compare the resulting expected optimisation times for the different diversity mechanisms, assuming an optimal choice of \(\mu\): \(O(n^{k-1})\) for duplicate elimination/minimisation, \(O(n^2 \log n)\) for maximising the convex hull, \(O(n \log n)\) for deterministic crowding (assuming \(p_c = k/n\)), \(O(n \log n)\) for maximising the Hamming distance, \(O(n \log n)\) for fitness sharing, and \(O(n \log n)\) for the single-receiver island model. This proves a sizeable advantage of all variants of the \((\mu+1)\) GA compared to the (1+1) EA, which requires \(\Theta(n^k)\). In a short empirical study we confirm that the asymptotic differences can also be observed experimentally.

Tobias Friedrich, Sven Ihde, Christoph Keßler, Pascal Lenzner, Stefan Neubert, David Schumann: Efficient Best Response Computation for Strategic Network Formation under Attack. Symposium on Algorithmic Game Theory (SAGT) 2017
Inspired by real-world examples, e.g., the Internet, researchers have introduced an abundance of strategic games to study natural phenomena in networks. Unfortunately, almost all of these games have the conceptual drawback of being computationally intractable, i.e., computing a best response strategy or checking if an equilibrium is reached is NP-hard. Thus, a main challenge in the field is to find tractable and realistic network formation models. We address this challenge by investigating a very recently introduced model by Goyal et al. [WINE'16] which focuses on robust networks in the presence of a strong adversary who attacks (and kills) nodes in the network and lets this attack spread virus-like to neighboring nodes and their neighbors. Our main result is to establish that this natural model is one of the few exceptions which are both realistic and computationally tractable. In particular, we answer an open question of Goyal et al. by providing an efficient algorithm for computing a best response strategy, which implies that deciding whether the game has reached a Nash equilibrium can be done efficiently as well. Our algorithm essentially solves the problem of computing a minimal connection to a network which maximizes reachability while hedging against severe attacks on the network infrastructure, and may thus be of independent interest.

Andreas Göbel: Counting, Modular Counting and Graph Homomorphisms. 2017

Thomas Bläsius, Tobias Friedrich, Anton Krohmer: Cliques in Hyperbolic Random Graphs. Algorithmica 2017
Most complex real-world networks display scale-free features. This characteristic motivated the study of numerous random graph models with a power-law degree distribution. There is, however, no established and simple model which also has the high clustering of vertices typically observed in real data. Hyperbolic random graphs bridge this gap. This natural model has recently been introduced by Krioukov et al. and has been shown, theoretically and empirically, to fulfill all typical properties of real-world networks, including power-law degree distribution and high clustering. We study cliques in hyperbolic random graphs \(G\) and present new results on the expected number of \(k\)-cliques \(E(K_k)\) and the size of the largest clique \(\omega(G)\). We observe that there is a phase transition at power-law exponent \(\beta = 3\). More precisely, for \(\beta \in (2,3)\) we prove \(E(K_k) = n^{k(3-\beta)/2}\,\Theta(k)^{-k}\) and \(\omega(G) = \Theta(n^{(3-\beta)/2})\), while for \(\beta \geq 3\) we prove \(E(K_k) = n\,\Theta(k)^{-k}\) and \(\omega(G) = \Theta(\log(n)/\log\log n)\). Furthermore, we show that for \(\beta \geq 3\), cliques in hyperbolic random graphs can be computed in time \(O(n)\). If the underlying geometry is known, cliques can be found with worst-case runtime \(O(m \cdot n^{2.5})\) for all values of \(\beta\).

Tobias Friedrich, Sven Ihde, Christoph Keßler, Pascal Lenzner, Stefan Neubert, David Schumann: Brief Announcement: Efficient Best Response Computation for Strategic Network Formation under Attack. Symposium on Parallelism in Algorithms and Architectures (SPAA) 2017: 321–323
Inspired by real-world examples, e.g., the Internet, researchers have introduced an abundance of strategic games to study natural phenomena in networks. Unfortunately, almost all of these games have the conceptual drawback of being computationally intractable, i.e., computing a best response strategy or checking if an equilibrium is reached is NP-hard. Thus, a main challenge in the field is to find tractable and realistic network formation models. We address this challenge by establishing that the recently introduced model by Goyal et al. [WINE'16], which focuses on robust networks in the presence of a strong adversary, is a rare exception which is both realistic and computationally tractable. In particular, we sketch an efficient algorithm for computing a best response strategy, which implies that deciding whether the game has reached a Nash equilibrium can be done efficiently as well. Our algorithm essentially solves the problem of computing a minimal connection to a network which maximizes reachability while hedging against severe attacks on the network infrastructure.

Tobias Friedrich, Anton Krohmer, Ralf Rothenberger, Thomas Sauerwald, Andrew M. Sutton: Bounds on the Satisfiability Threshold for Power Law Distributed Random SAT. European Symposium on Algorithms (ESA) 2017: 37:1–37:15
Propositional satisfiability (SAT) is one of the most fundamental problems in computer science. The worst-case hardness of SAT lies at the core of computational complexity theory. The average-case analysis of SAT has triggered the development of sophisticated rigorous and non-rigorous techniques for analyzing random structures. Despite a long line of research and substantial progress, nearly all theoretical work on random SAT assumes a uniform distribution on the variables. In contrast, real-world instances often exhibit large fluctuations in variable occurrence. This can be modeled by a scale-free distribution of the variables, which results in distributions closer to industrial SAT instances. We study random \(k\)-SAT on \(n\) variables, \(m = \Theta(n)\) clauses, and a power law distribution on the variable occurrences with exponent \(\beta\). We observe a satisfiability threshold at \(\beta = (2k-1)/(k-1)\). This threshold is tight in the sense that instances with \(\beta < (2k-1)/(k-1) - \varepsilon\) for any constant \(\varepsilon > 0\) are unsatisfiable with high probability (w.h.p.). For \(\beta \geq (2k-1)/(k-1) + \varepsilon\), the picture is reminiscent of the uniform case: instances are satisfiable w.h.p. for sufficiently small constant clause-variable ratios \(m/n\); they are unsatisfiable above a ratio \(m/n\) that depends on \(\beta\).
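The following sketch generates a random \(k\)-CNF whose variable occurrences roughly follow a power law, using one common weighting scheme (probability of variable \(i\) proportional to \(i^{-1/(\beta-1)}\)); the paper's exact sampling model may differ in details, and all names and parameters here are illustrative:

```python
import random

def powerlaw_ksat(n, m, k, beta, seed=0):
    """Sample m clauses of k distinct literals over n variables, where the
    probability of picking variable i is proportional to i**(-1/(beta-1)),
    a standard way to obtain a power-law occurrence distribution with
    exponent beta. Each picked variable is negated with probability 1/2.
    """
    rng = random.Random(seed)
    weights = [i ** (-1.0 / (beta - 1.0)) for i in range(1, n + 1)]
    clauses = []
    for _ in range(m):
        vars_ = set()
        while len(vars_) < k:  # draw k distinct weighted variables
            vars_.add(rng.choices(range(1, n + 1), weights=weights)[0])
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_])
    return clauses
```

With \(\beta\) close to 2 the weights are heavily skewed toward low-index variables, producing the large occurrence fluctuations described above; large \(\beta\) approaches the uniform model.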

Benjamin Doerr, Timo Kötzing, J. A. Gregor Lagodzinski, Johannes Lengler: Bounding Bloat in Genetic Programming. Genetic and Evolutionary Computation Conference (GECCO) 2017: 921–928
While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations. A naturally occurring problem is that of bloat (unnecessary growth of solutions) slowing down optimization. Theoretical analyses could so far not bound bloat and required explicit assumptions on its magnitude. In this paper we analyze bloat in mutation-based genetic programming for the two test functions ORDER and MAJORITY. We overcome previous assumptions on the magnitude of bloat and give matching or close-to-matching upper and lower bounds for the expected optimization time. In particular, we show that the \((1+1)\) GP takes (i) \(\Theta(T_{\mathrm{init}} + n \log n)\) iterations with bloat control on ORDER as well as MAJORITY, and (ii) \(O(T_{\mathrm{init}} \log T_{\mathrm{init}} + n (\log n)^3)\) and \(\Omega(T_{\mathrm{init}} + n \log n)\) (and \(\Omega(T_{\mathrm{init}} \log T_{\mathrm{init}})\) for \(n = 1\)) iterations without bloat control on MAJORITY.

Rupert Hölzl, Sanjay Jain, Philipp Schlicht, Karen Seidel, Frank Stephan: Automatic Learning from Repetitive Texts. Algorithmic Learning Theory (ALT) 2017

Ankit Chauhan, Tobias Friedrich, Francesco Quinzan: Approximating Optimization Problems using EAs on Scale-Free Networks. Genetic and Evolutionary Computation Conference (GECCO) 2017: 235–242
It has been experimentally observed that real-world networks follow certain topological properties, such as small-world and power-law properties. To study these networks, many random graph models, such as Preferential Attachment, have been proposed. In this paper, we consider deterministic properties which capture power-law degree distribution and degeneracy. Networks with these properties are known as scale-free networks in the literature. Many interesting problems remain NP-hard on scale-free networks. We study the relationship between scale-free properties and the approximation ratio of some commonly used evolutionary algorithms. For Vertex Cover, we observe experimentally that the \((1+1)\) EA always gives better results than a greedy local search, even when it runs for only \(O(n \log n)\) steps. We give the construction of a scale-free network in which a multi-objective algorithm and a greedy algorithm obtain optimal solutions, while the \((1+1)\) EA obtains the worst possible solution with constant probability. We prove that for Dominating Set, Vertex Cover, Connected Dominating Set, and Independent Set, the \((1+1)\) EA obtains constant-factor approximations in expected run time \(O(n \log n)\) and \(O(n^4)\), respectively. The GSEMO gives even better approximations than the \((1+1)\) EA in expected run time \(O(n^3)\) for Dominating Set, Vertex Cover, and Connected Dominating Set on such networks.

Tobias Friedrich, Kötzing, Timo, Melnichenko, Anna Analyzing Search Heuristics with Differential Equations. Genetic and Evolutionary Computation Conference (GECCO) 2017: 313-314
Drift Theory is currently the most common technique for the analysis of randomized search heuristics because of its broad applicability and the resulting tight first hitting time bounds. The biggest problem when applying a drift theorem is to find a suitable potential function which maps a complex space into a single number, capturing the essence of the state of the search in just one value. We discuss another method for the analysis of randomized search heuristics based on the Theory of Differential Equations. This method considers the deterministic counterpart of the randomized process by replacing probabilistic outcomes by their expectation, and then bounding the error with good probability. We illustrate this by analyzing an Ant Colony Optimization algorithm (ACO) for the Minimum Spanning Tree problem (MST).

Tobias Friedrich, Kötzing, Timo, Lagodzinski, J. A. Gregor, Neumann, Frank, Schirneck, Martin Analysis of the (1+1) EA on Subclasses of Linear Functions under Uniform and Linear Constraints. Foundations of Genetic Algorithms (FOGA) 2017: 45-54
Linear functions have gained a lot of attention in the area of run time analysis of evolutionary computation methods, and the corresponding analyses have provided many effective tools for analyzing more complex problems. In this paper, we consider the behavior of the classical (1+1) Evolutionary Algorithm for linear functions under a linear constraint. We show tight bounds in the case where both the objective function and the constraint are given by the OneMax function, and present upper as well as lower bounds for the general case. Furthermore, we also consider the LeadingOnes fitness function.

Andreas Galanis, Göbel, Andreas, Goldberg, Leslie Ann, Lapinskas, John, Richerby, David Amplifiers for the Moran Process. Journal of the ACM 2017: 5:1-5:90
The Moran process, as studied by Lieberman, Hauert, and Nowak, is a randomised algorithm modelling the spread of genetic mutations in populations. The algorithm runs on an underlying graph where individuals correspond to vertices. Initially, one vertex (chosen uniformly at random) possesses a mutation, with fitness \(r > 1\). All other individuals have fitness 1. During each step of the algorithm, an individual is chosen with probability proportional to its fitness, and its state (mutant or non-mutant) is passed on to an out-neighbour which is chosen uniformly at random. If the underlying graph is strongly connected, then the algorithm will eventually reach fixation, in which all individuals are mutants, or extinction, in which no individuals are mutants. An infinite family of directed graphs is said to be strongly amplifying if, for every \(r > 1\), the extinction probability tends to 0 as the number of vertices increases. A formal definition is provided in the article. Strong amplification is a rather surprising property: it means that in such graphs, the fixation probability of a uniformly placed initial mutant tends to 1 even though the initial mutant only has a fixed selective advantage of \(r > 1\) (independently of \(n\)). The name "strongly amplifying" comes from the fact that this selective advantage is "amplified." Strong amplifiers have received quite a bit of attention, and Lieberman et al. proposed two potentially strongly amplifying families, superstars and metafunnels. Heuristic arguments have been published, arguing that there are infinite families of superstars that are strongly amplifying. The same has been claimed for metafunnels. In this article, we give the first rigorous proof that there is an infinite family of directed graphs that is strongly amplifying. We call the graphs in the family "megastars."
When the algorithm is run on an \(n\)-vertex graph in this family, starting with a uniformly chosen mutant, the extinction probability is roughly \(n^{-1/2}\) (up to logarithmic factors). We prove that all infinite families of superstars and metafunnels have larger extinction probabilities (as a function of \(n\)). Finally, we prove that our analysis of megastars is fairly tight: there is no infinite family of megastars such that the Moran algorithm gives a smaller extinction probability (up to logarithmic factors). Also, we provide a counterexample which clarifies the literature concerning the isothermal theorem of Lieberman et al.
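The Moran process described above is straightforward to simulate; the following sketch estimates the fixation probability of a uniformly placed mutant by Monte-Carlo sampling (function names and the trial budget are illustrative, not part of the article's analysis).

```python
import random

def moran_step(adj, state, r, rng):
    """One step: pick an individual with probability proportional to its
    fitness (r for mutants, 1 otherwise) and copy its state to a uniformly
    random out-neighbour.  `state[v]` is True iff v is a mutant."""
    weights = [r if state[v] else 1.0 for v in range(len(adj))]
    v = rng.choices(range(len(adj)), weights=weights)[0]
    u = rng.choice(adj[v])
    state[u] = state[v]

def fixation_probability(adj, r, trials=2000, seed=0):
    """Monte-Carlo estimate for a strongly connected digraph given as
    adjacency lists of out-neighbours."""
    rng = random.Random(seed)
    n = len(adj)
    fixations = 0
    for _ in range(trials):
        state = [False] * n
        state[rng.randrange(n)] = True         # uniformly placed mutant
        while 0 < sum(state) < n:              # run to fixation/extinction
            moran_step(adj, state, r, rng)
        fixations += all(state)
    return fixations / trials

# Complete graph K_3: regular, so the isothermal theorem applies and the
# fixation probability is (1 - 1/r) / (1 - 1/r^n) ~ 0.571 for r = 2, n = 3.
adj = [[1, 2], [0, 2], [0, 1]]
p = fixation_probability(adj, r=2.0)
```

Running the same estimator on candidate amplifier families shows how the extinction probability shrinks with \(n\), which is the quantity the article bounds rigorously.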

Tobias Friedrich, Kötzing, Timo, Wagner, Markus A Generic Bet-and-Run Strategy for Speeding Up Stochastic Local Search. Association for the Advancement of Artificial Intelligence (AAAI) 2017: 801-807
A common strategy for improving optimization algorithms is to restart the algorithm when it is believed to be trapped in an inferior part of the search space. However, while specific restart strategies have been developed for specific problems (and specific algorithms), restarts are typically not regarded as a general tool to speed up an optimization algorithm. In fact, many optimization algorithms do not employ restarts at all. Recently, "bet-and-run" was introduced in the context of mixed-integer programming, where first a number of short runs with randomized initial conditions is made, and then the most promising run of these is continued. In this article, we consider two classical NP-complete combinatorial optimization problems, traveling salesperson and minimum vertex cover, and study the effectiveness of different bet-and-run strategies. In particular, our restart strategies do not take any problem knowledge into account, nor are they tailored to the optimization algorithm. Therefore, they can be used off-the-shelf. We observe that state-of-the-art solvers for these problems can benefit significantly from restarts on standard benchmark instances.
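The generic bet-and-run scheme described above can be sketched as follows; `run_fn`, the toy `demo_run` search, and the budgets `k`, `t1`, `t2` are hypothetical placeholders for any resumable stochastic local search, not the paper's experimental setup.

```python
import random

def bet_and_run(run_fn, k=10, t1=100, t2=5000, seed=0):
    """Bet-and-run sketch: start k short runs of length t1 from random
    initial conditions, then continue only the most promising run for t2
    further steps.  `run_fn(steps, state=None, seed=None)` must return
    (value, state), lower value is better."""
    rng = random.Random(seed)
    candidates = [run_fn(t1, seed=rng.random()) for _ in range(k)]
    _, best_state = min(candidates, key=lambda c: c[0])
    return run_fn(t2, state=best_state)

def demo_run(steps, state=None, seed=None):
    """Toy resumable search: hill climbing toward 0 on f(x) = |x|."""
    rng = random.Random(seed)
    x = state if state is not None else rng.randint(-1000, 1000)
    for _ in range(steps):
        y = x + rng.choice([-1, 1])
        if abs(y) < abs(x):
            x = y
    return abs(x), x

value, _ = bet_and_run(demo_run, k=5, t1=50, t2=3000)
```

The scheme is problem-agnostic by design: only the ability to pause and resume a run is required of the solver.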

Timo Kötzing, Schirneck, Martin Towards an Atlas of Computational Learning Theory. Symposium on Theoretical Aspects of Computer Science (STACS) 2016: 47:1-47:13
A major part of our knowledge about Computational Learning stems from comparisons of the learning power of different learning criteria. These comparisons inform about trade-offs between learning restrictions and, more generally, learning settings; furthermore, they inform about what restrictions can be observed without losing learning power. With this paper we propose that one main focus of future research in Computational Learning should be on a structured approach to determine the relations of different learning criteria. In particular, we propose that, for small sets of learning criteria, all pairwise relations should be determined; these relations can then be easily depicted as a map, a diagram detailing the relations. Once we have maps for many relevant sets of learning criteria, the collection of these maps is an Atlas of Computational Learning Theory, informing at a glance about the landscape of computational learning just as a geographical atlas informs about the earth. In this paper we work toward this goal by providing three example maps, one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning. These maps can serve as blueprints for future maps of similar base structure.

John Case, Kötzing, Timo Topological Separations in Inductive Inference. Theoretical Computer Science 2016: 33-45
Re learning in the limit from positive data, a major concern is which classes of languages are learnable with respect to a given learning criterion. We are particularly interested herein in the reasons for a class of languages to be unlearnable. We consider two types of reasons. One type is called topological, where it does not help if the learners are allowed to be uncomputable (an example of Gold's is that no class containing an infinite language and all its finite sublanguages is learnable, even by an uncomputable learner). Another reason is called computational (where the learners are required to be algorithmic). In particular, two learning criteria might allow for learning different classes of languages from one another, but with dependence on whether the unlearnability is of type topological or computational. In this paper we formalize the idea of two learning criteria separating topologically in learning power. This allows us to study more closely why two learning criteria separate in learning power. For a variety of learning criteria, concerning vacillatory, monotone, (several kinds of) iterative and feedback learning, we show that certain learning criteria separate topologically, and certain others, which are known to separate, are shown not to separate topologically. Showing that learning criteria do not separate topologically implies that any known separation must necessarily exploit algorithmicity of the learner.

Benjamin Doerr, Doerr, Carola, Kötzing, Timo The Right Mutation Strength for Multi-Valued Decision Variables. Genetic and Evolutionary Computation Conference (GECCO) 2016: 1115-1122
The most common representation in evolutionary computation is the bit string. This is ideal for modeling binary decision variables, but less useful for variables taking more values. With very little theoretical work existing on how to use evolutionary algorithms for such optimization problems, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over \(\Omega = \{0, 1, \dots, r-1\}^n\). More precisely, we regard a variety of problem classes requesting the component-wise minimization of the distance to an unknown target vector \(z \in \Omega\). For such problems we see a crucial difference in how we extend the standard bit mutation operator to these multi-valued domains. While it is natural to select each position of the solution vector to be changed independently with probability \(1/n\), there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of \(\Theta(nr\log n)\). If we change each selected position by either \(+1\) or \(-1\) (random choice), the optimization time reduces to \(\Theta(nr+n\log n)\). If we use a random mutation strength \(i \in \{1, \dots, r-1\}\) with probability inversely proportional to \(i\) and change the selected position by either \(+i\) or \(-i\) (random choice), then the optimization time becomes \(\Theta(n\log(r)(\log(n)+\log(r)))\), bringing down the dependence on \(r\) from linear to polylogarithmic. One of our results depends on a new variant of the lower bounding multiplicative drift theorem.
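The three mutation schemes compared above can be sketched as follows. Whether a step wraps around modulo \(r\) or is clamped to the interval depends on the metric considered, so the modulo variant here is just one illustrative choice; the mode names are likewise ours.

```python
import random

def mutate(x, r, mode, rng):
    """One offspring under the three position-wise variation schemes.
    Every position is selected independently with probability 1/n; a
    selected position in {0,...,r-1} is then changed according to `mode`.
    Steps wrap around modulo r here (an illustrative choice)."""
    n = len(x)
    y = list(x)
    for i in range(n):
        if rng.random() < 1 / n:
            if mode == "uniform":
                # random different value: expected time Theta(n r log n)
                y[i] = rng.choice([v for v in range(r) if v != x[i]])
            elif mode == "pm1":
                # +1 or -1 uniformly: Theta(n r + n log n)
                y[i] = (x[i] + rng.choice([-1, 1])) % r
            elif mode == "harmonic":
                # step size i chosen with probability proportional to 1/i
                step = rng.choices(range(1, r),
                                   weights=[1 / j for j in range(1, r)])[0]
                y[i] = (x[i] + rng.choice([-1, 1]) * step) % r
    return y

rng = random.Random(1)
y = mutate([0] * 8, 10, "harmonic", rng)
assert len(y) == 8 and all(0 <= v < 10 for v in y)
```

The harmonic variant makes small steps common but still allows occasional long jumps, which is what brings the dependence on \(r\) down to polylogarithmic.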

Thomas Bläsius, Friedrich, Tobias, Schirneck, Martin The Parameterized Complexity of Dependency Detection in Relational Databases. International Symposium on Parameterized and Exact Computation (IPEC) 2016: 6:1-6:13
We study the parameterized complexity of classical problems that arise in the profiling of relational data. Namely, we characterize the complexity of detecting unique column combinations (candidate keys), functional dependencies, and inclusion dependencies with the solution size as parameter. While the discovery of uniques and functional dependencies, respectively, turns out to be W[2]-complete, the detection of inclusion dependencies is one of the first natural problems proven to be complete for the class W[3]. As a side effect, our reductions give insights into the complexity of enumerating all minimal unique column combinations or functional dependencies.

Tobias Friedrich, Kötzing, Timo, Krejca, Martin S., Sutton, Andrew M. The Benefit of Recombination in Noisy Evolutionary Search. Genetic and Evolutionary Computation Conference (GECCO) 2016: 161-162
Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, randomized search heuristics such as evolutionary algorithms are a popular choice because they are often assumed to exhibit some kind of resistance to noise. Empirical evidence suggests that some algorithms, such as estimation of distribution algorithms (EDAs), are robust against a scaling of the noise intensity, even without resorting to explicit noise-handling techniques such as resampling. In this paper, we want to support such claims with mathematical rigor. We introduce the concept of graceful scaling, in which the run time of an algorithm scales polynomially with noise intensity. We study a monotone fitness function over binary strings with additive noise taken from a Gaussian distribution. We show that myopic heuristics cannot efficiently optimize the function under arbitrarily intense noise without any explicit noise-handling. Furthermore, we prove that using a population does not help. Finally, we show that a simple EDA called the Compact Genetic Algorithm can overcome the short-sightedness of mutation-only heuristics to scale gracefully with noise. We conjecture that recombinative genetic algorithms also have this property.
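The Compact Genetic Algorithm mentioned above maintains a frequency vector instead of a population. A minimal noise-free sketch on OneMax (the parameter names, borders, and iteration budget are illustrative, not the paper's analysis):

```python
import random

def cga_onemax(n, K, iters=50000, seed=0):
    """Compact Genetic Algorithm sketch on noise-free OneMax: keep a
    frequency vector p, sample two offspring per iteration, and shift
    each frequency by 1/K toward the better one.  Frequencies are kept
    within the usual borders [1/n, 1 - 1/n]."""
    rng = random.Random(seed)
    p = [0.5] * n
    best, best_f = None, -1
    for _ in range(iters):
        a = [int(rng.random() < pi) for pi in p]
        b = [int(rng.random() < pi) for pi in p]
        if sum(a) < sum(b):
            a, b = b, a                    # a is now the winner
        if sum(a) > best_f:
            best, best_f = a, sum(a)
        for i in range(n):
            if a[i] != b[i]:               # shift p[i] toward the winner
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + (1 if a[i] else -1) / K))
    return best, best_f

best, best_f = cga_onemax(20, 50)
```

Because each update moves frequencies by only \(1/K\), sufficiently large \(K\) averages out noisy comparisons, which is the intuition behind the graceful-scaling result.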

Andrew M. Sutton Superpolynomial Lower Bounds for the (1+1) EA on Some Easy Combinatorial Problems. Algorithmica 2016: 507-528
The (1+1) EA is a simple evolutionary algorithm that is known to be efficient on linear functions and on some combinatorial optimization problems. In this paper, we rigorously study its behavior on two easy combinatorial problems: finding the 2-coloring of a class of bipartite graphs, and constructing satisfying assignments for a class of satisfiable 2-CNF Boolean formulas. We prove that it is inefficient on both problems in the sense that the number of iterations the algorithm needs to minimize the cost functions is superpolynomial with high probability. Our motivation is to better understand the influence of problem instance structure on the runtime character of a simple evolutionary algorithm. We are interested in what kind of structural features give rise to so-called metastable states at which, with probability \(1 - o(1)\), the (1+1) EA becomes trapped and subsequently has difficulty leaving. Finally, we show how to modify the (1+1) EA slightly in order to obtain a polynomial-time performance guarantee on both problems.

John Case, Kötzing, Timo Strongly non-U-shaped language learning results by general techniques. Information and Computation 2016: 1-15
In learning, a semantic or behavioral U-shape occurs when a learner first learns, then unlearns, and, finally, relearns, some target concept. This paper introduces two general techniques and applies them especially to syntactic U-shapes in learning: one technique to show when they are necessary and one to show when they are unnecessary. The technique for the former is very general and applicable to a much wider range of learning criteria. It employs so-called self-learning classes of languages, which are shown to characterize completely one criterion learning more than another. We apply these techniques to show that, for set-driven and rearrangement-independent learning, any kind of U-shape is unnecessary. Furthermore, we show that U-shapes are necessary in a strong way for iterative learning, contrasting with an earlier result by Case and Moelius that semantic U-shapes are unnecessary for iterative learning.

Thomas Bläsius, Rutter, Ignaz Simultaneous PQ-Ordering with Applications to Constrained Embedding Problems. Transactions on Algorithms 2016: 16
In this article, we define and study the new problem of SIMULTANEOUS PQ-ORDERING. Its input consists of a set of PQ-trees, which represent sets of circular orders of their leaves, together with a set of child-parent relations between these PQ-trees, such that the leaves of the child form a subset of the leaves of the parent. SIMULTANEOUS PQ-ORDERING asks whether orders of the leaves of each of the trees can be chosen simultaneously; that is, for every child-parent relation, the order chosen for the parent is an extension of the order chosen for the child. We show that SIMULTANEOUS PQ-ORDERING is NP-complete in general, and we identify a family of instances that can be solved efficiently, the 2-fixed instances. We show that this result serves as a framework for several other problems that can be formulated as instances of SIMULTANEOUS PQ-ORDERING. In particular, we give linear-time algorithms for recognizing simultaneous interval graphs and extending partial interval representations. Moreover, we obtain a linear-time algorithm for PARTIALLY PQ-CONSTRAINED PLANARITY for biconnected graphs, which asks for a planar embedding in the presence of PQ-trees that restrict the possible orderings of edges around vertices, and a quadratic-time algorithm for SIMULTANEOUS EMBEDDING WITH FIXED EDGES for biconnected graphs with a connected intersection. Both results can be extended to the case where the input graphs are not necessarily biconnected but have the property that each cut-vertex is contained in at most two nontrivial blocks. This includes, for example, the case where both graphs have a maximum degree of 5.

Tobias Friedrich Scale-Free Networks, Hyperbolic Geometry, and Efficient Algorithms. Symposium on Mathematical Foundations of Computer Science (MFCS) 2016: 4:1-4:3
The node degrees of large real-world networks often follow a power-law distribution. Such scale-free networks can be social networks, internet topologies, the web graph, power grids, or many other networks from literally hundreds of domains. The talk will introduce several mathematical models of scale-free networks (e.g. preferential attachment graphs, Chung-Lu graphs, hyperbolic random graphs) and analyze some of their properties (e.g. diameter, average distance, clustering). We then present several algorithms and distributed processes on and for these network models (e.g. rumor spreading, load balancing, de-anonymization, embedding) and discuss a number of open problems. The talk assumes no prior knowledge about scale-free networks, distributed computing or hyperbolic geometry.

Moritz Baum, Bläsius, Thomas, Gemsa, Andreas, Rutter, Ignaz, Wegner, Franziska Scalable Exact Visualization of Isocontours in Road Networks via Minimum-Link Paths. European Symposium on Algorithms (ESA) 2016: 7:1-7:18
Isocontours in road networks represent the area that is reachable from a source within a given resource limit. We study the problem of computing accurate isocontours in realistic, large-scale networks. We propose isocontours represented by polygons with a minimum number of segments that separate reachable and unreachable components of the network. Since the resulting problem is not known to be solvable in polynomial time, we introduce several heuristics that run in (almost) linear time and are simple enough to be implemented in practice. A key ingredient is a new practical linear-time algorithm for minimum-link paths in simple polygons. Experiments in a challenging realistic setting show excellent performance of our algorithms in practice, computing near-optimal solutions in a few milliseconds on average, even for long ranges.

Christian Gießen, Kötzing, Timo Robustness of Populations in Stochastic Environments. Algorithmica 2016: 462-489
We consider stochastic versions of OneMax and LeadingOnes and analyze the performance of evolutionary algorithms with and without populations on these problems. It is known that the (1+1) EA on OneMax performs well in the presence of very small noise, but poorly for higher noise levels. We extend these results to LeadingOnes and to many different noise models, showing how the application of drift theory can significantly simplify and generalize previous analyses. Most surprisingly, even small populations (of size \(\Theta(\log n)\)) can make evolutionary algorithms perform well for high noise levels, well outside the abilities of the (1+1) EA. Larger population sizes are even more beneficial; we consider both parent and offspring populations. In this sense, populations are robust in these stochastic settings.

Tobias Friedrich, Kötzing, Timo, Krejca, Martin S., Sutton, Andrew M. Robustness of Ant Colony Optimization to Noise. Evolutionary Computation 2016: 237-254
Recently Ant Colony Optimization (ACO) algorithms have been proven to be efficient in uncertain environments, such as noisy or dynamically changing fitness functions. Most of these analyses focus on combinatorial problems, such as path finding. We analyze an ACO algorithm in a setting where we try to optimize the simple OneMax test function, but with additive posterior noise sampled from a Gaussian distribution. Without noise the classical \((\mu+1)\)EA outperforms any ACO algorithm, with smaller \(\mu\) being better; however, with large noise, the \((\mu+1)\)EA fails, even for high values of \(\mu\) (which are known to help against small noise). In this paper we show that ACO is able to deal with arbitrarily large noise in a graceful manner, that is, as long as the evaporation factor \(p\) is small enough dependent on the parameter \(\delta^2\) of the noise and the dimension \(n\) of the search space \((p = o(1/(n(n + \delta \log n)^2 \log n)))\), optimization will be successful.

Benjamin Doerr, Doerr, Carola, Kötzing, Timo Provably Optimal Self-Adjusting Step Sizes for Multi-Valued Decision Variables. Parallel Problem Solving From Nature (PPSN) 2016: 782-791
We regard the problem of maximizing a OneMax-like function defined over an alphabet of size \(r\). In previous work [GECCO 2016] we have investigated how three different mutation operators influence the performance of Randomized Local Search (RLS) and the (1+1) Evolutionary Algorithm. This work revealed that among these natural mutation operators none is superior to the other two for any choice of \(r\). We have also given in [GECCO 2016] some indication that the best achievable run time for large \(r\) is \(\Theta(n \log r(\log n + \log r))\), regardless of how the mutation operator is chosen, as long as it is a static choice (i.e., the distribution used for variation of the current individual does not change over time). Here in this work we show that we can achieve a better performance if we allow for adaptive mutation operators. More precisely, we analyze the performance of RLS using a self-adjusting mutation strength. In this algorithm the size of the steps taken in each iteration depends on the success of previous iterations. That is, the mutation strength is increased after a successful iteration and it is decreased otherwise. We show that this idea yields an expected optimization time of \(\Theta(n(\log n + \log r))\), which is optimal among all comparison-based search heuristics. This is the first time that self-adjusting parameter choices are shown to outperform static choices on a discrete multi-valued optimization problem.
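The self-adjusting idea above, increase the step size after a successful iteration and decrease it otherwise, can be sketched as follows; the doubling/halving rule, the per-position step sizes, and the clamping to the interval are illustrative simplifications, not the exact algorithm analyzed in the paper.

```python
import random

def self_adjusting_rls(z, r, iters=20000, seed=0):
    """RLS sketch with self-adjusting step sizes, minimizing the distance
    to a hidden target z in {0,...,r-1}^n under the interval metric.
    On success the step size of the touched position doubles, on failure
    it halves (one simple instance of the increase/decrease rule)."""
    rng = random.Random(seed)
    n = len(z)
    x = [rng.randrange(r) for _ in range(n)]
    v = [1] * n                                  # per-position step sizes

    def f(y):                                    # distance to target
        return sum(abs(yi - zi) for yi, zi in zip(y, z))

    fx = f(x)
    for _ in range(iters):
        i = rng.randrange(n)
        y = list(x)
        y[i] = min(r - 1, max(0, x[i] + rng.choice([-1, 1]) * v[i]))
        fy = f(y)
        if fy <= fx:                             # success: accept, larger steps
            x, fx = y, fy
            v[i] = min(r - 1, 2 * v[i])
        else:                                    # failure: smaller steps
            v[i] = max(1, v[i] // 2)
    return x

z = [7, 0, 99, 50]
x = self_adjusting_rls(z, 100)
```

The step sizes grow while long moves keep paying off and shrink near the optimum, mimicking the binary-search-like behavior that yields the \(\Theta(n(\log n + \log r))\) bound.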

Tobias Arndt, Hafner, Danijar, Kellermeier, Thomas, Krogmann, Simon, Razmjou, Armin, Krejca, Martin S., Rothenberger, Ralf, Friedrich, Tobias Probabilistic Routing for On-Street Parking Search. European Symposium on Algorithms (ESA) 2016: 6:1-6:13
An estimated \(30\%\) of urban traffic is caused by search for parking spots. Traffic could be reduced by suggesting effective routes leading along potential parking spots. In this paper, we formalize parking search as a probabilistic problem on a road graph and show that it is NP-complete. We explore heuristics that optimize for the driving duration and the walking distance to the destination. Routes are constrained to reach a certain probability threshold of finding a spot. Empirically estimated probabilities of successful parking attempts are provided by TomTom on a per-street basis. We release these probabilities as a dataset of about 80,000 roads covering the Berlin area. This allows parking search algorithms to be evaluated on a real road network with realistic probabilities for the first time. However, for many other areas, parking probabilities are not openly available. Because they are effortful to collect, we propose an algorithm that relies on conventional road attributes only. Our experiments show that this algorithm stays within a factor of 1.3 of the baseline in our cost measure. This leads to the conclusion that conventional road attributes may be sufficient to compute reasonably good parking search routes.

Thomas Bläsius, Lehmann, Sebastian, Rutter, Ignaz Orthogonal Graph Drawing with Inflexible Edges. Computational Geometry 2016: 26-40
We consider the problem of creating plane orthogonal drawings of 4-planar graphs (planar graphs with maximum degree 4) with constraints on the number of bends per edge. More precisely, we have a flexibility function assigning to each edge \(e\) a natural number \(flex(e)\), its flexibility. The problem FlexDraw asks whether there exists an orthogonal drawing such that each edge \(e\) has at most \(flex(e)\) bends. It is known that FlexDraw is NP-hard if \(flex(e)=0\) for every edge \(e\). On the other hand, FlexDraw can be solved efficiently if \(flex(e) \ge 1\) and is trivial if \(flex(e) \ge 2\) for every edge \(e\). To close the gap between the NP-hardness for \(flex(e)=0\) and the efficient algorithm for \(flex(e) \ge 1\), we investigate the computational complexity of FlexDraw in case only few edges are inflexible (i.e., have flexibility 0). We show that for any \(\epsilon > 0\) FlexDraw is NP-complete for instances with \(O(n^{\epsilon})\) inflexible edges with pairwise distance \(\Omega(n^{1-\epsilon})\) (including the case where they induce a matching). On the other hand, we give an FPT algorithm with running time \(O(2^k \cdot n \cdot T_{\mathrm{flow}}(n))\), where \(T_{\mathrm{flow}}(n)\) is the time necessary to compute a maximum flow in a planar flow network with multiple sources and sinks, and \(k\) is the number of inflexible edges having at least one endpoint of degree 4.

Thomas Bläsius, Rutter, Ignaz, Wagner, Dorothea Optimal Orthogonal Graph Drawing with Convex Bend Costs. Transactions on Algorithms 2016: 33
Traditionally, the quality of orthogonal planar drawings is quantified by the total number of bends or the maximum number of bends per edge. However, this neglects that, in typical applications, edges have varying importance. We consider the problem OptimalFlexDraw that is defined as follows. Given a planar graph \(G\) on \(n\) vertices with maximum degree 4 (4-planar graph) and for each edge \(e\) a cost function \(\mathrm{cost}_e \colon \mathbb{N}_0 \rightarrow \mathbb{R}\) defining costs depending on the number of bends \(e\) has, compute a planar orthogonal drawing of \(G\) of minimum cost. In this generality OptimalFlexDraw is NP-hard. We show that it can be solved efficiently if (1) the cost function of each edge is convex and (2) the first bend on each edge does not cause any cost. Our algorithm takes time \(O(n \cdot T_{\mathrm{flow}}(n))\) and \(O(n^2 \cdot T_{\mathrm{flow}}(n))\) for biconnected and connected graphs, respectively, where \(T_{\mathrm{flow}}(n)\) denotes the time to compute a minimum-cost flow in a planar network with multiple sources and sinks. Our result is the first polynomial-time bend-optimization algorithm for general 4-planar graphs optimizing over all embeddings. Previous work considers restricted graph classes and unit costs.

Sanjay Jain, Kötzing, Timo, Ma, Junqi, Stephan, Frank On the Role of Update Constraints and Text-Types in Iterative Learning. Information and Computation 2016: 152-168
The present work investigates the relationship of iterative learning with other learning criteria such as decisiveness, caution, reliability, non-U-shapedness, monotonicity, strong monotonicity and conservativeness. Building on the result of Case and Moelius that iterative learners can be made non-U-shaped, we show that they also can be made cautious and decisive. Furthermore, we obtain various special results with respect to one-one texts, fat texts and one-one hypothesis spaces.

Tobias Friedrich, Kötzing, Timo, Sutton, Andrew M. On the Robustness of Evolving Populations. Parallel Problem Solving From Nature (PPSN) 2016: 771-781
Most theoretical work that studies the benefit of recombination focuses on the ability of crossover to speed up optimization time on specific search problems. In this paper, we take a slightly different perspective and investigate recombination in the context of evolving solutions that exhibit mutational robustness, i.e., they display insensitivity to small perturbations. Various models in population genetics have demonstrated that increasing the effective recombination rate promotes the evolution of robustness. We show that this result also holds in the context of evolutionary computation by proving that crossover promotes the evolution of robust solutions in the standard \((\mu+1)\) GA. Surprisingly, our results show that the effect is present even when robust solutions are at a selective disadvantage due to lower fitness values.

Ankit Chauhan, Lenzner, Pascal, Melnichenko, Anna, Münn, Martin On Selfish Creation of Robust Networks. Symposium on Algorithmic Game Theory (SAGT) 2016: 141-152
Robustness is one of the key properties of today's networks. However, robustness cannot be simply enforced by design or regulation, since many important networks, most prominently the Internet, are not created and controlled by a central authority. Instead, Internet-like networks emerge from strategic decisions of many selfish agents. Interestingly, although lacking a coordinating authority, such naturally grown networks are surprisingly robust while at the same time having desirable properties like a small diameter. To investigate this phenomenon we present the first simple model for selfish network creation which explicitly incorporates agents striving for a central position in the network while at the same time protecting themselves against random edge failure. We show that networks in our model are diverse, and we prove the versatility of our model by adapting various properties and techniques from the non-robust versions, which we then use for establishing bounds on the Price of Anarchy. Moreover, we analyze the computational hardness of finding best possible strategies and investigate the game dynamics of our model.

Thomas Bläsius, Friedrich, Tobias, Krohmer, Anton Hyperbolic Random Graphs: Separators and Treewidth. European Symposium on Algorithms (ESA) 2016: 15:1-15:16
When designing and analyzing algorithms, one can obtain better and more realistic results for practical instances by assuming a certain probability distribution on the input. The worst-case runtime is then replaced by the expected runtime or by bounds that hold with high probability (whp), i.e., with probability \(1 - O(1/n)\), on the random input. Hyperbolic random graphs can be used to model complex real-world networks as they share many important properties such as a small diameter, a large clustering coefficient, and a power-law degree distribution. Divide and conquer is an important algorithmic design principle that works particularly well if the instance admits small separators. We show that hyperbolic random graphs in fact have comparatively small separators. More precisely, we show that a hyperbolic random graph can be expected to have a balanced separator hierarchy with separators of size \(O(\sqrt{n^{3-\beta}})\), \(O(\log n)\), and \(O(1)\) if \(2 < \beta < 3\), \(\beta = 3\), and \(3 < \beta\), respectively (\(\beta\) is the power-law exponent). We infer that these graphs have whp a treewidth of \(O(\sqrt{n^{3-\beta}})\), \(O(\log^2 n)\), and \(O(\log n)\), respectively. For \(2 < \beta < 3\), this matches a known lower bound. For the more realistic (but harder to analyze) binomial model, we still prove a sublinear bound on the treewidth. To demonstrate the usefulness of our results, we apply them to obtain fast matching algorithms and an approximation scheme for Independent Set.

Ankit Chauhan, Friedrich, Tobias, Rothenberger, Ralf Greed is Good for Deterministic Scale-Free Networks. Foundations of Software Technology and Theoretical Computer Science (FSTTCS) 2016: 33:1-33:15
Large real-world networks typically follow a power-law degree distribution. To study such networks, numerous random graph models have been proposed. However, real-world networks are not drawn at random. Therefore, Brach, Cygan, Lacki, and Sankowski [SODA 2016] introduced two natural deterministic conditions: (1) a power-law upper bound on the degree distribution (PLB-U) and (2) power-law neighborhoods, that is, the degree distribution of the neighbors of each vertex is also upper bounded by a power law (PLB-N). They showed that many real-world networks satisfy both deterministic properties and exploited them to design faster algorithms for a number of classical graph problems. We complement the work of Brach et al. by showing that some well-studied random graph models exhibit both of the mentioned PLB properties and additionally also a power-law lower bound on the degree distribution (PLB-L). All three properties hold with high probability for Chung-Lu random graphs and Geometric Inhomogeneous Random Graphs, and almost surely for Hyperbolic Random Graphs. As a consequence, all results of Brach et al. also hold with high probability or almost surely for those random graph classes. In the second part of this work we study three classical NP-hard combinatorial optimization problems on PLB networks. It is known that on general graphs with maximum degree \(\Delta\), a greedy algorithm, which chooses nodes in the order of their degree, only achieves an \(\Omega(\ln \Delta)\)-approximation for Minimum Vertex Cover and Minimum Dominating Set, and an \(\Omega(\Delta)\)-approximation for Maximum Independent Set. We prove that the PLB-U property suffices for the greedy approach to achieve a constant-factor approximation for all three problems. We also show that all three combinatorial optimization problems are APX-complete even if all PLB properties hold; hence, a PTAS cannot be expected unless P = NP.
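The degree-based greedy algorithm referenced above can be sketched for Minimum Vertex Cover; recomputing degrees on the remaining graph is one natural reading of "chooses nodes in the order of their degree", and the function name is ours.

```python
def greedy_vertex_cover_by_degree(adj):
    """Degree-ordered greedy sketch: repeatedly take a vertex of maximum
    degree in the remaining graph and delete its incident edges.
    `adj` maps each vertex to the set of its neighbours (undirected).
    On general graphs this is only an Omega(ln Delta)-approximation;
    the abstract above states it is constant-factor on PLB-U graphs."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    cover = []
    while any(adj.values()):                     # edges remain
        v = max(adj, key=lambda u: len(adj[u]))  # current max-degree vertex
        cover.append(v)
        for u in adj[v]:                         # delete v's incident edges
            adj[u].discard(v)
        adj[v] = set()
    return cover

# Star graph: the hub alone is an optimal cover, and greedy picks it first.
cover = greedy_vertex_cover_by_degree({0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}})
```

Intuitively, on networks with power-law-bounded degrees the few high-degree hubs picked first already cover most edges, which is what drives the constant-factor guarantee.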

Tobias Friedrich, Kötzing, Timo, Krejca, Martin S., Sutton, Andrew M. Graceful Scaling on Uniform versus Steep-Tailed Noise. Parallel Problem Solving From Nature (PPSN) 2016: 761-770
Recently, different evolutionary algorithms (EAs) have been analyzed in noisy environments. The most frequently used noise model for this was additive posterior noise (noise added after the fitness evaluation) taken from a Gaussian distribution. In particular, for this setting it was shown that the \((\mu + 1)\) EA on OneMax does not scale gracefully (higher noise cannot efficiently be compensated for by higher \(\mu\)). In this paper we want to understand whether there is anything special about the Gaussian distribution which makes the \((\mu + 1)\) EA not scale gracefully. We keep the setting of posterior noise, but we look at other distributions. We see that for exponential tails the \((\mu + 1)\) EA on OneMax also does not scale gracefully, for similar reasons as in the case of Gaussian noise. On the other hand, for uniform distributions (as well as other, similar distributions) we see that the \((\mu + 1)\) EA on OneMax does scale gracefully, indicating the importance of the noise model.
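The posterior-noise setting used here (noise drawn and added after each fitness evaluation) can be expressed as a simple wrapper. This is our own illustrative sketch, not code from the paper:

```python
import random

def onemax(x):
    """OneMax: the number of ones in the bit string."""
    return sum(x)

def with_posterior_noise(fitness, sample_noise):
    """Posterior noise: a fresh noise term is added *after* each fitness
    evaluation. `sample_noise` is a zero-argument sampler."""
    return lambda x: fitness(x) + sample_noise()

random.seed(0)
gaussian = with_posterior_noise(onemax, lambda: random.gauss(0, 1.0))   # steep-tailed
uniform = with_posterior_noise(onemax, lambda: random.uniform(-1, 1))   # bounded support

x = [1, 0, 1, 1]
print(onemax(x))    # 3, the true fitness
print(gaussian(x))  # 3 plus a Gaussian perturbation
```

The paper's distinction is exactly between samplers like `gaussian` (unbounded, steep tails) and `uniform` (bounded), with only the latter allowing graceful scaling.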

Wanru Gao, Friedrich, Tobias, Neumann, Frank Fixed-Parameter Single Objective Search Heuristics for Minimum Vertex Cover. Parallel Problem Solving From Nature (PPSN) 2016: 740-750
We consider how well-known branching approaches for the classical minimum vertex cover problem can be turned into randomized initialization strategies with provable performance guarantees, and we evaluate these strategies experimentally. Furthermore, we show how these techniques can be built into local search components and analyze a basic local search variant that is similar to a state-of-the-art approach called NuMVC. Our experimental results for the two local search approaches show that making use of more complex branching strategies in the local search component can lead to better results on various benchmark graphs.

Tobias Friedrich, Kötzing, Timo, Krejca, Martin S., Nallaperuma, Samadhi, Neumann, Frank, Schirneck, Martin Fast Building Block Assembly by Majority Vote Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 661-668
Different works have shown how crossover can help with building block assembly. Typically, crossover might get lucky to select good building blocks from each parent, but these lucky choices are usually rare. In this work we consider a crossover operator which works on three parent individuals. In each component, the offspring inherits the value present in the majority of the parents; thus, we call this crossover operator majority vote. We show that, if good components are sufficiently prevalent in the individuals, majority vote creates an optimal individual with high probability. Furthermore, we show that this process can be amplified: as long as components are good independently and with probability at least \(1/2+\delta\), we require only \(O(\log 1/\delta + \log \log n)\) successive stages of majority vote to create an optimal individual with high probability! We show how this applies in two scenarios. The first scenario is the Jump test function. With sufficient diversity, we get an optimization time of \(O(n \log n)\) even for jump sizes as large as \(O(n^{1/2-\epsilon})\). Our second scenario is a family of vertex cover instances. Majority vote optimizes this family efficiently, while local searches fail and only highly specialized two-parent crossovers are successful.
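The three-parent operator described above is short enough to write out in full; a minimal sketch (ours) for bit strings:

```python
def majority_vote(p1, p2, p3):
    """Three-parent crossover: each offspring component takes the value held
    by the majority of the three parents in that position."""
    return [1 if a + b + c >= 2 else 0 for a, b, c in zip(p1, p2, p3)]

# Each parent misses a different bit; the majority vote recovers the optimum.
print(majority_vote([0, 1, 1], [1, 0, 1], [1, 1, 0]))  # [1, 1, 1]
```

This illustrates the amplification argument: a bit that is correct in each parent independently with probability above 1/2 is correct in the offspring with strictly higher probability.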

Duc-Cuong Dang, Friedrich, Tobias, Krejca, Martin S., Kötzing, Timo, Lehre, Per Kristian, Oliveto, Pietro S., Sudholt, Dirk, Sutton, Andrew Michael Escaping Local Optima with Diversity Mechanisms and Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 645-652
Population diversity is essential for the effective use of any crossover operator. We compare seven commonly used diversity mechanisms and prove rigorous run time bounds for the \((\mu+1)\) GA using uniform crossover on the fitness function \(Jump_k\). All previous results in this context only hold for unrealistically low crossover probability \(p_c = O(k/n)\), while we give analyses for the setting of constant \(p_c < 1\) in all but one case. Our bounds show a dependence on the problem size \(n\), the jump length \(k\), the population size \(\mu\), and the crossover probability \(p_c\). For the typical case of constant \(k > 2\) and constant \(p_c\), we can compare the resulting expected optimisation times for different diversity mechanisms assuming an optimal choice of \(\mu\): \(O(n^{k-1})\) for duplicate elimination/minimisation, \(O(n^2 \log n)\) for maximising the convex hull, \(O(n \log n)\) for deterministic crowding (assuming \(p_c = k/n\)), \(O(n \log n)\) for maximising the Hamming distance, \(O(n \log n)\) for fitness sharing, and \(O(n \log n)\) for the single-receiver island model. This proves a sizeable advantage of all variants of the \((\mu+1)\) GA compared to the (1+1) EA, which requires \(\Theta(n^k)\). In a short empirical study we confirm that the asymptotic differences can also be observed experimentally.
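For reference, the \(Jump_k\) test function used throughout these analyses can be written down directly; the sketch below follows its standard definition (our code, with a small worked example):

```python
def jump(x, k):
    """Jump_k: behaves like OneMax, except for a fitness valley of width k
    directly below the all-ones optimum, which mutation alone must 'jump'."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones  # inside the valley, fitness points away from the optimum

n, k = 6, 2
print(jump([1] * n, k))              # 8: the optimum scores n + k
print(jump([1] * (n - 1) + [0], k))  # 1: one bit short of optimal lands in the valley
```

The valley of width \(k\) is what makes mutation-only algorithms pay \(\Theta(n^k)\), while crossover with sufficient diversity can recombine complementary parents across it.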

Sanjay Jain, Kötzing, Timo, Stephan, Frank Enlarging learnable classes. Information and Computation 2016: 194-207
We study which classes of recursive functions satisfy that their union with any other explanatorily learnable class of recursive functions is again explanatorily learnable. We provide sufficient criteria for classes of recursive functions to satisfy this property and also investigate its effective variants. Furthermore, we study the question of which learners can be effectively extended to learn a larger class of functions. We solve an open problem by showing that there is no effective procedure which performs this task on all learners that do not learn a dense class of recursive functions. However, we show that there are two effective extension procedures such that each learner is extended by one of them.

Duc-Cuong Dang, Lehre, Per Kristian, Friedrich, Tobias, Kötzing, Timo, Krejca, Martin S., Oliveto, Pietro S., Sudholt, Dirk, Sutton, Andrew M. Emergence of Diversity and its Benefits for Crossover in Genetic Algorithms. Parallel Problem Solving From Nature (PPSN) 2016: 890-900
Population diversity is essential for avoiding premature convergence in Genetic Algorithms (GAs) and for the effective use of crossover. Yet the dynamics of how diversity emerges in populations are not well understood. We use rigorous runtime analysis to gain insight into population dynamics and GA performance for a standard \((\mu+1)\) GA and the \(Jump_k\) test function. By studying the stochastic process underlying the size of the largest collection of identical genotypes we show that the interplay of crossover followed by mutation may serve as a catalyst leading to a sudden burst of diversity. This leads to improvements of the expected optimisation time of order \(\Omega(n/ \log n)\) compared to mutationonly algorithms like the \((1+1)\) EA.

Thomas Bläsius, Friedrich, Tobias, Krohmer, Anton, Laue, Sören Efficient Embedding of Scale-Free Graphs in the Hyperbolic Plane. European Symposium on Algorithms (ESA) 2016: 16:1-16:18
Hyperbolic geometry appears to be intrinsic in many large real networks. We construct and implement a new maximum likelihood estimation algorithm that embeds scale-free graphs in the hyperbolic space. All previous embedding algorithms of this kind require a runtime of \(\Omega(n^2)\). Our algorithm achieves quasi-linear runtime, which makes it the first algorithm that can embed networks with hundreds of thousands of nodes in less than one hour. We demonstrate the performance of our algorithm on artificial and real networks. In all typical metrics, like log-likelihood and greedy routing, our algorithm discovers embeddings that are very close to the ground truth.

Tobias Friedrich, Kötzing, Timo, Krejca, Martin S. EDAs cannot be Balanced and Stable. Genetic and Evolutionary Computation Conference (GECCO) 2016: 1139-1146
Estimation of Distribution Algorithms (EDAs) work by iteratively updating a distribution over the search space with the help of samples from each iteration. Up to now, theoretical analyses of EDAs are scarce, and the existing ones present run time results for specific EDAs. We propose a new framework for EDAs that captures the idea of several known optimizers, including PBIL, UMDA, \(\lambda\)-MMAS-IB, cGA, and the \((1,\lambda)\) EA. Our focus is on analyzing two core features of EDAs: a balanced EDA is sensitive to signals in the fitness; a stable EDA remains uncommitted under a biasless fitness function. We prove that no EDA can be both balanced and stable. The LeadingOnes function is a prime example where, at the beginning of the optimization, the fitness function shows no bias for many bits. Since many well-known EDAs are balanced and thus not stable, they are not well suited to optimize LeadingOnes. We give a stable EDA which optimizes LeadingOnes within a time of \(O(n \log n)\).

Andreas Göbel, Goldberg, Leslie Ann, Richerby, David Counting Homomorphisms to Square-Free Graphs, Modulo 2. Transactions on Computation Theory 2016: 12:1-12:29

Timo Kötzing Concentration of First Hitting Times Under Additive Drift. Algorithmica 2016: 490-506
Recent advances in drift analysis have given us better and better tools for understanding random processes, including the run time of randomized search heuristics. In the setting of multiplicative drift we do not only have excellent bounds on the expected run time, but also more general results showing the concentration of the run time. In this paper we investigate the setting of additive drift under the assumption of strong concentration of the "step size" of the process. Under sufficiently strong drift towards the goal we show a strong concentration of the hitting time. In contrast to this, we show that in the presence of small drift, a Gambler's-Ruin-like behavior of the process overrides the influence of the drift. Finally, in the presence of sufficiently strong negative drift the hitting time is superpolynomial with high probability; this corresponds to the so-called negative drift theorem, for which we give new variants.
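As a toy illustration of additive drift (our own sketch, not from the paper): a \(\pm 1\) random walk that steps toward the goal with probability \((1+\delta)/2\) has additive drift \(\delta\), and under strong drift its hitting time concentrates around \(X_0/\delta\).

```python
import random

def hitting_time(start, drift, rng, max_steps=10**6):
    """First time a +/-1 random walk with additive drift `drift` toward 0
    (it steps down with probability (1 + drift) / 2) reaches 0."""
    pos, t = start, 0
    while pos > 0 and t < max_steps:
        pos += -1 if rng.random() < (1 + drift) / 2 else 1
        t += 1
    return t

rng = random.Random(42)
times = [hitting_time(100, 0.5, rng) for _ in range(20)]
print(min(times), max(times))  # concentrated around 100 / 0.5 = 200 steps
```

With small `drift` the same walk instead shows the Gambler's-Ruin-like spread discussed in the abstract.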

Tobias Friedrich, Kötzing, Timo, Quinzan, Francesco, Sutton, Andrew M. Ant Colony Optimization Beats Resampling on Noisy Functions. Genetic and Evolutionary Computation Conference (GECCO) 2016: 3-4
Despite the pervasiveness of noise in real-world optimization, there is little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques such as statistical resampling. Ant Colony Optimization (ACO) algorithms are claimed to be particularly well-suited to dynamic and noisy problems, even without explicit noise-handling techniques. In this work, we empirically investigate the trade-offs between resampling and the noise-handling abilities of ACO algorithms. Our main focus is to locate the point where resampling costs more than it is worth.
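Statistical resampling, the explicit noise-handling baseline mentioned above, amounts to averaging repeated evaluations; a minimal sketch (names and parameters are ours):

```python
import random

def resampled(noisy_fitness, m):
    """Resampling: evaluate the noisy fitness m times and return the mean,
    cutting the noise variance by a factor of m at m times the cost."""
    return lambda x: sum(noisy_fitness(x) for _ in range(m)) / m

random.seed(0)
noisy_onemax = lambda x: sum(x) + random.gauss(0, 2.0)
stable = resampled(noisy_onemax, 400)

x = [1, 1, 0, 1]
print(stable(x))  # close to the true fitness 3 (std. dev. 2 / sqrt(400) = 0.1)
```

The factor-`m` evaluation cost is exactly the overhead that the paper weighs against ACO's implicit noise tolerance.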

Andreas Galanis, Göbel, Andreas, Goldberg, Leslie Ann, Lapinskas, John, Richerby, David Amplifiers for the Moran Process. International Colloquium on Automata, Languages and Programming (ICALP) 2016: 62:1-62:13
The Moran process, as studied by Lieberman, Hauert and Nowak, is a randomised algorithm modelling the spread of genetic mutations in populations. The algorithm runs on an underlying graph where individuals correspond to vertices. Initially, one vertex (chosen uniformly at random) possesses a mutation, with fitness \(r > 1\). All other individuals have fitness 1. During each step of the algorithm, an individual is chosen with probability proportional to its fitness, and its state (mutant or non-mutant) is passed on to an out-neighbour which is chosen uniformly at random. If the underlying graph is strongly connected, then the algorithm will eventually reach fixation, in which all individuals are mutants, or extinction, in which no individuals are mutants. An infinite family of directed graphs is said to be strongly amplifying if, for every \(r > 1\), the extinction probability tends to 0 as the number of vertices increases. Strong amplification is a rather surprising property: it means that in such graphs, the fixation probability of a uniformly placed initial mutant tends to 1 even though the initial mutant only has a fixed selective advantage of \(r > 1\) (independent of \(n\)). The name "strongly amplifying" comes from the fact that this selective advantage is "amplified". Strong amplifiers have received quite a bit of attention, and Lieberman et al. proposed two potentially strongly amplifying families: superstars and metafunnels. Heuristic arguments have been published, arguing that there are infinite families of superstars that are strongly amplifying. The same has been claimed for metafunnels. We give the first rigorous proof that there is an infinite family of directed graphs that is strongly amplifying. We call the graphs in the family "megastars". When the algorithm is run on an \(n\)-vertex graph in this family, starting with a uniformly chosen mutant, the extinction probability is roughly \(n^{-1/2}\) (up to logarithmic factors). We prove that all infinite families of superstars and metafunnels have larger extinction probabilities (as a function of \(n\)). Finally, we prove that our analysis of megastars is fairly tight: there is no infinite family of megastars such that the Moran algorithm gives a smaller extinction probability (up to logarithmic factors). Also, we provide a counterexample which clarifies the literature concerning the isothermal theorem of Lieberman et al. A full version [Galanis/Göbel/Goldberg/Lapinskas/Richerby, Preprint] containing detailed proofs is available at http://arxiv.org/abs/1512.05632. Theorem numbering here matches the full version.
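A single step of the Moran process as defined in this abstract can be sketched directly (illustrative code, ours; the graph and parameters below are arbitrary):

```python
import random

def moran_step(state, out_nbrs, r, rng):
    """One step: choose an individual with probability proportional to its
    fitness (r for mutants, 1 for non-mutants) and copy its state onto a
    uniformly random out-neighbour."""
    nodes = list(out_nbrs)
    weights = [r if state[v] else 1.0 for v in nodes]
    v = rng.choices(nodes, weights=weights)[0]
    state[rng.choice(out_nbrs[v])] = state[v]

# Complete directed graph on 4 vertices; vertex 0 starts as the mutant (r = 2).
rng = random.Random(7)
out_nbrs = {v: [u for u in range(4) if u != v] for v in range(4)}
state = {v: v == 0 for v in range(4)}
for _ in range(1000):
    moran_step(state, out_nbrs, 2.0, rng)
print(all(state.values()) or not any(state.values()))  # True once absorbed
```

On a strongly connected graph the process is absorbed at fixation or extinction; the amplifier question is how the graph's shape tilts the probability between the two.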

Thomas Bläsius, Rutter, Ignaz A new perspective on clustered planarity as a combinatorial embedding problem. Theoretical Computer Science 2016: 306-315
The clustered planarity problem (c-planarity) asks whether a hierarchically clustered graph admits a planar drawing such that the clusters can be nicely represented by regions. We introduce the cd-tree data structure and give a new characterization of c-planarity. It leads to efficient algorithms for c-planarity testing in the following cases: (i) every cluster and every co-cluster (complement of a cluster) has at most two connected components; (ii) every cluster has at most five outgoing edges. Moreover, the cd-tree reveals interesting connections between c-planarity and planarity with constraints on the order of edges around vertices. On the one hand, this gives rise to a number of new open problems related to c-planarity; on the other hand, it provides a new perspective on previous results.

Timo Kötzing, Palenta, Raphaela A map of update constraints in inductive inference. Theoretical Computer Science 2016: 4-24
We investigate how different learning restrictions reduce learning power and how the different restrictions relate to one another. We give a complete map for nine different restrictions, both for the case of complete-information learning and for set-driven learning. This completes the picture for these well-studied delayable learning restrictions. A further insight is gained by different characterizations of conservative learning in terms of variants of cautious learning. Our analyses greatly benefit from general theorems we give, for example showing that learners with exclusively delayable restrictions can always be assumed total.

Tobias Friedrich, Katzmann, Maximilian, Krohmer, Anton Unbounded Discrepancy of Deterministic Random Walks on Grids. International Symposium on Algorithms and Computation (ISAAC) 2015: 212-222
Random walks are frequently used in randomized algorithms. We study a derandomized variant of a random walk on graphs, called the rotor-router model. In this model, instead of distributing tokens randomly, each vertex serves its neighbors in a fixed deterministic order. For most setups, both processes behave remarkably similarly: starting with the same initial configuration, the number of tokens in the rotor-router model deviates only slightly from the expected number of tokens on the corresponding vertex in the random walk model. The maximal difference over all vertices and all times is called the single vertex discrepancy. Cooper and Spencer (2006) showed that on \(\mathbb{Z}^d\) the single vertex discrepancy is only a constant \(c_d\). Other authors also determined the precise value of \(c_d\) for \(d=1,2\). All these results, however, assume that initially all tokens are placed on only one partition of the bipartite graph \(\mathbb{Z}^d\). We show that this assumption is crucial by proving that otherwise the single vertex discrepancy can become arbitrarily large. For all dimensions \(d \ge 1\) and arbitrary discrepancies \(\ell \ge 0\), we construct configurations that reach a discrepancy of at least \(\ell\).
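The rotor-router mechanism is simple to state in code. Here is our own sketch of one parallel round on the cycle \(\mathbb{Z}_n\), where each vertex has a two-state rotor alternating between its left and right neighbor:

```python
def rotor_router_round(tokens, rotors):
    """One parallel round on the cycle Z_n: every vertex deals out each of
    its tokens to its left/right neighbors in a fixed alternating order
    instead of choosing a neighbor at random."""
    n = len(tokens)
    new = [0] * n
    for v in range(n):
        for _ in range(tokens[v]):
            step = -1 if rotors[v] == 0 else 1  # rotor points left, then right
            new[(v + step) % n] += 1
            rotors[v] ^= 1                      # advance the rotor
    return new

tokens, rotors = [4, 0, 0, 0], [0, 0, 0, 0]
tokens = rotor_router_round(tokens, rotors)
print(tokens)  # [0, 2, 0, 2]: the 4 tokens are split evenly, with no randomness
```

This even splitting is what keeps the rotor-router close to the expected behavior of the random walk; the paper's point is that the closeness breaks down once tokens start on both partitions of the grid.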

Benjamin Doerr, Doerr, Carola, Kötzing, Timo Unbiased Black-Box Complexities of Jump Functions. Evolutionary Computation 2015: 641-670
We analyze the unbiased black-box complexity of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution. Among other results, we show that when the jump size is \((1/2 - \epsilon)n\), that is, when only a small constant fraction of the fitness values is visible, the unbiased black-box complexities for arities 3 and higher are of the same order as those for the simple OneMax function. Even for the extreme jump function, in which all but the two fitness values \(n/2\) and \(n\) are blanked out, polynomial-time mutation-based (i.e., unary unbiased) black-box optimization algorithms exist. This is quite surprising given that for the extreme jump function almost the whole search space (all but a \(\Theta(n^{-1/2})\) fraction) is a plateau of constant fitness. To prove these results, we introduce new tools for the analysis of unbiased black-box complexities, for example, selecting the new parent individual not only by comparing the fitnesses of the competing search points, but also by taking into account the (empirical) expected fitnesses of their offspring.

Karl Bringmann, Friedrich, Tobias, Hoefer, Martin, Rothenberger, Ralf, Sauerwald, Thomas Ultra-Fast Load Balancing on Scale-Free Networks. International Colloquium on Automata, Languages and Programming (ICALP) 2015: 516-527
The performance of large distributed systems crucially depends on efficiently balancing their load. This has motivated a large amount of theoretical research on how an imbalanced load vector can be smoothed with local algorithms. For technical reasons, the vast majority of previous work focuses on regular (or almost regular) graphs, including symmetric topologies such as grids and hypercubes, and ignores the fact that large networks are often highly heterogeneous. We model large scale-free networks by Chung-Lu random graphs and analyze a simple local algorithm for iterative load balancing. On \(n\)-node graphs our distributed algorithm balances the load within \(O((\log \log n)^2)\) steps. It does not need to know the exponent \(\beta \in (2,3)\) of the power-law degree distribution or the weights \(w_i\) of the graph model. To the best of our knowledge, this is the first result which shows that load balancing can be done in double-logarithmic time on realistic graph classes.

Tiago Paixão, Badkobeh, Golnaz, Barton, Nick H., Çörüş, Doğan, Dang, Duc-Cuong, Friedrich, Tobias, Lehre, Per Kristian, Sudholt, Dirk, Sutton, Andrew, Trubenová, Barbora Toward a unifying framework for evolutionary processes. Journal of Theoretical Biology 2015: 28-43
The theories of population genetics and evolutionary computation have been evolving separately for nearly 30 years. Many results have been independently obtained in both fields, and many others are unique to their respective field. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its several components in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify candidates with the most potential for the translation of results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results for both fields.

Tobias Friedrich, Kötzing, Timo, Krejca, Martin S., Sutton, Andrew M. The Benefit of Recombination in Noisy Evolutionary Search. International Symposium on Algorithms and Computation (ISAAC) 2015: 140-150
Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings metaheuristics are a popular choice for deriving good optimization algorithms, most notably evolutionary algorithms which mimic evolution in nature. Empirical evidence suggests that genetic recombination is useful in uncertain environments because it can stabilize a noisy fitness signal. With this paper we want to support this claim with mathematical rigor. The setting we consider is that of noisy optimization. We study a simple noisy fitness function that is derived by adding Gaussian noise to a monotone function. First, we show that a classical evolutionary algorithm that does not employ sexual recombination (the \((\mu+1)\) EA) cannot handle the noise efficiently, regardless of the population size. Then we show that an evolutionary algorithm which does employ sexual recombination (the compact Genetic Algorithm, cGA for short) can handle the noise using a graceful scaling of the population.
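The cGA mentioned here maintains a frequency vector rather than an explicit population; a minimal sketch of the standard update (our code, with the usual \(1/n\) borders and illustrative parameter choices) on noiseless OneMax:

```python
import random

def cga(fitness, n, pop_size, steps, seed=0):
    """Compact GA sketch: sample two individuals from the frequency vector p,
    let them compete, and shift each differing frequency by 1/pop_size
    toward the winner's bit value (clamped to the borders [1/n, 1 - 1/n])."""
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(steps):
        x = [int(rng.random() < f) for f in p]
        y = [int(rng.random() < f) for f in p]
        if fitness(y) > fitness(x):
            x, y = y, x  # x now denotes the winner
        for i in range(n):
            if x[i] != y[i]:
                delta = 1 / pop_size if x[i] == 1 else -1 / pop_size
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + delta))
    return p

# On noiseless OneMax the frequencies drift toward the upper border.
p = cga(sum, n=10, pop_size=40, steps=5000)
print(round(sum(p) / len(p), 2))
```

The `1/pop_size` step size is the "population" that the paper scales up to absorb noise: smaller steps average out more noisy comparisons before a frequency commits.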

Benjamin Doerr, Doerr, Carola, Kötzing, Timo Solving Problems with Unknown Solution Length at (Almost) No Extra Cost. Genetic and Evolutionary Computation Conference (GECCO) 2015: 831-838
Most research in the theory of evolutionary computation assumes that the problem at hand has a fixed problem size. This assumption does not always apply to realworld optimization challenges, where the length of an optimal solution may be unknown a priori. Following up on previous work of Cathabard, Lehre, and Yao [FOGA 2011] we analyze variants of the (1+1) evolutionary algorithm for problems with unknown solution length. For their setting, in which the solution length is sampled from a geometric distribution, we provide mutation rates that yield an expected optimization time that is of the same order as that of the (1+1) EA knowing the solution length. We then show that almost the same run times can be achieved even if no a priori information on the solution length is available. Finally, we provide mutation rates suitable for settings in which neither the solution length nor the positions of the relevant bits are known. Again we obtain almost optimal run times for the OneMax and LeadingOnes test functions, thus solving an open problem from Cathabard et al.

Tobias Friedrich, Wagner, Markus Seeding the initial population of multi-objective evolutionary algorithms: A computational study. Applied Soft Computing 2015: 223-230
Most experimental studies initialize the population of evolutionary algorithms with random genotypes. In practice, however, optimizers are typically seeded with good candidate solutions either previously known or created according to some problemspecific method. This seeding has been studied extensively for singleobjective problems. For multiobjective problems, however, very little literature is available on the approaches to seeding and their individual benefits and disadvantages. In this article, we are trying to narrow this gap via a comprehensive computational study on common realvalued test functions. We investigate the effect of two seeding techniques for five algorithms on 48 optimization problems with 2, 3, 4, 6, and 8 objectives. We observe that some functions (e.g., DTLZ4 and the LZ family) benefit significantly from seeding, while others (e.g., WFG) profit less. The advantage of seeding also depends on the examined algorithm.

Tobias Friedrich, Kötzing, Timo, Krejca, Martin S., Sutton, Andrew M. Robustness of Ant Colony Optimization to Noise. Genetic and Evolutionary Computation Conference (GECCO) 2015: 17-24
Recently, Ant Colony Optimization (ACO) algorithms have been proven to be efficient in uncertain environments, such as noisy or dynamically changing fitness functions. Most of these analyses focus on combinatorial problems, such as path finding. We analyze an ACO algorithm in a setting where we try to optimize the simple OneMax test function, but with additive posterior noise sampled from a Gaussian distribution. Without noise, the classical \((\mu+1)\) EA outperforms any ACO algorithm, with smaller \(\mu\) being better; however, with large noise, the \((\mu+1)\) EA fails, even for high values of \(\mu\) (which are known to help against small noise). In this paper we show that ACO is able to deal with arbitrarily large noise in a graceful manner; that is, as long as the evaporation factor \(\rho\) is small enough, depending on the parameter \(\delta^2\) of the noise and the dimension \(n\) of the search space (namely \(\rho = o(1/(n(n + \delta \log n)^2 \log n))\)), optimization will be successful.

Petra Berenbrink, Cooper, Colin, Friedetzky, Tom, Friedrich, Tobias, Sauerwald, Thomas Randomized diffusion for indivisible loads. Journal of Computer and System Sciences 2015: 159-185
We present a new randomized diffusion-based algorithm for balancing indivisible tasks (tokens) on a network. Our aim is to minimize the discrepancy between the maximum and minimum load. The algorithm works as follows. Every vertex distributes its tokens as evenly as possible among its neighbors and itself. If this is not possible without splitting some tokens, the vertex redistributes its excess tokens among all its neighbors randomly (without replacement). In this paper we prove several upper bounds on the load discrepancy for general networks. These bounds depend on expansion properties of the network, that is, the second largest eigenvalue, and a novel measure which we refer to as refined local divergence. We then apply these general bounds to obtain results for some specific networks. For constant-degree expanders and torus graphs, these yield exponential improvements on the discrepancy bounds compared to the algorithm of Rabani, Sinclair, and Wanka. For hypercubes we obtain a polynomial improvement. In contrast to previous papers, our algorithm is vertex-based and not edge-based. This means excess tokens are assigned to vertices instead of to edges, and the vertex reallocates all of its excess tokens by itself. This approach avoids nodes having "negative loads", but causes additional dependencies for the analysis.
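The vertex-based rebalancing round described above can be sketched as follows (illustrative code; the dictionary-based graph representation is ours):

```python
import random

def diffusion_round(load, nbrs, rng):
    """One round: each vertex splits its tokens as evenly as possible among
    itself and its neighbors; the few remaining excess tokens are dealt to
    distinct neighbors chosen at random, without replacement."""
    new = {v: 0 for v in load}
    for v in load:
        share, excess = divmod(load[v], len(nbrs[v]) + 1)
        new[v] += share
        for u in nbrs[v]:
            new[u] += share
        for u in rng.sample(nbrs[v], excess):  # excess is at most deg(v)
            new[u] += 1
    return new

rng = random.Random(1)
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # a 4-cycle
load = {0: 8, 1: 0, 2: 0, 3: 0}
load = diffusion_round(load, nbrs, rng)
print(sum(load.values()))  # 8: tokens are indivisible and conserved
```

Because whole tokens are dealt from the vertex itself, no vertex ever goes below zero, which is exactly the "no negative loads" property noted in the abstract.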

Anh Quang Nguyen, Sutton, Andrew M., Neumann, Frank Population size matters: Rigorous runtime results for maximizing the hypervolume indicator. Theoretical Computer Science 2015: 24-36
Using the hypervolume indicator to guide the search of evolutionary multi-objective algorithms has become very popular in recent years. We contribute to the theoretical understanding of these algorithms by carrying out rigorous runtime analyses. We consider multi-objective variants of the problems OneMax and LeadingOnes, called OMM and LOTZ, respectively, and investigate hypervolume-based algorithms with population sizes that do not allow coverage of the entire Pareto front. Our results show that LOTZ is easier to optimize than OMM for hypervolume-based evolutionary multi-objective algorithms, which is contrary to the results on their single-objective variants and the well-studied (1+1) EA.

Md. Jawaherul Alam, Bläsius, Thomas, Rutter, Ignaz, Ueckerdt, Torsten, Wolff, Alexander Pixel and Voxel Representations of Graphs. Graph Drawing (GD) 2015: 472-486
We study contact representations for graphs, which we call pixel representations in 2D and voxel representations in 3D. Our representations are based on the unit square grid, whose cells we call pixels in 2D and voxels in 3D. Two pixels are adjacent if they share an edge, two voxels if they share a face. We call a connected set of pixels or voxels a blob. Given a graph, we represent its vertices by disjoint blobs such that two blobs contain adjacent pixels or voxels if and only if the corresponding vertices are adjacent. We are interested in the size of a representation, which is the number of pixels or voxels it consists of. We first show that finding minimum-size representations is NP-complete. Then, we bound representation sizes needed for certain graph classes. In 2D, we show that, for \(k\)-outerplanar graphs with \(n\) vertices, \(\Theta(kn)\) pixels are always sufficient and sometimes necessary. In particular, outerplanar graphs can be represented with a linear number of pixels, whereas general planar graphs sometimes need a quadratic number. In 3D, \(\Theta(n^2)\) voxels are always sufficient and sometimes necessary for any \(n\)-vertex graph. We improve this bound to \(\Theta(n \cdot \tau)\) for graphs of treewidth \(\tau\) and to \(O((g+1)^2 n \log^2 n)\) for graphs of genus \(g\). In particular, planar graphs admit representations with \(O(n \log^2 n)\) voxels.

Tobias Friedrich, Krohmer, Anton Parameterized clique on inhomogeneous random graphs. Discrete Applied Mathematics 2015: 130-138
Finding cliques in graphs is a classical problem which is, in general, NP-hard and parameterized intractable. In typical applications like social networks or biological networks, however, the considered graphs are scale-free, i.e., their degree sequence follows a power law. Their specific structure can be algorithmically exploited and makes it possible to solve clique much more efficiently. We prove that on inhomogeneous random graphs with \(n\) nodes and power-law exponent \(\beta\), cliques of size \(k\) can be found in time \(O(n)\) for \(\beta \ge 3\) and in time \(O(n e^{k^4})\) for \(2 < \beta < 3\).

Ankit Chauhan, Rao, B. V. Raghavendra Parameterized Analogues of Probabilistic Computation. Conference on Algorithms and Discrete Applied Mathematics (CALDAM) 2015: 181-192
We study structural aspects of randomized parameterized computation. We introduce a new class W[P]-PFPT as a natural parameterized analogue of PP. Our definition uses the machine-based characterization of the parameterized complexity class W[P] obtained by Chen et al. [TCS 2005]. We translate most of the structural properties and characterizations of the class PP to the new class W[P]-PFPT. We study a parameterization of the polynomial identity testing problem based on the degree of the polynomial computed by the arithmetic circuit. We obtain a parameterized analogue of the well-known Schwartz-Zippel lemma [Schwartz, JACM 80 and Zippel, EUROSAM 79]. Additionally, we introduce a parameterized variant of the permanent and prove its #W[1]-completeness.

Thomas Bläsius, Lehmann, Sebastian, Rutter, Ignaz Orthogonal Graph Drawing with Inflexible Edges. Conference on Algorithms and Complexity (CIAC) 2015: 61-73
We consider the problem of creating plane orthogonal drawings of 4-planar graphs (planar graphs with maximum degree 4) with constraints on the number of bends per edge. More precisely, we have a flexibility function assigning to each edge \(e\) a natural number \(flex(e)\), its flexibility. The problem FlexDraw asks whether there exists an orthogonal drawing such that each edge \(e\) has at most \(flex(e)\) bends. It is known that FlexDraw is NP-hard if \(flex(e) = 0\) for every edge \(e\) [7]. On the other hand, FlexDraw can be solved efficiently if \(flex(e) \ge 1\) [2] and is trivial if \(flex(e) \ge 2\) [1] for every edge \(e\). To close the gap between the NP-hardness for \(flex(e) = 0\) and the efficient algorithm for \(flex(e) \ge 1\), we investigate the computational complexity of FlexDraw in the case where only few edges are inflexible (i.e., have flexibility 0). We show that for any \(\epsilon > 0\), FlexDraw is NP-complete for instances with \(O(n^\epsilon)\) inflexible edges with pairwise distance \(\Omega(n^{1-\epsilon})\) (including the case where they induce a matching). On the other hand, we give an FPT algorithm with running time \(O(2^k \cdot n \cdot T_{flow}(n))\), where \(T_{flow}(n)\) is the time necessary to compute a maximum flow in a planar flow network with multiple sources and sinks, and \(k\) is the number of inflexible edges having at least one endpoint of degree 4.

Tobias Friedrich, Hercher, Christian On the kernel size of clique cover reductions for random intersection graphs. Journal of Discrete Algorithms 2015: 128-136
Covering all edges of a graph by a minimum number of cliques is a well-known NP-hard problem. For the parameter \(k\) being the maximal number of cliques to be used, the problem becomes fixed-parameter tractable. However, assuming the Exponential Time Hypothesis, there is no kernel of subexponential size in the worst case. We study the average kernel size for random intersection graphs with \(n\) vertices, edge probability \(p\), and clique covers of size \(k\). We consider the well-known set of reduction rules of Gramm, Guo, Hüffner, and Niedermeier (2009) and show that with high probability they reduce the graph completely if \(p\) is bounded away from 1 and \(k < c \log n\) for some constant \(c > 0\). This shows that for large probabilistic graph classes like random intersection graphs the expected kernel size can be substantially smaller than the known exponential worst-case bounds.

Tobias Friedrich, Krohmer, Anton On the Diameter of Hyperbolic Random Graphs. International Colloquium on Automata, Languages and Programming (ICALP) 2015: 614-625
Large real-world networks are typically scale-free. Recent research has shown that such graphs are described best in a geometric space. More precisely, the internet can be mapped to a hyperbolic space such that geometric greedy routing performs close to optimal (Boguna, Papadopoulos, and Krioukov. Nature Communications, 1:62, 2010). This observation pushed the interest in hyperbolic networks as a natural model for scale-free networks. Hyperbolic random graphs follow a power-law degree distribution with controllable exponent \(\beta\) and show high clustering (Gugelmann, Panagiotou, and Peter. ICALP, pp. 573-585, 2012). For understanding the structure of the resulting graphs and for analyzing the behavior of network algorithms, the next question is bounding the size of the diameter. The only known explicit bound is \(O((\log n)^{32/((3-\beta)(5-\beta))})\) (Kiwi and Mitsche. ANALCO, pp. 26-39, 2015). We present two much simpler proofs for an improved upper bound of \(O((\log n)^{2/(3-\beta)})\) and a lower bound of \(\Omega(\log n)\).

Nikolaos Fountoulakis, Friedrich, Tobias, Hermelin, Danny On the average-case complexity of parameterized clique. Theoretical Computer Science 2015: 18-29
The \(k\)-Clique problem is a fundamental combinatorial problem that plays a prominent role in classical as well as in parameterized complexity theory. It is among the most well-known NP-complete and W[1]-complete problems. Moreover, its average-case complexity analysis has created a long thread of research already since the 1970s. Here, we continue this line of research by studying the dependence of the average-case complexity of the \(k\)-Clique problem on the parameter \(k\). To this end, we define two natural parameterized analogues of efficient average-case algorithms. We then show that \(k\)-Clique admits both analogues for Erdős-Rényi random graphs of arbitrary density. We also show that \(k\)-Clique is unlikely to admit either of these analogues for some specific computable input distribution.

Martin Schirneck On Restrictions in Computational Language Learning. 2015
In 1990 Fulk proved that partial set-drivenness (rearrangement-independence) does not weaken the power of unrestricted computational language learning. The question arises whether this result still holds when paired with various learning restrictions. We investigate the influence of two main categories of such restrictions, namely content-based and delayable ones. An adaptation of Fulk's theorem is verified for content-based learning and some delayable restrictions regarding U-shaped learning. On the other hand, we give an example criterion of delayable learning (explanatory learning from text by a strongly monotone scientist) for which partial set-drivenness does reduce the learning power. Additionally, the interdependence of these restrictions with several other interaction operators and success criteria is explored.

Thomas Bläsius New Approaches to Classic Graph-Embedding Problems - Orthogonal Drawings & Constrained Planarity. 2015
Drawings of graphs are often used to represent a given data set in a human-readable way. In this thesis, we consider different classic algorithmic problems that arise when automatically generating graph drawings. More specifically, we solve some open problems in the context of orthogonal drawings and advance the current state of research on the problems clustered planarity and simultaneous planarity.

Andreas Cord-Landwehr, Lenzner, Pascal Network Creation Games: Think Global - Act Local. Mathematical Foundations of Computer Science (MFCS) 2015: 248-260
We investigate a non-cooperative game-theoretic model for the formation of communication networks by selfish agents. Each agent aims for a central position at minimum cost for creating edges. In particular, the general model (Fabrikant et al., PODC'03) became popular for studying the structure of the Internet or social networks. Despite its significance, locality in this game was first studied only recently (Bilò et al., SPAA'14), where a worst-case locality model was presented that came with a high efficiency loss in terms of the quality of equilibria. Our main contribution is a new and more optimistic view on locality: agents are limited in their knowledge and actions to their local view ranges, but can probe different strategies and finally choose the best one. We study the influence of our locality notion on the hardness of computing best responses, on convergence to equilibria, and on the quality of equilibria. Moreover, we compare the strength of local versus non-local strategy changes. Our results address the gap between the original model and the worst-case locality variant. On the bright side, our efficiency results are in line with observations from the original model, yet we have a non-constant lower bound on the Price of Anarchy.

Tobias Friedrich, Neumann, Frank, Thyssen, Christian Multiplicative Approximations, Optimal Hypervolume Distributions, and the Choice of the Reference Point. Evolutionary Computation 2015: 131-159
Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems, as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing \(\mu\) points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations.

Tobias Friedrich, Neumann, Frank Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms. Evolutionary Computation 2015: 543-558
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function \(f\) under a given set of constraints. In this paper, we investigate the runtime of a simple single-objective evolutionary algorithm called the (1+1) EA and a multi-objective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that GSEMO achieves a \((1 - 1/e)\)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of \(k \ge 2\) matroids, we show that the (1+1) EA achieves a \((1 + k/\delta)\)-approximation in expected polynomial time for any constant \(\delta > 0\). Turning to non-monotone symmetric submodular functions with \(k \ge 1\) matroid intersection constraints, we show that GSEMO achieves a \(1/((k+2)(1+\epsilon))\)-approximation in expected time \(O(n^{k+6} \log(n)/\epsilon)\).

Benjamin Doerr, Neumann, Frank, Sutton, Andrew M. Improved Runtime Bounds for the (1+1) EA on Random 3-CNF Formulas Based on Fitness-Distance Correlation. Genetic and Evolutionary Computation Conference (GECCO) 2015: 1415-1422
With this paper, we contribute to the theoretical understanding of randomized search heuristics by investigating their behavior on random 3-SAT instances. We improve the results for the (1+1) EA obtained by Sutton and Neumann [PPSN 2014, 942-951] in three ways. First, we reduce the upper bound by a linear factor and prove that the (1+1) EA obtains optimal solutions in time \(O(n \log n)\) with high probability on asymptotically almost all high-density satisfiable 3-CNF formulas. Second, we extend the range of densities for which this bound holds to satisfiable formulas of at least logarithmic density. Finally, we complement these mathematical results with numerical experiments that summarize the behavior of the (1+1) EA on formulas along the density spectrum and suggest that the implicit constants hidden in our bounds are low. Our proofs are based on analyzing the run of the algorithm by establishing a fitness-distance correlation. This approach might be of independent interest, and we are optimistic that it is useful for the analysis of randomized search heuristics in various other settings. To our knowledge, this is the first time that fitness-distance correlation is explicitly used to rigorously prove a performance statement for an evolutionary algorithm.

Tobias Friedrich, He, Jun, Jansen, Thomas, Moraglio, Alberto Genetic and Evolutionary Computation. Theoretical Computer Science 2015: 1-2

Francisco Chicano, Sutton, Andrew M., Whitley, L. Darrell, Alba, Enrique Fitness Probability Distribution of Bit-Flip Mutation. Evolutionary Computation 2015: 217-248
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in \(p\), the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem, OneMax, and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
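For OneMax specifically, the distribution in question can also be obtained by direct enumeration rather than via the Krawtchouk-polynomial machinery of the paper. The following sketch (our own illustration, with hypothetical parameter choices) computes the exact offspring-fitness distribution under uniform bit-flip mutation:

```python
from math import comb

def onemax_mutation_pmf(n, f, p):
    """Exact offspring-fitness distribution on OneMax after uniform
    bit-flip mutation: each bit flips independently with probability p.
    With a one-bits flipped down and b zero-bits flipped up, the
    offspring fitness is f - a + b; summing binomial weights gives a
    pmf whose entries are visibly polynomials in p."""
    pmf = [0.0] * (n + 1)
    for a in range(f + 1):                 # ones flipped to zero
        pa = comb(f, a) * p**a * (1 - p)**(f - a)
        for b in range(n - f + 1):         # zeros flipped to one
            pb = comb(n - f, b) * p**b * (1 - p)**(n - f - b)
            pmf[f - a + b] += pa * pb
    return pmf

# parent with 6 ones out of n = 10 bits, mutation rate p = 0.1
pmf = onemax_mutation_pmf(10, 6, 0.1)
```

A quick sanity check: the mean of this distribution is \(f(1-p) + (n-f)p\), here \(6 \cdot 0.9 + 4 \cdot 0.1 = 5.8\).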

Dominik D. Freydenberger, Kötzing, Timo Fast Learning of Restricted Regular Expressions and DTDs. Theory of Computing Systems 2015: 1114-1158
We study the problem of generalizing from a finite sample to a language taken from a predefined language class. The two language classes we consider are subsets of the regular languages and have significance in the specification of XML documents (the classes corresponding to so-called chain regular expressions, Chares, and to single occurrence regular expressions, Sores). The previous literature gave a number of algorithms for generalizing to Sores, providing a trade-off between quality of the solution and speed. Furthermore, a fast but non-optimal algorithm for generalizing to Chares is known. For each of the two language classes we give an efficient algorithm returning a minimal generalization from the given finite sample to an element of the fixed language class; such generalizations are called descriptive. In this sense, both our algorithms are optimal.

Markus Wagner, Bringmann, Karl, Friedrich, Tobias, Neumann, Frank Efficient optimization of many objectives by approximation-guided evolution. European Journal of Operational Research 2015: 465-479
Multi-objective optimization problems arise frequently in applications, but can often only be solved approximately by heuristic approaches. Evolutionary algorithms have been widely used to tackle multi-objective problems. These algorithms use different measures to ensure diversity in the objective space but are not guided by a formal notion of approximation. We present a framework for evolutionary multi-objective optimization that allows working with a formal notion of approximation. This approximation-guided evolutionary algorithm (AGE) has a worst-case runtime linear in the number of objectives and works with an archive that is an approximation of the non-dominated objective vectors seen during the run of the algorithm. Our experimental results show that AGE finds competitive or better solutions not only regarding the achieved approximation, but also regarding the total hypervolume. For all considered test problems, even for many (i.e., more than ten) dimensions, AGE discovers a good approximation of the Pareto front. This is not the case for established algorithms such as NSGA-II, SPEA2, and SMS-EMOA. In this paper we compare AGE with two additional algorithms that use very fast hypervolume approximations to guide their search. This significantly speeds up the runtime of the hypervolume-based algorithms, which now allows a comparison of the underlying selection schemes.

Karl Bringmann, Friedrich, Tobias, Klitzke, Patrick Efficient computation of two-dimensional solution sets maximizing the epsilon-indicator. Congress on Evolutionary Computation (CEC) 2015: 970-977
The majority of empirical comparisons of multi-objective evolutionary algorithms (MOEAs) are performed on synthetic benchmark functions. One of the advantages of synthetic test functions is the a priori knowledge of the optimal Pareto front. This allows measuring the proximity to the optimal front for the solution sets returned by the different MOEAs. Such a comparison is only meaningful if the cardinality of all solution sets is bounded by some fixed \(k\). In order to compare MOEAs to the theoretical optimum achievable with \(k\) solutions, we determine the best possible \(\epsilon\)-indicator values achievable with solution sets of size \(k\), up to an error of \(\delta\). We present a new algorithm with runtime \(O(k \cdot \log^2(\delta^{-1}))\), which is an exponential improvement regarding the dependence on the error \(\delta\) compared to all previous work. We show the mathematical correctness of our algorithm and determine optimal solution sets of cardinality \(k \in \{2, 3, 4, 5, 10, 20, 50, 100, 1000\}\) for the well-known test suites DTLZ, ZDT, WFG, and LZ09 up to error \(\delta = 10^{-25}\).

Ralf Rothenberger, Grau, Sascha, Rossberg, Michael Dominating an s-t-Cut in a Network. Conference on Current Trends in Theory and Practice of Informatics (SOFSEM) 2015: 401-411
We study an optimization problem with applications in the design and analysis of resilient communication networks: given two vertices \(s, t\) in a graph \(G = (V,E)\), find a vertex set \(X \subset V\) of minimum cardinality such that \(X\) and its neighborhood constitute an \(s\)-\(t\) vertex separator. Although the problem naturally combines notions of graph connectivity and domination, its computational properties significantly differ from those of these relatives. In particular, we show that on general graphs the problem cannot be approximated to within a factor of \(2^{\log^{1-\delta} n}\), with \(\delta = 1/\log\log^c n\) and arbitrary \(c < 1/2\) (if P \(\neq\) NP). This inapproximability result even applies if the subgraph induced by a solution set has the additional constraint of being connected. Furthermore, we give a \(2\sqrt{n}\)-approximation algorithm and study the problem on graphs with bounded node degree. With \(\Delta\) being the maximum degree of the nodes in \(V \setminus \{s,t\}\), we identify a \((\Delta + 1)\)-approximation algorithm.

Thomas Bläsius, Rutter, Ignaz Disconnectivity and relative positions in simultaneous embeddings. Computational Geometry 2015: 459-478
For two planar graphs \(G^1 = (V^1, E^1)\) and \(G^2 = (V^2, E^2)\) sharing a common subgraph \(G = G^1 \cap G^2\), the problem Simultaneous Embedding with Fixed Edges (SEFE) asks whether they admit planar drawings such that the common graph is drawn the same. Previous algorithms only work for cases where \(G\) is connected, and hence do not need to handle relative positions of connected components. We consider the problem where \(G\), \(G^1\), and \(G^2\) are not necessarily connected. First, we show that a general instance of SEFE can be reduced in linear time to an equivalent instance where \(V^1 = V^2\) and \(G^1\) and \(G^2\) are connected. Second, for the case where \(G\) consists of disjoint cycles, we introduce the CC-tree, which represents all embeddings of \(G\) that extend to planar embeddings of \(G^1\). We show that CC-trees can be computed in linear time and that their intersection is again a CC-tree. This yields a linear-time algorithm for SEFE if all \(k\) input graphs (possibly \(k > 2\)) pairwise share the same set of disjoint cycles. These results, including the CC-tree, extend to the case where \(G\) consists of arbitrary connected components, each with a fixed planar embedding on the sphere. In that case, the running time is \(O(n^2)\).

Andreas Göbel, Goldberg, Leslie Ann, McQuillan, Colin, Richerby, David, Yamakami, Tomoyuki Counting List Matrix Partitions of Graphs. SIAM Journal on Computing 2015: 1089-1118
Given a symmetric \(D \times D\) matrix \(M\) over \(\{0, 1, *\}\), a list \(M\)-partition of a graph \(G\) is a partition of the vertices of \(G\) into \(D\) parts which are associated with the rows of \(M\). The part of each vertex is chosen from a given list in such a way that no edge of \(G\) is mapped to a 0 in \(M\) and no non-edge of \(G\) is mapped to a 1 in \(M\). Many important graph-theoretic structures can be represented as list \(M\)-partitions, including graph colorings, split graphs, and homogeneous sets and pairs, which arise in the proofs of the weak and strong perfect graph conjectures. Thus, there has been quite a bit of work on determining for which matrices \(M\) computations involving list \(M\)-partitions are tractable. This paper focuses on the problem of counting list \(M\)-partitions, given a graph \(G\) and a list for each vertex of \(G\). We identify a certain set of "tractable" matrices \(M\). We give an algorithm that counts list \(M\)-partitions in polynomial time for every (fixed) matrix \(M\) in this set. The algorithm relies on data structures such as sparse-dense partitions and subcube decompositions to reduce each problem instance to a sequence of problem instances in which the lists have a certain useful structure that restricts access to portions of \(M\) in which the interactions of 0s and 1s are controlled. We show how to solve the resulting restricted instances by converting them into particular counting constraint satisfaction problems (\(\#\)CSPs), which we show how to solve using a constraint satisfaction technique known as arc-consistency. For every matrix \(M\) for which our algorithm fails, we show that the problem of counting list \(M\)-partitions is \(\#P\)-complete. Furthermore, we give an explicit characterization of the dichotomy theorem: counting list \(M\)-partitions is tractable (in \(FP\)) if the matrix \(M\) has a structure called a derectangularizing sequence.
If \(M\) has no derectangularizing sequence, we show that counting list \(M\)-partitions is \(\#P\)-hard. We show that the meta-problem of determining whether a given matrix has a derectangularizing sequence is NP-complete. Finally, we show that list \(M\)-partitions can be used to encode cardinality restrictions in \(M\)-partition problems, and we use this to give a polynomial-time algorithm for counting homogeneous pairs in graphs.

Andreas Göbel, Goldberg, Leslie Ann, Richerby, David Counting Homomorphisms to Square-Free Graphs, Modulo 2. International Colloquium on Automata, Languages, and Programming (ICALP) 2015: 642-653
We study the problem \(\oplus\)HomsToH of counting, modulo 2, the homomorphisms from an input graph to a fixed undirected graph \(H\). A characteristic feature of modular counting is that cancellations make wider classes of instances tractable than is the case for exact (non-modular) counting; thus, subtle dichotomy theorems can arise. We show the following dichotomy: for any \(H\) that contains no 4-cycles, \(\oplus\)HomsToH is either in polynomial time or is \(\oplus P\)-complete. This partially confirms a conjecture of Faben and Jerrum that was previously only known to hold for trees and for a restricted class of treewidth-2 graphs called cactus graphs. We confirm the conjecture for a rich class of graphs, including graphs of unbounded treewidth. In particular, we focus on square-free graphs, which are graphs without 4-cycles. These graphs arise frequently in combinatorics, for example, in connection with the strong perfect graph theorem and in certain graph algorithms. Previous dichotomy theorems required the graph to be tree-like so that tree-like decompositions could be exploited in the proof. We prove the conjecture for a much richer class of graphs by adopting a much more general approach.

Tobias Friedrich, Krohmer, Anton Cliques in hyperbolic random graphs. International Conference on Computer Communications (INFOCOM) 2015: 1544-1552
Most complex real-world networks display scale-free features. This motivated the study of numerous random graph models with a power-law degree distribution. There is, however, no established and simple model which also has the high clustering of vertices typically observed in real data. Hyperbolic random graphs bridge this gap. This natural model has recently been introduced by Papadopoulos, Krioukov, Boguna, and Vahdat (INFOCOM, pp. 2973-2981, 2010) and has been shown theoretically and empirically to fulfill all typical properties of real-world networks, including a power-law degree distribution and high clustering. We study cliques in hyperbolic random graphs \(G\) and present new results on the expected number of \(k\)-cliques \(E[K_k]\) and the size of the largest clique \(\omega(G)\). We observe that there is a phase transition at power-law exponent \(\gamma = 3\). More precisely, for \(\gamma \in (2,3)\) we prove \(E[K_k] = n^{k(3-\gamma)/2} \Theta(k)^{-k}\) and \(\omega(G) = \Theta(n^{(3-\gamma)/2})\), while for \(\gamma \ge 3\) we prove \(E[K_k] = n\, \Theta(k)^{-k}\) and \(\omega(G) = \Theta(\log(n)/\log \log n)\). We empirically compare the \(\omega(G)\) value of several scale-free random graph models with real-world networks. Our experiments show that the \(\omega(G)\) predictions by hyperbolic random graphs are much closer to the data than those of other scale-free random graph models.

Timo Kötzing, Lissovoi, Andrei, Witt, Carsten (1+1) EA on Generalized Dynamic OneMax. Foundations of Genetic Algorithms (FOGA) 2015: 40-51
Evolutionary algorithms (EAs) perform well in settings involving uncertainty, including settings with stochastic or dynamic fitness functions. In this paper, we analyze the (1+1) EA on dynamically changing OneMax, as introduced by Droste (2003). We reprove the known results on first hitting times using the modern tool of drift analysis. We extend these results to search spaces which allow for more than two values per dimension. Furthermore, we perform an anytime analysis as suggested by Jansen and Zarges (2014), analyzing how closely the (1+1) EA can track the dynamically moving optimum over time. We obtain tight bounds both for the case of bit strings and for the case of more than two values per position. Surprisingly, in the latter setting, the expected quality of the search point maintained by the (1+1) EA does not depend on the number of values per dimension.
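As a baseline for the dynamic setting, the static (1+1) EA on OneMax can be stated in a few lines. The sketch below is our own minimal illustration (problem size, iteration cap, and seed are arbitrary choices, not from the paper):

```python
import random

def one_plus_one_ea_onemax(n, max_iters=200000, seed=1):
    """Minimal (1+1) EA on static OneMax: keep a single parent, flip
    each bit independently with probability 1/n, and accept the
    offspring if its fitness is at least as good. Returns the number
    of iterations until the all-ones optimum is found (expected
    optimization time is O(n log n)), or None on timeout."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)                      # OneMax fitness = number of ones
    for t in range(1, max_iters + 1):
        y = [bit ^ (rng.random() < 1 / n) for bit in x]
        fy = sum(y)
        if fy >= fx:                 # elitist selection, ties accepted
            x, fx = y, fy
        if fx == n:
            return t
    return None

iters = one_plus_one_ea_onemax(50)
```

The dynamic variant analyzed in the paper additionally moves the target string over time; only the fitness function changes, not the algorithm.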

Benjamin Doerr, Doerr, Carola, Kötzing, Timo Unbiased black-box complexities of jump functions: how to cross large plateaus. Genetic and Evolutionary Computation Conference (GECCO) 2014: 769-776
We analyze the unbiased black-box complexity of jump functions with large jump sizes. Among other results, we show that when the jump size is \((1/2 - \epsilon)n\), that is, when only a small constant fraction of the fitness values is visible, the unbiased black-box complexities for arities 3 and higher are of the same order as those for the simple OneMax function. Even for the extreme jump function, in which all but the two fitness values \(n/2\) and \(n\) are blanked out, polynomial-time mutation-based (i.e., unary unbiased) black-box optimization algorithms exist. This is quite surprising given that for the extreme jump function almost the whole search space (all but a \(\Theta(n^{-1/2})\) fraction) is a plateau of constant fitness. To prove these results, we introduce new tools for the analysis of unbiased black-box complexities, for example, selecting the new parent individual not only by comparing the fitnesses of the competing search points, but also by taking into account the (empirical) expected fitnesses of their offspring.

Karl Bringmann, Friedrich, Tobias, Klitzke, Patrick Two-dimensional subset selection for hypervolume and epsilon-indicator. Genetic and Evolutionary Computation Conference (GECCO) 2014: 589-596
The goal of bi-objective optimization is to find a small set of good compromise solutions. A common problem for bi-objective evolutionary algorithms is the following subset selection problem (SSP): given \(n\) solutions \(P \subset \mathbb{R}^2\) in the objective space, select \(k\) solutions \(P^*\) from \(P\) that optimize an indicator function. In the hypervolume SSP we want to select \(k\) points \(P^*\) that maximize the hypervolume indicator \(I_{HYP}(P^*, r)\) for some reference point \(r \in \mathbb{R}^2\). Similarly, the \(\epsilon\)-indicator SSP aims at selecting \(k\) points \(P^*\) that minimize the \(\epsilon\)-indicator \(I_\epsilon(P^*, R)\) for some reference set \(R \subset \mathbb{R}^2\) of size \(m\) (which can be \(R = P\)). We first present a new algorithm for the hypervolume SSP with runtime \(O(n(k + \log n))\). Our second main result is a new algorithm for the \(\epsilon\)-indicator SSP with runtime \(O(n \log n + m \log m)\). Both results improve the current state-of-the-art runtimes by a factor of (nearly) \(n\) and make the problems tractable for new applications. Preliminary experiments confirm that the theoretical results translate into substantial empirical runtime improvements.
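The hypervolume indicator being maximized here is easy to compute in two dimensions, which makes the objective of the SSP concrete. The sketch below is our own illustration (it assumes minimization in both objectives and points that dominate the reference point; it computes the indicator, not the paper's selection algorithm):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of 2-D points w.r.t. a reference point,
    assuming minimization in both objectives and every point dominating
    ref. Sweeping the points sorted by the first objective and summing
    rectangular slices makes the whole computation O(n log n)."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(set(points)):
        if y < prev_y:                     # non-dominated so far
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# (3, 3) is dominated by (2, 2) and contributes no area
hv = hypervolume_2d([(1, 3), (2, 2), (3, 1), (3, 3)], ref=(4, 4))
```

For the example, the three front points contribute slices of area 3, 2, and 1, so the indicator value is 6.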

Benjamin Doerr, Doerr, Carola, Kötzing, Timo The unbiased black-box complexity of partition is polynomial. Artificial Intelligence 2014: 275-286
Unbiased black-box complexity was introduced as a refined complexity model for randomized search heuristics (Lehre and Witt (2012) [24]). For several problems, this notion avoids the unrealistically low complexity results given by the classical model of Droste et al. (2006) [10]. We show that for some problems the unbiased black-box complexity remains artificially small. More precisely, for two different formulations of an NP-hard subclass of the well-known Partition problem, we give mutation-only unbiased black-box algorithms having complexity \(O(n \log n)\). This indicates that the unary unbiased black-box complexity also does not give a complete picture of the true difficulty of this problem for randomized search heuristics.

Timo Kötzing, Sutton, Andrew M., Neumann, Frank, O'Reilly, UnaMay The Max problem revisited: The importance of mutation in genetic programming. Theoretical Computer Science 2014: 94-107
This paper contributes to the rigorous understanding of genetic programming algorithms by providing runtime complexity analyses of the well-studied Max problem. Several experimental studies have indicated that it is hard to solve the Max problem with crossover-based algorithms. Our analyses show that different variants of the Max problem can provably be solved using simple mutation-based genetic programming algorithms. Our results advance the body of computational complexity analyses of genetic programming, indicate the importance of mutation in genetic programming, and reveal new insights into the behavior of mutation-based genetic programming algorithms.

Darrell Whitley, Sutton, Andrew M., Ochoa, Gabriela, Chicano, Francisco The component model for elementary landscapes and partial neighborhoods. Theoretical Computer Science 2014: 59-75
Local search algorithms exploit moves on an adjacency graph of the search space. An "elementary landscape" exists if the objective function \(f\) is an eigenfunction of the Laplacian of the graph induced by the neighborhood operator; this allows various statistics about the neighborhood to be computed in closed form. A new component-based model makes it relatively simple to prove that certain types of landscapes are elementary. The traveling salesperson problem, weighted graph (vertex) coloring, and the minimum graph bisection problem yield elementary landscapes under commonly used local search operators. The component model is then used to efficiently compute the mean objective function value over partial neighborhoods for these same problems. For a traveling salesperson problem over \(n\) cities, the 2-opt neighborhood can be decomposed into \(\lfloor n/2 - 1 \rfloor\) partial neighborhoods. For graph coloring and the minimum graph bisection problem, partial neighborhoods can be used to focus search on those moves that are capable of producing a solution with a strictly improving objective function value.

Andreas Göbel, Goldberg, Leslie Ann, Richerby, David The complexity of counting homomorphisms to cactus graphs modulo 2. Transactions on Computation Theory 2014: 17:1-17:29
A homomorphism from a graph \(G\) to a graph \(H\) is a function from \(V(G)\) to \(V(H)\) that preserves edges. Many combinatorial structures that arise in mathematics and in computer science can be represented naturally as graph homomorphisms and as weighted sums of graph homomorphisms. In this article, we study the complexity of counting homomorphisms modulo 2. The complexity of modular counting was introduced by Papadimitriou and Zachos and it has been pioneered by Valiant who famously introduced a problem for which counting modulo 7 is easy but counting modulo 2 is intractable. Modular counting provides a rich setting in which to study the structure of homomorphism problems. In this case, the structure of the graph \(H\) has a big influence on the complexity of the problem. Thus, our approach is graphtheoretic. We give a complete solution for the class of cactus graphs, which are connected graphs in which every edge belongs to at most one cycle. Cactus graphs arise in many applications such as the modelling of wireless sensor networks and the comparison of genomes. We show that, for some cactus graphs \(H\), counting homomorphisms to \(H\) modulo 2 can be done in polynomial time. For every other fixed cactus graph \(H\), the problem is complete in the complexity class \(\oplus P\), which is a wide complexity class to which every problem in the polynomial hierarchy can be reduced (using randomised reductions). Determining which \(H\) lead to tractable problems can be done in polynomial time. Our result builds upon the work of Faben and Jerrum, who gave a dichotomy for the case in which \(H\) is a tree.
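The counted quantity can be made concrete with a brute-force check. The sketch below (our own illustration, exponential in \(|V(G)|\) and unrelated to the dichotomy's polynomial-time algorithms) computes the parity of the number of homomorphisms:

```python
from itertools import product

def count_homs_mod2(G_edges, G_verts, H_edges, H_verts):
    """Parity of the number of homomorphisms from G to H: functions
    f: V(G) -> V(H) sending every edge of G to an edge of H. Tries
    all |V(H)|^|V(G)| assignments, so it only illustrates the counted
    quantity on tiny instances."""
    H_adj = {(u, v) for u, v in H_edges} | {(v, u) for u, v in H_edges}
    count = 0
    for f in product(H_verts, repeat=len(G_verts)):
        assign = dict(zip(G_verts, f))
        if all((assign[u], assign[v]) in H_adj for u, v in G_edges):
            count += 1
    return count % 2

triangle = [(0, 1), (1, 2), (0, 2)]
# the homomorphisms K3 -> K3 are exactly the 6 bijections, parity 0
parity_triangle = count_homs_mod2(triangle, [0, 1, 2], triangle, [0, 1, 2])
# a single isolated vertex has 3 homomorphisms into K3, parity 1
parity_vertex = count_homs_mod2([], [0], triangle, [0, 1, 2])
```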

Patrizio Angelini, Bläsius, Thomas, Rutter, Ignaz Testing Mutual Duality of Planar Graphs. Computational Geometry and Applications 2014: 325-346
We introduce and study the problem Mutual Planar Duality, which asks for planar graphs \(G_1\) and \(G_2\) whether \(G_1\) can be embedded such that its dual is isomorphic to \(G_2\). We show NP-completeness for general graphs and give a linear-time algorithm for biconnected graphs. We consider the common dual relation ~, where \(G_1 \sim G_2\) if and only if they admit embeddings that result in the same dual graph. We show that ~ is an equivalence relation on the set of biconnected graphs and devise a succinct, SPQR-tree-like representation of its equivalence classes. To solve Mutual Planar Duality for biconnected graphs, we show how to do isomorphism testing for two such representations in linear time. A special case of Mutual Planar Duality is testing whether a graph is self-dual. Our algorithm can handle the case of biconnected graphs in linear time, and our NP-hardness proof extends to self-duality and also to map self-duality testing (which additionally requires preserving the embedding).

Andrew M. Sutton, Neumann, Frank Runtime Analysis of Evolutionary Algorithms on Randomly Constructed High-Density Satisfiable 3-CNF Formulas. Parallel Problem Solving from Nature (PPSN) 2014: 942–951
We show that simple mutation-only evolutionary algorithms find a satisfying assignment on two similar models of random planted 3-CNF Boolean formulas in polynomial time with high probability in the high constraint density regime. We extend the analysis to random formulas conditioned on satisfiability (i.e., the so-called filtered distribution) and conclude that most high-density satisfiable formulas are easy for simple evolutionary algorithms. With this paper, we contribute the first rigorous study of randomized search heuristics from the evolutionary computation community on well-studied distributions of random satisfiability problems.

Christian Gießen, Kötzing, Timo Robustness of populations in stochastic environments. Genetic and Evolutionary Computation Conference (GECCO) 2014: 1383–1390
We consider stochastic versions of OneMax and LeadingOnes and analyze the performance of evolutionary algorithms with and without populations on these problems. It is known that the (1+1) EA on OneMax performs well in the presence of very small noise, but poorly for higher noise levels. We extend these results to LeadingOnes and to many different noise models, showing how the application of drift theory can significantly simplify and generalize previous analyses. Most surprisingly, even small populations (of size \(\Theta(\log n)\)) can make evolutionary algorithms perform well for high noise levels, well outside the abilities of the (1+1) EA. Larger population sizes are even more beneficial; we consider both parent and offspring populations. In this sense, populations are robust in these stochastic settings.
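The (1+1) EA and the prior-noise OneMax model discussed above are simple enough to sketch. The following Python code is a hypothetical illustration, not the paper's implementation; the noise model (flip one uniformly random bit before evaluation, with probability \(p\)) and all function names are assumptions made for this sketch:

```python
import random

def one_max(x):
    """Number of one-bits; the optimum of OneMax is the all-ones string."""
    return sum(x)

def noisy_one_max(x, p, rng):
    """Prior noise: with probability p, evaluate a copy with one random bit flipped."""
    if rng.random() < p:
        y = list(x)
        i = rng.randrange(len(y))
        y[i] = 1 - y[i]
        return one_max(y)
    return one_max(x)

def one_plus_one_ea(n, p, max_iters, seed=0):
    """Elitist (1+1) EA with standard bit mutation (flip each bit w.p. 1/n).
    Both parent and offspring are re-evaluated under noise in each step."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        if noisy_one_max(y, p, rng) >= noisy_one_max(x, p, rng):
            x = y
        if one_max(x) == n:  # stopping check uses the noiseless value, for illustration
            break
    return x
```

The paper's point is that this single-individual scheme degrades as \(p\) grows, whereas population variants (parent or offspring populations of size \(\Theta(\log n)\)) remain robust.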

Benjamin Doerr, Friedrich, Tobias, Sauerwald, Thomas Quasirandom Rumor Spreading. Transactions on Algorithms 2014: 9:1–9:35
We propose and analyze a quasirandom analogue of the classical push model for disseminating information in networks ("randomized rumor spreading"). In the classical model, in each round, each informed vertex chooses a neighbor at random and informs it, if it was not informed before. It is known that this simple protocol succeeds in spreading a rumor from one vertex to all others within \(O(\log n)\) rounds on complete graphs, hypercubes, random regular graphs, Erdős–Rényi random graphs, and Ramanujan graphs with probability \(1 - o(1)\). In the quasirandom model, we assume that each vertex has a (cyclic) list of its neighbors. Once informed, it starts at a random position on the list, but from then on informs its neighbors in the order of the list. Surprisingly, irrespective of the orders of the lists, the above-mentioned bounds still hold. In some cases, even better bounds than for the classical model can be shown.
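The quasirandom push protocol described above can be sketched in a few lines of Python; this is an illustrative simulation on the complete graph \(K_n\) (the function name and the round-synchronous loop structure are assumptions of this sketch, not code from the paper):

```python
import random

def quasirandom_push(n, seed=0):
    """Quasirandom push protocol on the complete graph K_n. Vertex 0 knows the
    rumor initially; each informed vertex starts at a random position in its
    cyclic neighbor list and then informs neighbors in list order, one per
    round. Returns the number of rounds until all n vertices are informed."""
    rng = random.Random(seed)
    neighbors = {v: [u for u in range(n) if u != v] for v in range(n)}
    informed = {0}
    pointer = {0: rng.randrange(n - 1)}  # next list position per informed vertex
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for v in informed:
            u = neighbors[v][pointer[v]]
            pointer[v] = (pointer[v] + 1) % (n - 1)
            if u not in informed:
                newly.add(u)
        for u in newly:  # vertices informed this round start pushing next round
            informed.add(u)
            pointer[u] = rng.randrange(n - 1)
    return rounds
```

Two guaranteed invariants of the model are visible here: the informed set can at most double per round (so at least \(\log_2 n\) rounds are needed), and on \(K_n\) vertex 0 alone contacts all its \(n - 1\) distinct neighbors within \(n - 1\) rounds.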

Andrew M. Sutton, Neumann, Frank, Nallaperuma, Samadhi Parameterized Runtime Analyses of Evolutionary Algorithms for the Planar Euclidean Traveling Salesperson Problem. Evolutionary Computation 2014: 595–628
Parameterized runtime analysis seeks to understand the influence of problem structure on algorithmic runtime. In this paper, we contribute to the theoretical understanding of evolutionary algorithms and carry out a parameterized analysis of evolutionary algorithms for the Euclidean traveling salesperson problem (Euclidean TSP). We investigate the structural properties in TSP instances that influence the optimization process of evolutionary algorithms and use this information to bound their runtime. We analyze the runtime in dependence of the number of inner points \(k\). In the first part of the paper, we study a \((\mu + \lambda)\) EA in a strictly black box setting and show that it can solve the Euclidean TSP in expected time \(O(n \cdot A(\epsilon) \cdot \max\{(\mu / \lambda) \cdot n^2, 1\} + (\mu / \lambda) \cdot n^{4k}(2k - 1)!)\) where \(A\) is a function of the minimum angle \(\epsilon\) between any three points. Based on insights provided by the analysis, we improve this upper bound by introducing a mixed mutation strategy that incorporates both 2-opt moves and permutation jumps. This strategy improves the upper bound to \(O(n \cdot A(\epsilon) \cdot \max\{(\mu / \lambda) \cdot n^2, 1\} + (\mu / \lambda) \cdot n^{2k}(k - 1)!)\). In the second part of the paper, we use the information gained in the analysis to incorporate domain knowledge to design two fixed-parameter tractable (FPT) evolutionary algorithms for the planar Euclidean TSP. We first develop a \((\mu + \lambda)\) EA based on an analysis by M. Theile, 2009, "Exact solutions to the traveling salesperson problem by a population-based evolutionary algorithm," Lecture Notes in Computer Science, Vol. 5482 (pp. 145–155), that solves the TSP with \(k\) inner points in \(O(\max\{2^k k^2 n^2 \lambda^{-1}, n\})\) generations with probability \(1 - e^{-\Omega(n)}\). We then design a \((\mu + \lambda)\) EA that incorporates a dynamic programming step into the fitness evaluation.
We prove that a variant of this evolutionary algorithm using 2-opt mutation solves the problem after \(O(\max\{(k - 2)! \, k^{2k - 2} \lambda^{-1}, 1\})\) steps in expectation with a cost of \(O(nk)\) for each fitness evaluation.

Thomas Bläsius, Krug, Marcus, Rutter, Ignaz, Wagner, Dorothea Orthogonal Graph Drawing with Flexibility Constraints. Algorithmica 2014: 859–885
Traditionally, the quality of orthogonal planar drawings is quantified by either the total number of bends or the maximum number of bends per edge. However, this neglects that in typical applications, edges have varying importance. In this work, we investigate an approach that allows specifying the maximum number of bends for each edge individually, depending on its importance. We consider a new problem called FlexDraw that is defined as follows. Given a planar graph \(G = (V, E)\) on \(n\) vertices with maximum degree 4 and a function \(\mathrm{flex} \colon E \rightarrow \mathbb{N}_0\) that assigns a flexibility to each edge, does \(G\) admit a planar embedding on the grid such that each edge \(e\) has at most \(\mathrm{flex}(e)\) bends? Note that in our setting the combinatorial embedding of \(G\) is not fixed. FlexDraw directly extends the problem \(\beta\)-embeddability asking whether \(G\) can be embedded with at most \(\beta\) bends per edge. We give an algorithm with running time \(O(n^2)\) solving FlexDraw when the flexibility of each edge is positive. This includes 1-embeddability as a special case and thus closes the complexity gap between 0-embeddability, which is NP-hard to decide, and 2-embeddability, which is efficiently solvable since every planar graph with maximum degree 4 admits a 2-embedding except for the octahedron. In addition to the polynomial-time algorithm we show that FlexDraw is NP-hard even if the edges with flexibility 0 induce a tree or a union of disjoint stars.

Sanjay Jain, Kötzing, Timo, Ma, Junqi, Stephan, Frank On the Role of Update Constraints and TextTypes in Iterative Learning. Algorithmic Learning Theory (ALT) 2014: 55–69
The present work investigates the relationship of iterative learning with other learning criteria such as decisiveness, caution, reliability, non-U-shapedness, monotonicity, strong monotonicity and conservativeness. Building on the result of Case and Moelius that iterative learners can be made non-U-shaped, we show that they also can be made cautious and decisive. Furthermore, we obtain various special results with respect to one-one texts, fat texts and one-one hypothesis spaces.

Pascal Lenzner On Selfish Network Creation. 2014

Tobias Friedrich, Neumann, Frank Maximizing Submodular Functions under Matroid Constraints by Multiobjective Evolutionary Algorithms. Parallel Problem Solving from Nature (PPSN) 2014: 922–931
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function \(f\) under a given set of constraints. In this paper, we investigate the runtime of a multiobjective evolutionary algorithm called GSEMO until it has obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints we show that GSEMO achieves a \((1 - 1/e)\)-approximation in expected time \(O(n^2(\log n + k))\), where \(k\) is the value of the given constraint. For the case of non-monotone submodular functions with \(k\) matroid intersection constraints, we show that GSEMO achieves a \((1/(k + 2 + 1/k + \epsilon))\)-approximation in expected time \(O(n^{k+5}\log(n)/\epsilon)\).
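The GSEMO algorithm analyzed above is simple to state in code. The following Python sketch applies it to maximum coverage, a standard monotone submodular function, under a cardinality constraint; the function names, the objective encoding (coverage versus negative size, with infeasible solutions penalized), and the instance are illustrative assumptions of this sketch, not taken from the paper:

```python
import random

def gsemo_max_coverage(sets, k, iters, seed=0):
    """GSEMO sketch for maximum coverage under |x| <= k. Solutions are bit
    strings selecting subsets of the ground collection; the archive keeps
    all mutually non-dominated (coverage, -size) trade-offs."""
    rng = random.Random(seed)
    n = len(sets)

    def coverage(x):
        covered = set()
        for i, bit in enumerate(x):
            if bit:
                covered |= sets[i]
        return len(covered)

    def objectives(x):
        size = sum(x)
        return ((coverage(x) if size <= k else -1), -size)  # penalize infeasible

    def dominates(a, b):
        return a[0] >= b[0] and a[1] >= b[1] and a != b

    empty = (0,) * n
    pop = {empty: objectives(empty)}  # archive of non-dominated solutions
    for _ in range(iters):
        x = rng.choice(list(pop))
        y = tuple(1 - b if rng.random() < 1.0 / n else b for b in x)
        oy = objectives(y)
        if any(dominates(o, oy) or o == oy for o in pop.values()):
            continue  # y is dominated by (or duplicates) an archived solution
        pop = {z: o for z, o in pop.items() if not dominates(oy, o)}
        pop[y] = oy
    return max((o[0], z) for z, o in pop.items())
```

The archive-based dominance handling is the mechanism the runtime bounds above rest on: the archive always contains the empty set, so feasible trade-offs of every size up to \(k\) accumulate over time.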

Timo Kötzing Iterative learning from positive data and counters. Theoretical Computer Science 2014: 155–169
We analyze iterative learning in the limit from positive data with the additional information provided by a counter. The simplest type of counter provides the current iteration number (counting up from 0 to infinity), which is known to improve learning power over plain iterative learning. We introduce five other (weaker) counter types, for example only providing some unbounded and nondecreasing sequence of numbers. Analyzing these types allows for understanding what properties of a counter can benefit learning. For the iterative setting, we completely characterize the relative power of the learning criteria corresponding to the counter types. In particular, for our types, the only properties improving learning power are unboundedness and strict monotonicity. Furthermore, we show that each of our types of counter improves learning power over weaker ones in some settings, and that, for iterative learning criteria with one of these types of counter, separations of learning criteria are necessarily witnessed by classes containing only infinite languages.

Tobias Friedrich, Rowe, Jonathan E. Genetic and Evolutionary Computation. Theoretical Computer Science 2014: 1

Karl Bringmann, Friedrich, Tobias, Klitzke, Patrick Generic Postprocessing via Subset Selection for Hypervolume and Epsilon-Indicator. Parallel Problem Solving from Nature (PPSN) 2014: 518–527
Most bi-objective evolutionary algorithms maintain a population of fixed size \(\mu\) and return the final population at termination. During the optimization process many solutions are considered, but most are discarded. We present two generic postprocessing algorithms which utilize the archive of all nondominated solutions evaluated during the search. We choose the best \(\mu\) solutions from the archive such that the hypervolume or \(\epsilon\)-indicator is maximized. This postprocessing costs no additional fitness function evaluations and has negligible runtime compared to most EMOAs. We experimentally examine our postprocessing for four standard algorithms (NSGA-II, SPEA2, SMS-EMOA, IBEA) on ten standard test functions (DTLZ 1–2, 7, ZDT 1–3, WFG 3–6) and measure the average quality improvement. The median decrease of the distance to the optimal \(\epsilon\)-indicator is \(95\%\), the median decrease of the distance to the optimal hypervolume value is \(86\%\). We observe similar performance on a real-world problem (wind turbine placement).
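To make the hypervolume-based subset selection step concrete, here is a minimal Python sketch for two objectives (both maximized, reference point at the origin). Note the assumptions: the function names are invented for this sketch, and it uses a *greedy* selection rather than the paper's optimal subset selection; greedy still gives a \((1 - 1/e)\)-approximation because the hypervolume indicator is a monotone submodular set function:

```python
def hv2d(points, ref=(0.0, 0.0)):
    """Dominated area of a set of 2-D points (maximization) w.r.t. a reference
    point: sweep points by decreasing x, summing the new horizontal strip each
    point contributes above the running maximum y."""
    area = 0.0
    max_y = ref[1]
    for x, y in sorted(points, key=lambda p: p[0], reverse=True):
        if y > max_y:
            area += (x - ref[0]) * (y - max_y)
            max_y = y
    return area

def greedy_hv_subset(archive, mu):
    """Pick mu archive points one by one, each maximizing the hypervolume gain."""
    chosen = []
    for _ in range(mu):
        remaining = [p for p in archive if p not in chosen]
        if not remaining:
            break
        chosen.append(max(remaining, key=lambda p: hv2d(chosen + [p])))
    return chosen
```

For example, from the archive \(\{(3,1), (1,3), (2,2), (0.5,0.5)\}\) with \(\mu = 2\), greedy first picks \((2,2)\) (largest single-point area) and then a corner point, reaching the optimal pair hypervolume of 5.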

Francisco Chicano, Whitley, Darrell, Sutton, Andrew M. Efficient identification of improving moves in a ball for pseudo-Boolean problems. Genetic and Evolutionary Computation Conference (GECCO) 2014: 437–444
Hill climbing algorithms are at the core of many approaches to solve optimization problems. Such algorithms usually require the complete enumeration of a neighborhood of the current solution. In the case of problems defined over binary strings of length \(n\), we define the \(r\)-ball neighborhood as the set of solutions at Hamming distance \(r\) or less from the current solution. For \(r \ll n\) this neighborhood contains \(\Theta(n^r)\) solutions. In this paper efficient methods are introduced to locate improving moves in the \(r\)-ball neighborhood for problems that can be written as a sum of a linear number of subfunctions depending on a bounded number of variables. NK-landscapes and MAX-\(k\)-SAT are examples of these problems. If the number of subfunctions depending on any given variable is also bounded, then we prove that the method can explore the neighborhood in constant time, despite the fact that the number of solutions in the neighborhood is polynomial in \(n\). We develop a hill climber based on our exploration method and we analyze its efficiency and efficacy using experiments with NKq-landscapes instances.
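The problem class and the \(r\)-ball neighborhood are easy to pin down in code. The following Python sketch is only the naive \(\Theta(n^r)\) baseline the paper improves on (the constant-time identification method itself is not reproduced here); the representation of subfunctions as (index mask, scoring function) pairs is an assumption of this sketch:

```python
from itertools import combinations

def r_ball_improving_move(x, subfns, r):
    """Naive scan of the r-ball (all Hamming moves of radius <= r) for the
    first improving move. subfns is a list of (mask, f) pairs: mask is a
    tuple of variable indices and f scores that sub-assignment; the total
    fitness is the sum over all subfunctions."""
    n = len(x)

    def fitness(y):
        return sum(f(tuple(y[i] for i in mask)) for mask, f in subfns)

    base = fitness(x)
    for k in range(1, r + 1):
        for flips in combinations(range(n), k):  # Theta(n^r) moves in total
            y = list(x)
            for i in flips:
                y[i] = 1 - y[i]
            if fitness(y) > base:
                return flips
        # no improving move of radius k; try larger radii
    return None
```

The paper's observation is that when each variable occurs in \(O(1)\) subfunctions, flipping a bit changes only \(O(1)\) of the summands, so the score changes of all candidate moves can be maintained incrementally instead of re-enumerated.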

Karl Bringmann, Friedrich, Tobias, Krohmer, Anton Deanonymization of Heterogeneous Random Graphs in Quasilinear Time. European Symposium on Algorithms (ESA) 2014: 197–208
There are hundreds of online social networks with billions of users in total. Many such networks publicly release structural information, with all personal information removed. Empirical studies have shown, however, that this provides a false sense of privacy: it is possible to identify almost all users that appear in two such anonymized networks as long as a few initial mappings are known. We analyze this problem theoretically by reconciling two versions of an artificial power-law network arising from independent subsampling of vertices and edges. We present a new algorithm that identifies most vertices and makes no wrong identifications with high probability. The number of vertices matched is shown to be asymptotically optimal. For an \(n\)-vertex graph, our algorithm uses \(n^\epsilon\) seed nodes (for an arbitrarily small \(\epsilon\)) and runs in quasilinear time. This improves previous theoretical results which need \(\Theta(n)\) seed nodes and have runtimes of order \(n^{1 + \Omega(1)}\). Additionally, the applicability of our algorithm is studied experimentally on different networks.

Andreas Göbel, Goldberg, Leslie Ann, McQuillan, Colin, Richerby, David, Yamakami, Tomoyuki Counting List Matrix Partitions of Graphs. Conference on Computational Complexity (CCC) 2014: 56–65
Given a symmetric \(D \times D\) matrix \(M\) over \(\{0, 1, *\}\), a list \(M\)-partition of a graph \(G\) is a partition of the vertices of \(G\) into \(D\) parts which are associated with the rows of \(M\). The part of each vertex is chosen from a given list in such a way that no edge of \(G\) is mapped to a 0 in \(M\) and no non-edge of \(G\) is mapped to a 1 in \(M\). Many important graph-theoretic structures can be represented as list \(M\)-partitions, including graph colourings, split graphs and homogeneous sets, which arise in the proofs of the weak and strong perfect graph conjectures. Thus, there has been quite a bit of work on determining for which matrices \(M\) computations involving list \(M\)-partitions are tractable. This paper focuses on the problem of counting list \(M\)-partitions, given a graph \(G\) and given lists for each vertex of \(G\). We give an algorithm that solves this problem in polynomial time for every (fixed) matrix \(M\) for which the problem is tractable. The algorithm relies on data structures such as sparse-dense partitions and subcube decompositions to reduce each problem instance to a sequence of problem instances in which the lists have a certain useful structure that restricts access to portions of \(M\) in which the interactions of 0s and 1s is controlled. We show how to solve the resulting restricted instances by converting them into particular counting constraint satisfaction problems (#CSPs) which we show how to solve using a constraint satisfaction technique known as "arc-consistency". For every matrix \(M\) for which our algorithm fails, we show that the problem of counting list \(M\)-partitions is #P-complete. Furthermore, we give an explicit characterisation of the dichotomy theorem: counting list \(M\)-partitions is tractable (in FP) if and only if the matrix \(M\) has a structure called a derectangularising sequence.
Finally, we show that the meta-problem of determining whether a given matrix has a derectangularising sequence is NP-complete.

Andreas Göbel, Goldberg, Leslie Ann, Richerby, David Counting Homomorphisms to Cactus Graphs Modulo 2. Symposium on Theoretical Aspects of Computer Science (STACS) 2014: 350–361
A homomorphism from a graph \(G\) to a graph \(H\) is a function from \(V(G)\) to \(V(H)\) that preserves edges. Many combinatorial structures that arise in mathematics and in computer science can be represented naturally as graph homomorphisms and as weighted sums of graph homomorphisms. In this article, we study the complexity of counting homomorphisms modulo 2. The complexity of modular counting was introduced by Papadimitriou and Zachos and it has been pioneered by Valiant who famously introduced a problem for which counting modulo 7 is easy but counting modulo 2 is intractable. Modular counting provides a rich setting in which to study the structure of homomorphism problems. In this case, the structure of the graph \(H\) has a big influence on the complexity of the problem. Thus, our approach is graph-theoretic. We give a complete solution for the class of cactus graphs, which are connected graphs in which every edge belongs to at most one cycle. Cactus graphs arise in many applications such as the modelling of wireless sensor networks and the comparison of genomes. We show that, for some cactus graphs \(H\), counting homomorphisms to \(H\) modulo 2 can be done in polynomial time. For every other fixed cactus graph \(H\), the problem is complete in the complexity class \(\oplus P\), which is a wide complexity class to which every problem in the polynomial hierarchy can be reduced (using randomised reductions). Determining which \(H\) lead to tractable problems can be done in polynomial time. Our result builds upon the work of Faben and Jerrum, who gave a dichotomy for the case in which \(H\) is a tree.
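The counting problem studied above has a one-line brute-force definition, which this Python sketch makes explicit; the graph encoding (vertex list plus edge set) and the function name are assumptions of this sketch, and the exponential enumeration is for illustration only, since the paper's dichotomy concerns the complexity for a fixed target graph \(H\):

```python
from itertools import product

def count_homs_mod2(G, H):
    """Count homomorphisms from G to H modulo 2 by brute force. A graph is a
    pair (vertices, edges); edges of G are vertex pairs, edges of H a set of
    frozensets. A map phi is a homomorphism iff every edge of G lands on an
    edge of H."""
    (VG, EG), (VH, EH) = G, H
    parity = 0
    for phi in product(VH, repeat=len(VG)):  # all |V(H)|^|V(G)| vertex maps
        m = dict(zip(VG, phi))
        if all(frozenset((m[u], m[v])) in EH for (u, v) in EG):
            parity ^= 1
    return parity
```

For instance, a single edge has \(2|E(H)|\) homomorphisms into \(H\), so its count is always even, while a single vertex has \(|V(H)|\) homomorphisms.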

Karl Bringmann, Friedrich, Tobias Convergence of Hypervolume-Based Archiving Algorithms. Transactions on Evolutionary Computation 2014: 643–657
Multiobjective evolutionary algorithms typically maintain a set of solutions. A crucial part of these algorithms is the archiving, which decides what solutions to keep. A \((\mu + \lambda)\) archiving algorithm defines how to choose, in each generation, \(\mu\) children from \(\mu\) parents and \(\lambda\) offspring together. We study mathematically the convergence behavior of hypervolume-based archiving algorithms. We distinguish two cases for the offspring generation. A best-case view leads to a study of the effectiveness of archiving algorithms. It was known that all \((\mu + 1)\)-archiving algorithms are ineffective, which means that a set with maximum hypervolume is not necessarily reached. We prove that for \(\lambda < \mu\), all archiving algorithms are ineffective. We also present upper and lower bounds for the achievable hypervolume for different classes of archiving algorithms. On the other hand, a worst-case view on the offspring generation leads to a study of the competitive ratio of archiving algorithms. This measures how much smaller hypervolumes are achieved due to not knowing the future offspring in advance. We present upper and lower bounds on the competitive ratio of different archiving algorithms and present an archiving algorithm which is the first known computationally efficient archiving algorithm with constant competitive ratio.

Timo Kötzing Concentration of first hitting times under additive drift. Genetic and Evolutionary Computation Conference (GECCO) 2014: 1391–1398
Recent advances in drift analysis have given us better and better tools for understanding random processes, including the run time of randomized search heuristics. In the setting of multiplicative drift we do not only have excellent bounds on the expected run time, but also more general results showing the concentration of the run time. In this paper we investigate the setting of additive drift under the assumption of strong concentration of the "step size" of the process. Under sufficiently strong drift towards the goal we show a strong concentration of the hitting time. In contrast to this, we show that in the presence of small drift a Gambler's-Ruin-like behavior of the process overrides the influence of the drift. Finally, in the presence of sufficiently strong negative drift the hitting time is superpolynomial with high probability; this corresponds to the so-called negative drift theorem, for which we give new variants.
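For context, the classical additive drift theorem that these concentration results refine can be stated as follows (a standard textbook formulation, not quoted from the paper): let \((X_t)_{t \ge 0}\) be a non-negative stochastic process and \(T = \min\{t : X_t = 0\}\) its first hitting time of 0. If there is a \(\delta > 0\) such that for all \(t\),
\[
E[X_t - X_{t+1} \mid X_t > 0] \ge \delta,
\]
then
\[
E[T] \le \frac{E[X_0]}{\delta}.
\]
The paper's contribution concerns when \(T\) is additionally concentrated around this expectation, and what happens when the drift \(\delta\) is small or negative.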

Thomas Bläsius, Brückner, Guido, Rutter, Ignaz Complexity of Higher-Degree Orthogonal Graph Embedding in the Kandinsky Model. European Symposium on Algorithms (ESA) 2014: 161–172
We show that finding orthogonal grid embeddings of plane graphs (planar with fixed combinatorial embedding) with the minimum number of bends in the so-called Kandinsky model (which allows vertices of degree > 4) is NP-complete, thus solving a long-standing open problem. On the positive side, we give an efficient algorithm for several restricted variants, such as graphs of bounded branch width, and a subexponential exact algorithm for general plane graphs.

Timo Kötzing A Solution to Wiehagen's Thesis. Symposium on Theoretical Aspects of Computer Science (STACS) 2014: 494–505
Wiehagen's Thesis in Inductive Inference (1991) essentially states that, for each learning criterion, learning can be done in a normalized, enumerative way. The thesis was not a formal statement and thus did not allow for a formal proof, but support was given by examples of a number of different learning criteria that can be learned by enumeration. Building on recent formalizations of learning criteria, we are now able to formalize Wiehagen's Thesis. We prove the thesis for a wide range of learning criteria, including many popular criteria from the literature. We also show the limitations of the thesis by giving four learning criteria for which the thesis does not hold (and, in two cases, was probably not meant to hold). Beyond the original formulation of the thesis, we also prove stronger versions which allow for many corollaries relating to strongly decisive and conservative learning.

Thomas Bläsius, Rutter, Ignaz A New Perspective on Clustered Planarity as a Combinatorial Embedding Problem. Graph Drawing (GD) 2014: 440–451
The clustered planarity problem (\(c\)-planarity) asks whether a hierarchically clustered graph admits a planar drawing such that the clusters can be nicely represented by regions. We introduce the \(cd\)-tree data structure and give a new characterization of \(c\)-planarity. It leads to efficient algorithms for \(c\)-planarity testing in the following cases. (i) Every cluster and every co-cluster has at most two connected components. (ii) Every cluster has at most five outgoing edges. Moreover, the \(cd\)-tree reveals interesting connections between \(c\)-planarity and planarity with constraints on the order of edges around vertices. On the one hand, this gives rise to a number of new open problems related to \(c\)-planarity; on the other hand, it provides a new perspective on previous results.

Timo Kötzing, Palenta, Raphaela A Map of Update Constraints in Inductive Inference. Algorithmic Learning Theory (ALT) 2014: 40–54
We investigate how different learning restrictions reduce learning power and how the different restrictions relate to one another. We give a complete map for nine different restrictions, both for the cases of complete information learning and set-driven learning. This completes the picture for these well-studied delayable learning restrictions. A further insight is gained by different characterizations of conservative learning in terms of variants of cautious learning. Our analyses greatly benefit from general theorems we give, for example showing that learners with exclusively delayable restrictions can always be assumed total.