
Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. The Compact Genetic Algorithm is Efficient under Extreme Gaussian Noise. IEEE Transactions on Evolutionary Computation 2017
Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, randomized search heuristics such as evolutionary algorithms are a popular choice because they are often assumed to exhibit some kind of resistance to noise. Empirical evidence suggests that some algorithms, such as estimation of distribution algorithms (EDAs), are robust against a scaling of the noise intensity, even without resorting to explicit noise-handling techniques such as resampling. In this paper, we want to support such claims with mathematical rigor. We introduce the concept of graceful scaling, in which the run time of an algorithm scales polynomially with noise intensity. We study a monotone fitness function over binary strings with additive noise taken from a Gaussian distribution. We show that myopic heuristics cannot efficiently optimize the function under arbitrarily intense noise without any explicit noise-handling. Furthermore, we prove that using a population does not help. Finally, we show that a simple EDA called the Compact Genetic Algorithm can overcome the shortsightedness of mutation-only heuristics and scale gracefully with noise. We conjecture that recombinative genetic algorithms also have this property.
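The update mechanism behind the Compact Genetic Algorithm can be illustrated with a minimal sketch (the function and parameter names below are ours, not from the paper): the algorithm keeps one marginal frequency per bit, samples two solutions, compares their noisy fitness values, and nudges each frequency by \(1/K\) toward the apparent winner.

```python
import random

def cga_onemax(n, K, noise_sigma=0.0, max_iters=100_000, rng=None):
    """Sketch of the compact GA on OneMax with additive Gaussian noise.

    K plays the role of the hypothetical population size: each marginal
    frequency moves in steps of 1/K toward the winner of a two-sample
    tournament under noisy fitness evaluations.
    """
    rng = rng or random.Random(0)
    p = [0.5] * n  # one marginal frequency per bit position

    def sample():
        return [1 if rng.random() < p[i] else 0 for i in range(n)]

    def noisy_fitness(x):
        return sum(x) + rng.gauss(0.0, noise_sigma)

    for _ in range(max_iters):
        x, y = sample(), sample()
        if noisy_fitness(x) < noisy_fitness(y):
            x, y = y, x  # x is now the (apparent) winner
        for i in range(n):
            if x[i] != y[i]:
                step = 1.0 / K if x[i] == 1 else -1.0 / K
                p[i] = min(1.0, max(0.0, p[i] + step))
        if all(q >= 1.0 for q in p):  # all frequencies fixed at the optimum
            break
    return p
```

With noise this frequency model averages out fitness fluctuations over many small steps, which is the informal reason the paper's graceful-scaling result is plausible.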

Friedrich, Tobias; Neumann, Frank What’s Hot in Evolutionary Computation. Conference on Artificial Intelligence (AAAI) 2017
We provide a brief overview of some hot topics in the area of evolutionary computation. Our main focus is on recent developments in the areas of combinatorial optimization and real-world applications. Furthermore, we highlight recent progress on the theoretical understanding of evolutionary computing methods.

Friedrich, Tobias; Krohmer, Anton; Rothenberger, Ralf; Sutton, Andrew M. Phase Transitions for Scale-Free SAT Formulas. Conference on Artificial Intelligence (AAAI) 2017
Recently, a number of non-uniform random satisfiability models have been proposed that are closer to practical satisfiability problems in some characteristics. In contrast to uniform random Boolean formulas, scale-free formulas have a variable occurrence distribution that follows a power law. It has been conjectured that such a distribution is a more accurate model for some industrial instances than the uniform random model. Although there already seems to be an awareness of a threshold phenomenon in such models, a complete picture is still lacking. In contrast to the uniform model, the critical density threshold does not lie at a single point, but instead exhibits a functional dependency on the power-law exponent. For scale-free formulas with clauses of length \(k = 2\), we give a lower bound on the phase transition threshold as a function of the scaling parameter. We also perform computational studies that suggest our bound is tight and investigate the critical density for formulas with higher clause lengths. Similar to the uniform model, on formulas with \(k \ge 3\) we find that the phase transition regime corresponds to a set of formulas that are difficult to solve by backtracking search.

Friedrich, Tobias; Kötzing, Timo; Quinzan, Francesco; Sutton, Andrew Michael Resampling vs. Recombination: A Statistical Run Time Estimation. Foundations of Genetic Algorithms (FOGA) 2017
Noise is pervasive in real-world optimization, but there is still little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques, such as statistical resampling. In this paper, we report on several statistical models and theoretical results that help to clarify this reciprocal relationship for a collection of randomized search heuristics on noisy functions. We consider the optimization of pseudo-Boolean functions under additive posterior Gaussian noise and explore the trade-off between noise reduction and the computational cost of resampling. We first perform experiments to find the optimal parameters at a given noise intensity for a mutation-only evolutionary algorithm, a genetic algorithm employing recombination, an estimation of distribution algorithm (EDA), and an ant colony optimization algorithm. We then observe how the optimal parameter depends on the noise intensity for the different algorithms. Finally, we locate the point where statistical resampling costs more than it is worth in terms of run time. We find that the EA requires the highest number of resamples to obtain the best speedup, whereas crossover reduces both the run time and the number of resamples required. Most surprisingly, we find that EDA-like algorithms require no resampling, and can handle noise implicitly.
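The statistical effect at the heart of resampling can be shown in a few lines (a generic sketch, not code from the paper): averaging \(m\) independent noisy evaluations shrinks the standard deviation of the fitness estimate by a factor of \(\sqrt{m}\), at \(m\) times the evaluation cost.

```python
import random
import statistics

def resampled_fitness(f, x, m, sigma, rng):
    """Average m independent noisy evaluations of f(x).

    The standard deviation of the averaged estimate is sigma / sqrt(m),
    which is the noise-reduction side of the resampling trade-off.
    """
    return sum(f(x) + rng.gauss(0.0, sigma) for _ in range(m)) / m

# Illustration: single evaluations vs. 25-fold resampling on noise-free OneMax.
rng = random.Random(42)
f = lambda x: sum(x)          # true fitness function
x = [1] * 20                  # true fitness is 20
single = [resampled_fitness(f, x, 1, 4.0, rng) for _ in range(2000)]
avg25 = [resampled_fitness(f, x, 25, 4.0, rng) for _ in range(2000)]
```

Empirically, `single` scatters with standard deviation about 4, while `avg25` scatters with about 0.8 — a fivefold reduction bought with 25 times the evaluations, which is exactly the trade-off the paper quantifies per algorithm.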

Pourhassan, Mojgan; Friedrich, Tobias; Neumann, Frank On the Use of the Dual Formulation for Minimum Vertex Cover in Evolutionary Algorithms. Foundations of Genetic Algorithms (FOGA) 2017
We consider the weighted minimum vertex cover problem and investigate how its dual formulation can be exploited to design evolutionary algorithms that provably obtain a 2-approximation. Investigating multi-valued representations, we show that variants of randomized local search and the (1+1) EA achieve this goal in expected pseudo-polynomial time. In order to speed up the process, we consider the use of step size adaptation in both algorithms and show that RLS obtains a 2-approximation in expected polynomial time, while the (1+1) EA still encounters a pseudo-polynomial lower bound.

Friedrich, Tobias; Kötzing, Timo; Wagner, Markus A Simple Bet-and-run Strategy for Speeding Up Traveling Salesperson and Minimum Vertex Cover. Conference on Artificial Intelligence (AAAI) 2017
A common strategy for improving optimization algorithms is to restart the algorithm when it is believed to be trapped in an inferior part of the search space. However, while specific restart strategies have been developed for specific problems (and specific algorithms), restarts are typically not regarded as a general tool to speed up an optimization algorithm. In fact, many optimization algorithms do not employ restarts at all. Recently, bet-and-run was introduced in the context of mixed-integer programming, where first a number of short runs with randomized initial conditions are made, and then the most promising of these runs is continued. In this article, we consider two classical NP-complete combinatorial optimization problems, traveling salesperson and minimum vertex cover, and study the effectiveness of different bet-and-run strategies. In particular, our restart strategies neither take any problem knowledge into account nor are tailored to the optimization algorithm. Therefore, they can be used off-the-shelf. We observe that state-of-the-art solvers for these problems can benefit significantly from restarts on standard benchmark instances.
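The bet-and-run scheme described above is algorithm-agnostic and can be sketched generically (all names and the toy objective below are illustrative, not from the paper): start \(k\) runs from random initial solutions, advance each for \(t_1\) steps, then continue only the most promising one for \(t_2\) further steps.

```python
import random

def bet_and_run(run_step, init, k, t1, t2, rng=None):
    """Generic bet-and-run restart strategy (sketch).

    init(rng) produces a random initial state; run_step(state) advances one
    step and returns (new_state, value), where lower values are better.
    """
    rng = rng or random.Random(0)
    runs = []
    for _ in range(k):
        state, value = init(rng), float("inf")
        for _ in range(t1):          # the "bet" phase: k short runs
            state, value = run_step(state)
        runs.append((value, state))
    value, state = min(runs, key=lambda r: r[0])
    for _ in range(t2):              # the "run" phase: continue the best
        state, value = run_step(state)
    return value, state

# Toy usage: integer hill-climbing toward the minimum of (x - 7)^2.
def step(x):
    x = x - 1 if x > 7 else (x + 1 if x < 7 else 7)
    return x, (x - 7) ** 2

best_value, best_state = bet_and_run(step, lambda r: r.randint(0, 100),
                                     k=5, t1=10, t2=100)
```

Because the strategy only compares objective values, it needs no problem knowledge, which matches the off-the-shelf claim in the abstract.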

Friedrich, Tobias; Kötzing, Timo; Lagodzinski, J. A. Gregor; Neumann, Frank; Schirneck, Martin Analysis of the (1+1) EA on Subclasses of Linear Functions under Uniform and Linear Constraints. Foundations of Genetic Algorithms (FOGA) 2017
Linear functions have gained a lot of attention in the area of run time analysis of evolutionary computation methods, and the corresponding analyses have provided many effective tools for analyzing more complex problems. In this paper, we consider the behavior of the classical (1+1) Evolutionary Algorithm for linear functions under a linear constraint. We show tight bounds for the case where both the objective function and the constraint are given by the OneMax function, and present upper as well as lower bounds for the general case. Furthermore, we also consider the LeadingOnes fitness function.

Arndt, Tobias; Hafner, Danijar; Kellermeier, Thomas; Krogmann, Simon; Razmjou, Armin; Krejca, Martin S.; Rothenberger, Ralf; Friedrich, Tobias Probabilistic Routing for On-Street Parking Search. European Symposium on Algorithms (ESA) 2016: 6:1–6:13
An estimated 30% of urban traffic is caused by the search for parking spots [10]. Traffic could be reduced by suggesting effective routes leading along potential parking spots. In this paper, we formalize parking search as a probabilistic problem on a road graph and show that it is NP-complete. We explore heuristics that optimize for the driving duration and the walking distance to the destination. Routes are constrained to reach a certain probability threshold of finding a spot. Empirically estimated probabilities of successful parking attempts are provided by TomTom on a per-street basis. We release these probabilities as a dataset of about 80,000 roads covering the Berlin area. This makes it possible, for the first time, to evaluate parking search algorithms on a real road network with realistic probabilities. However, for many other areas, parking probabilities are not openly available. Because they are costly to collect, we propose an algorithm that relies on conventional road attributes only. Our experiments show that this algorithm comes within a factor of 1.3 of the baseline in our cost measure. This leads to the conclusion that conventional road attributes may be sufficient to compute reasonably good parking search routes.

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S. EDAs cannot be Balanced and Stable. Genetic and Evolutionary Computation Conference (GECCO) 2016: 1139–1146
Estimation of Distribution Algorithms (EDAs) work by iteratively updating a distribution over the search space with the help of samples from each iteration. Up to now, theoretical analyses of EDAs are scarce and present run time results only for specific EDAs. We propose a new framework for EDAs that captures the idea of several known optimizers, including PBIL, UMDA, \(\lambda\)-MMAS-IB, cGA, and \((1,\lambda)\)-EA. Our focus is on analyzing two core features of EDAs: a balanced EDA is sensitive to signals in the fitness; a stable EDA remains uncommitted under a biasless fitness function. We prove that no EDA can be both balanced and stable. The LeadingOnes function is a prime example where, at the beginning of the optimization, the fitness function shows no bias for many bits. Since many well-known EDAs are balanced and thus not stable, they are not well-suited to optimize LeadingOnes. We give a stable EDA which optimizes LeadingOnes within a time of \(O(n \log n)\).

Bläsius, Thomas; Friedrich, Tobias; Krohmer, Anton Hyperbolic Random Graphs: Separators and Treewidth. European Symposium on Algorithms (ESA) 2016: 15:1–15:16
When designing and analyzing algorithms, one can obtain better and more realistic results for practical instances by assuming a certain probability distribution on the input. The worst-case runtime is then replaced by the expected runtime or by bounds that hold with high probability (whp), i.e., with probability \(1 - O(1/n)\), on the random input. Hyperbolic random graphs can be used to model complex real-world networks, as they share many important properties such as a small diameter, a large clustering coefficient, and a power-law degree distribution. Divide and conquer is an important algorithmic design principle that works particularly well if the instance admits small separators. We show that hyperbolic random graphs in fact have comparatively small separators. More precisely, we show that a hyperbolic random graph can be expected to have a balanced separator hierarchy with separators of size \(O(\sqrt{n^{3-\beta}})\), \(O(\log n)\), and \(O(1)\) if \(2 < \beta < 3\), \(\beta = 3\), and \(3 < \beta\), respectively (\(\beta\) is the power-law exponent). We infer that these graphs have whp a treewidth of \(O(\sqrt{n^{3-\beta}})\), \(O(\log^2 n)\), and \(O(\log n)\), respectively. For \(2 < \beta < 3\), this matches a known lower bound. For the more realistic (but harder to analyze) binomial model, we still prove a sublinear bound on the treewidth. To demonstrate the usefulness of our results, we apply them to obtain fast matching algorithms and an approximation scheme for Independent Set.

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. The Benefit of Recombination in Noisy Evolutionary Search. Genetic and Evolutionary Computation Conference (GECCO) 2016: 161–162
Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, randomized search heuristics such as evolutionary algorithms are a popular choice because they are often assumed to exhibit some kind of resistance to noise. Empirical evidence suggests that some algorithms, such as estimation of distribution algorithms (EDAs), are robust against a scaling of the noise intensity, even without resorting to explicit noise-handling techniques such as resampling. In this paper, we want to support such claims with mathematical rigor. We introduce the concept of graceful scaling, in which the run time of an algorithm scales polynomially with noise intensity. We study a monotone fitness function over binary strings with additive noise taken from a Gaussian distribution. We show that myopic heuristics cannot efficiently optimize the function under arbitrarily intense noise without any explicit noise-handling. Furthermore, we prove that using a population does not help. Finally, we show that a simple EDA called the Compact Genetic Algorithm can overcome the shortsightedness of mutation-only heuristics and scale gracefully with noise. We conjecture that recombinative genetic algorithms also have this property.

Chauhan, Ankit; Friedrich, Tobias; Rothenberger, Ralf Greed is Good for Deterministic Scale-Free Networks. Foundations of Software Technology and Theoretical Computer Science (FSTTCS) 2016
Large real-world networks typically follow a power-law degree distribution. To study such networks, numerous random graph models have been proposed. However, real-world networks are not drawn at random. Therefore, Brach, Cygan, Lacki, and Sankowski [SODA 2016] introduced two natural deterministic conditions: (1) a power-law upper bound on the degree distribution (PLB-U) and (2) power-law neighborhoods, that is, the degree distribution of the neighbors of each vertex is also upper bounded by a power law (PLB-N). They showed that many real-world networks satisfy both deterministic properties and exploited them to design faster algorithms for a number of classical graph problems. We complement the work of Brach et al. by showing that some well-studied random graph models exhibit both of the mentioned PLB properties and additionally also a power-law lower bound on the degree distribution (PLB-L). All three properties hold with high probability for Chung-Lu Random Graphs and Geometric Inhomogeneous Random Graphs and almost surely for Hyperbolic Random Graphs. As a consequence, all results of Brach et al. also hold with high probability or almost surely for those random graph classes. In the second part of this work we study three classical NP-hard combinatorial optimization problems on PLB networks. It is known that on general graphs with maximum degree \(\Delta\), a greedy algorithm, which chooses nodes in the order of their degree, only achieves an \(\Omega(\ln \Delta)\)-approximation for Minimum Vertex Cover and Minimum Dominating Set, and an \(\Omega(\Delta)\)-approximation for Maximum Independent Set. We prove that the PLB-U property suffices for the greedy approach to achieve a constant-factor approximation for all three problems. We also show that all three combinatorial optimization problems are APX-complete even if all PLB properties hold; hence, a PTAS cannot be expected unless P = NP.

Friedrich, Tobias Scale-Free Networks, Hyperbolic Geometry and Efficient Algorithms. Mathematical Foundations of Computer Science (MFCS) 2016: 4:1–4:3
The node degrees of large real-world networks often follow a power-law distribution. Such scale-free networks can be social networks, internet topologies, the web graph, power grids, or many other networks from literally hundreds of domains. The talk will introduce several mathematical models of scale-free networks (e.g. preferential attachment graphs, Chung-Lu graphs, hyperbolic random graphs) and analyze some of their properties (e.g. diameter, average distance, clustering). We then present several algorithms and distributed processes on and for these network models (e.g. rumor spreading, load balancing, de-anonymization, embedding) and discuss a number of open problems. The talk assumes no prior knowledge about scale-free networks, distributed computing or hyperbolic geometry.

Friedrich, Tobias; Kötzing, Timo; Quinzan, Francesco; Sutton, Andrew M. Ant Colony Optimization Beats Resampling on Noisy Functions. Genetic and Evolutionary Computation Conference (GECCO) 2016: 3–4
Despite the pervasiveness of noise in real-world optimization, there is little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques such as statistical resampling. Ant Colony Optimization (ACO) algorithms are claimed to be particularly well-suited to dynamic and noisy problems, even without explicit noise-handling techniques. In this work, we empirically investigate the trade-offs between resampling and the noise-handling abilities of ACO algorithms. Our main focus is to locate the point where resampling costs more than it is worth.

Bläsius, Thomas; Friedrich, Tobias; Krohmer, Anton; Laue, Sören Efficient Embedding of Scale-Free Graphs in the Hyperbolic Plane. European Symposium on Algorithms (ESA) 2016: 16:1–16:18
Hyperbolic geometry appears to be intrinsic in many large real networks. We construct and implement a new maximum likelihood estimation algorithm that embeds scale-free graphs in the hyperbolic plane. All previous embedding algorithms of this kind require a runtime of \(\Omega(n^2)\). Our algorithm achieves quasilinear runtime, which makes it the first algorithm that can embed networks with hundreds of thousands of nodes in less than one hour. We demonstrate the performance of our algorithm on artificial and real networks. In all typical metrics, such as log-likelihood and greedy routing, our algorithm discovers embeddings that are very close to the ground truth.

Dang, Duc-Cuong; Friedrich, Tobias; Krejca, Martin S.; Kötzing, Timo; Lehre, Per Kristian; Oliveto, Pietro S.; Sudholt, Dirk; Sutton, Andrew Michael Escaping Local Optima with Diversity Mechanisms and Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 645–652
Population diversity is essential for the effective use of any crossover operator. We compare seven commonly used diversity mechanisms and prove rigorous run time bounds for the \((\mu+1)\) GA using uniform crossover on the fitness function \(Jump_k\). All previous results in this context only hold for unrealistically low crossover probability \(p_c = O(k/n)\), while we give analyses for the setting of constant \(p_c < 1\) in all but one case. Our bounds show a dependence on the problem size \(n\), the jump length \(k\), the population size \(\mu\), and the crossover probability \(p_c\). For the typical case of constant \(k > 2\) and constant \(p_c\), we can compare the resulting expected optimisation times for different diversity mechanisms assuming an optimal choice of \(\mu\): \(O(n^{k-1})\) for duplicate elimination/minimisation, \(O(n^2 \log n)\) for maximising the convex hull, \(O(n \log n)\) for deterministic crowding (assuming \(p_c = k/n\)), \(O(n \log n)\) for maximising the Hamming distance, \(O(n \log n)\) for fitness sharing, and \(O(n \log n)\) for the single-receiver island model. This proves a sizeable advantage of all variants of the \((\mu+1)\) GA compared to the (1+1) EA, which requires \(\Theta(n^k)\). In a short empirical study we confirm that the asymptotic differences can also be observed experimentally.

Dang, Duc-Cuong; Lehre, Per Kristian; Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Oliveto, Pietro S.; Sudholt, Dirk; Sutton, Andrew M. Emergence of Diversity and its Benefits for Crossover in Genetic Algorithms. Parallel Problem Solving From Nature (PPSN) 2016: 890–900
Population diversity is essential for avoiding premature convergence in Genetic Algorithms (GAs) and for the effective use of crossover. Yet the dynamics of how diversity emerges in populations are not well understood. We use rigorous runtime analysis to gain insight into population dynamics and GA performance for a standard \((\mu+1)\) GA and the \(Jump_k\) test function. By studying the stochastic process underlying the size of the largest collection of identical genotypes we show that the interplay of crossover followed by mutation may serve as a catalyst leading to a sudden burst of diversity. This leads to improvements of the expected optimisation time of order \(\Omega(n/ \log n)\) compared to mutationonly algorithms like the \((1+1)\) EA.

Friedrich, Tobias; Kötzing, Timo; Sutton, Andrew M. On the Robustness of Evolving Populations. Parallel Problem Solving From Nature (PPSN) 2016: 771–781
Most theoretical work that studies the benefit of recombination focuses on the ability of crossover to speed up optimization time on specific search problems. In this paper, we take a slightly different perspective and investigate recombination in the context of evolving solutions that exhibit mutational robustness, i.e., they display insensitivity to small perturbations. Various models in population genetics have demonstrated that increasing the effective recombination rate promotes the evolution of robustness. We show that this result also holds in the context of evolutionary computation by proving that crossover promotes the evolution of robust solutions in the standard \((\mu+1)\) GA. Surprisingly, our results show that the effect is present even when robust solutions are at a selective disadvantage due to lower fitness values.

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Nallaperuma, Samadhi; Neumann, Frank; Schirneck, Martin Fast Building Block Assembly by Majority Vote Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 661–668
Different works have shown how crossover can help with building block assembly. Typically, crossover might get lucky and select good building blocks from each parent, but such lucky choices are rare. In this work we consider a crossover operator which works on three parent individuals. In each component, the offspring inherits the value present in the majority of the parents; thus, we call this crossover operator majority vote. We show that, if good components are sufficiently prevalent in the individuals, majority vote creates an optimal individual with high probability. Furthermore, we show that this process can be amplified: as long as components are good independently and with probability at least \(1/2+\delta\), we require only \(O(\log(1/\delta) + \log \log n)\) successive stages of majority vote to create an optimal individual with high probability! We show how this applies in two scenarios. The first scenario is the Jump test function. With sufficient diversity, we get an optimization time of \(O(n \log n)\) even for jump sizes as large as \(O(n^{1/2-\epsilon})\). Our second scenario is a family of vertex cover instances. Majority vote optimizes this family efficiently, while local searches fail and only highly specialized two-parent crossovers are successful.
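The operator itself is simple enough to state directly (a minimal sketch for bit strings; the function name is ours): each offspring position takes the bit value shared by at least two of the three parents.

```python
def majority_vote(parents):
    """Three-parent majority vote crossover on bit strings.

    Each offspring component inherits the value present in the majority of
    the parents; a bit that is "good" (correct) independently with
    probability p in every parent is good in the offspring with probability
    3p^2(1-p) + p^3 > p whenever p > 1/2, which is the amplification effect
    the abstract refers to.
    """
    a, b, c = parents
    assert len(a) == len(b) == len(c), "parents must have equal length"
    return [1 if x + y + z >= 2 else 0 for x, y, z in zip(a, b, c)]

# Example: three parents that each carry a different wrong bit.
child = majority_vote([[1, 1, 0, 0],
                       [1, 0, 1, 0],
                       [1, 0, 0, 1]])
```

In the example, only position 0 is set in a majority of parents, so the offspring is `[1, 0, 0, 0]`.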

Gao, Wanru; Friedrich, Tobias; Neumann, Frank Fixed-Parameter Single Objective Search Heuristics for Minimum Vertex Cover. Parallel Problem Solving From Nature (PPSN) 2016: 740–750
We consider how well-known branching approaches for the classical minimum vertex cover problem can be turned into randomized initialization strategies with provable performance guarantees, and we evaluate them experimentally. Furthermore, we show how these techniques can be built into local search components and analyze a basic local search variant that is similar to a state-of-the-art approach called NuMVC. Our experimental results for the two local search approaches show that making use of more complex branching strategies in the local search component can lead to better results on various benchmark graphs.

Bläsius, Thomas; Friedrich, Tobias; Schirneck, Martin The Parameterized Complexity of Dependency Detection in Relational Databases. International Symposium on Parameterized and Exact Computation (IPEC) 2016
We study the parameterized complexity of classical problems that arise in the profiling of relational data. Namely, we characterize the complexity of detecting unique column combinations (candidate keys), functional dependencies, and inclusion dependencies with the solution size as parameter. While the discovery of uniques and functional dependencies, respectively, turns out to be W[2]-complete, the detection of inclusion dependencies is one of the first natural problems proven to be complete for the class W[3]. As a side effect, our reductions give insights into the complexity of enumerating all minimal unique column combinations or functional dependencies.

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. Robustness of Ant Colony Optimization to Noise. Evolutionary Computation 2016: 237–254
Recently, Ant Colony Optimization (ACO) algorithms have been proven to be efficient in uncertain environments, such as noisy or dynamically changing fitness functions. Most of these analyses focus on combinatorial problems, such as path finding. We analyze an ACO algorithm in a setting where we try to optimize the simple OneMax test function, but with additive posterior noise sampled from a Gaussian distribution. Without noise the classical \((\mu+1)\)-EA outperforms any ACO algorithm, with smaller \(\mu\) being better; however, with large noise, the \((\mu+1)\)-EA fails, even for high values of \(\mu\) (which are known to help against small noise). In this paper we show that ACO is able to deal with arbitrarily large noise in a graceful manner; that is, as long as the evaporation factor \(p\) is small enough, depending on the parameter \(\delta^2\) of the noise and the dimension \(n\) of the search space (\(p = o(1/(n(n + \delta \log n)^2 \log n))\)), optimization will be successful.

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. Graceful Scaling on Uniform versus Steep-Tailed Noise. Parallel Problem Solving From Nature (PPSN) 2016: 761–770
Recently, different evolutionary algorithms (EAs) have been analyzed in noisy environments. The most frequently used noise model for this was additive posterior noise (noise added after the fitness evaluation) taken from a Gaussian distribution. In particular, for this setting it was shown that the \((\mu + 1)\)-EA on OneMax does not scale gracefully (higher noise cannot efficiently be compensated for by higher \(\mu\)). In this paper we want to understand whether there is anything special about the Gaussian distribution that makes the \((\mu + 1)\)-EA not scale gracefully. We keep the setting of posterior noise, but we look at other distributions. We see that for exponential tails the \((\mu + 1)\)-EA on OneMax also does not scale gracefully, for similar reasons as in the case of Gaussian noise. On the other hand, for uniform distributions (as well as other, similar distributions) we see that the \((\mu + 1)\)-EA on OneMax does scale gracefully, indicating the importance of the noise model.

Fountoulakis, Nikolaos; Friedrich, Tobias; Hermelin, Danny On the average-case complexity of parameterized clique. Theoretical Computer Science 2015: 18–29
The \(k\)-Clique problem is a fundamental combinatorial problem that plays a prominent role in classical as well as in parameterized complexity theory. It is among the most well-known NP-complete and W[1]-complete problems. Moreover, its average-case complexity analysis has created a long thread of research since the 1970s. Here, we continue this line of research by studying the dependence of the average-case complexity of the \(k\)-Clique problem on the parameter \(k\). To this end, we define two natural parameterized analogs of efficient average-case algorithms. We then show that \(k\)-Clique admits both analogs for Erdős–Rényi random graphs of arbitrary density. We also show that \(k\)-Clique is unlikely to admit either of these analogs for some specific computable input distribution.

Friedrich, Tobias; Neumann, Frank Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms. Evolutionary Computation 2015: 543–558
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function \(f\) under a given set of constraints. In this paper, we investigate the runtime of a simple single objective evolutionary algorithm called (1+1) EA and a multi-objective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints we show that the GSEMO achieves a \((1-1/e)\)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of \(k \ge 2\) matroids, we show that the (1+1) EA achieves a \(1/(k + \delta)\)-approximation in expected polynomial time for any constant \(\delta > 0\). Turning to non-monotone symmetric submodular functions with \(k \ge 1\) matroid intersection constraints, we show that the GSEMO achieves a \(1/((k+2)(1+\epsilon))\)-approximation in expected time \(O(n^{k+6} \log(n)/\epsilon)\).

Friedrich, Tobias; Neumann, Frank Maximizing Submodular Functions under Matroid Constraints by Multi-Objective Evolutionary Algorithms. Evolutionary Computation 2015: 543–558
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function \(f\) under a given set of constraints. In this paper, we investigate the runtime of a multi-objective evolutionary algorithm called GSEMO until it has obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints we show that GSEMO achieves a \((1 - 1/e)\)-approximation in expected time \(O(n^2(\log n + k))\), where \(k\) is the value of the given constraint. For the case of non-monotone submodular functions with \(k\) matroid intersection constraints, we show that GSEMO achieves a \(1/(k + 2 + 1/k + \epsilon)\)-approximation in expected time \(O(n^{k+5} \log(n)/\epsilon)\).

Friedrich, Tobias; Katzmann, Maximilian; Krohmer, Anton Unbounded Discrepancy of Deterministic Random Walks on Grids. International Symposium on Algorithms and Computation (ISAAC) 2015: 212–222
Random walks are frequently used in randomized algorithms. We study a derandomized variant of a random walk on graphs, called the rotor-router model. In this model, instead of distributing tokens randomly, each vertex serves its neighbors in a fixed deterministic order. For most setups, both processes behave remarkably similarly: starting with the same initial configuration, the number of tokens in the rotor-router model deviates only slightly from the expected number of tokens on the corresponding vertex in the random walk model. The maximal difference over all vertices and all times is called the single vertex discrepancy. Cooper and Spencer (2006) showed that on \(Z^d\) the single vertex discrepancy is only a constant \(c_d\). Other authors also determined the precise value of \(c_d\) for \(d = 1, 2\). All these results, however, assume that initially all tokens are placed on only one partition of the bipartite graph \(Z^d\). We show that this assumption is crucial by proving that otherwise the single vertex discrepancy can become arbitrarily large. For all dimensions \(d \ge 1\) and arbitrary discrepancies \(\ell \ge 0\), we construct configurations that reach a discrepancy of at least \(\ell\).
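The rotor-router model lends itself to a compact simulation (a generic sketch on an arbitrary finite graph; the paper's setting is the infinite grid \(Z^d\)): each vertex forwards its tokens to its neighbors in a fixed cyclic order, advancing its rotor by one position per token, and all vertices fire in parallel each step.

```python
def rotor_router(neighbors, tokens, steps):
    """Parallel rotor-router walk (sketch).

    neighbors[v] is the fixed cyclic service order of v's neighbors;
    tokens maps each vertex to its token count. In every step, each vertex
    sends all of its tokens out, one neighbor per token in rotor order, so
    tokens are split deterministically instead of at random.
    """
    rotor = {v: 0 for v in neighbors}  # next neighbor index to serve
    for _ in range(steps):
        nxt = {v: 0 for v in neighbors}
        for v, cnt in tokens.items():
            for _ in range(cnt):
                u = neighbors[v][rotor[v] % len(neighbors[v])]
                rotor[v] += 1
                nxt[u] += 1
        tokens = nxt
    return tokens

# Example: a 4-cycle; two tokens at vertex 0 are split across both neighbors.
cycle = {v: [(v + 1) % 4, (v - 1) % 4] for v in range(4)}
out = rotor_router(cycle, {0: 2, 1: 0, 2: 0, 3: 0}, steps=1)
```

The split in the example mirrors the expectation of the random walk exactly; the paper's point is that over time and bad initial configurations, the deviation between the two processes need not stay bounded.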

Paixão, Tiago; Badkobeh, Golnaz; Barton, Nick H.; Çörüş, Doğan; Dang, Duc-Cuong; Friedrich, Tobias; Lehre, Per Kristian; Sudholt, Dirk; Sutton, Andrew; Trubenová, Barbora Toward a unifying framework for evolutionary processes. Journal of Theoretical Biology 2015: 28-43
The theories of population genetics and evolutionary computation have been evolving separately for nearly 30 years. Many results have been independently obtained in both fields, and many others are unique to their respective field. We aim to bridge this gap by developing a unifying framework for evolutionary processes that allows both evolutionary algorithms and population genetics models to be cast in the same formal framework. The framework we present here decomposes the evolutionary process into its several components in order to facilitate the identification of similarities between different models. In particular, we propose a classification of evolutionary operators based on the defining properties of the different components. We cast several commonly used operators from both fields into this common framework. Using this, we map different evolutionary and genetic algorithms to different evolutionary regimes and identify candidates with the most potential for the translation of results between the fields. This provides a unified description of evolutionary processes and represents a stepping stone towards new tools and results for both fields.

Friedrich, Tobias; Krohmer, Anton Parameterized clique on inhomogeneous random graphs. Discrete Applied Mathematics 2015: 130-138
Finding cliques in graphs is a classical problem which is, in general, NP-hard and parameterized intractable. In typical applications like social networks or biological networks, however, the considered graphs are scale-free, i.e., their degree sequence follows a power law. Their specific structure can be algorithmically exploited and makes it possible to solve clique much more efficiently. We prove that on inhomogeneous random graphs with \(n\) nodes and power-law exponent \(\beta\), cliques of size \(k\) can be found in time \(O(n)\) for \(\beta \ge 3\) and in time \(O(n\,e^{k^4})\) for \(2 < \beta < 3\).

Friedrich, Tobias; Hercher, Christian On the kernel size of clique cover reductions for random intersection graphs. Journal of Discrete Algorithms 2015: 128-136
Covering all edges of a graph by a minimum number of cliques is a well-known NP-hard problem. For the parameter \(k\) being the maximal number of cliques to be used, the problem becomes fixed-parameter tractable. However, assuming the Exponential Time Hypothesis, there is no kernel of subexponential size in the worst case. We study the average kernel size for random intersection graphs with \(n\) vertices, edge probability \(p\), and clique covers of size \(k\). We consider the well-known set of reduction rules of Gramm, Guo, Hüffner, and Niedermeier (2009) and show that with high probability they reduce the graph completely if \(p\) is bounded away from 1 and \(k < c \log n\) for some constant \(c > 0\). This shows that for large probabilistic graph classes like random intersection graphs the expected kernel size can be substantially smaller than the known exponential worst-case bounds.

Friedrich, Tobias; Wagner, Markus Seeding the initial population of multi-objective evolutionary algorithms: A computational study. Applied Soft Computing 2015: 223-230
Most experimental studies initialize the population of evolutionary algorithms with random genotypes. In practice, however, optimizers are typically seeded with good candidate solutions, either previously known or created according to some problem-specific method. This seeding has been studied extensively for single-objective problems. For multi-objective problems, however, very little literature is available on the approaches to seeding and their individual benefits and disadvantages. In this article, we try to narrow this gap via a comprehensive computational study on common real-valued test functions. We investigate the effect of two seeding techniques for five algorithms on 48 optimization problems with 2, 3, 4, 6, and 8 objectives. We observe that some functions (e.g., DTLZ4 and the LZ family) benefit significantly from seeding, while others (e.g., WFG) profit less. The advantage of seeding also depends on the examined algorithm.

Friedrich, Tobias; Krohmer, Anton Cliques in hyperbolic random graphs. International Conference on Computer Communications (INFOCOM) 2015: 1544-1552
Most complex real-world networks display scale-free features. This motivated the study of numerous random graph models with a power-law degree distribution. There is, however, no established and simple model which also has a high clustering of vertices as typically observed in real data. Hyperbolic random graphs bridge this gap. This natural model has recently been introduced by Papadopoulos, Krioukov, Boguñá, and Vahdat (INFOCOM, pp. 2973-2981, 2010) and has been shown theoretically and empirically to fulfill all typical properties of real-world networks, including power-law degree distribution and high clustering. We study cliques in hyperbolic random graphs \(G\) and present new results on the expected number of \(k\)-cliques \(E[K_k]\) and the size of the largest clique \(\omega(G)\). We observe that there is a phase transition at power-law exponent \(\gamma = 3\). More precisely, for \(\gamma \in (2,3)\) we prove \(E[K_k] = n^{k(3-\gamma)/2}\,\Theta(k)^{-k}\) and \(\omega(G) = \Theta(n^{(3-\gamma)/2})\), while for \(\gamma \ge 3\) we prove \(E[K_k] = n\,\Theta(k)^{-k}\) and \(\omega(G) = \Theta(\log(n)/\log\log n)\). We empirically compare the \(\omega(G)\) value of several scale-free random graph models with real-world networks. Our experiments show that the \(\omega(G)\) predictions by hyperbolic random graphs are much closer to the data than those of other scale-free random graph models.

Wagner, Markus; Bringmann, Karl; Friedrich, Tobias; Neumann, Frank Efficient optimization of many objectives by approximation-guided evolution. European Journal of Operational Research 2015: 465-479
Multi-objective optimization problems arise frequently in applications, but can often only be solved approximately by heuristic approaches. Evolutionary algorithms have been widely used to tackle multi-objective problems. These algorithms use different measures to ensure diversity in the objective space but are not guided by a formal notion of approximation. We present a framework for evolutionary multi-objective optimization that allows working with a formal notion of approximation. This approximation-guided evolutionary algorithm (AGE) has a worst-case runtime linear in the number of objectives and works with an archive that is an approximation of the non-dominated objective vectors seen during the run of the algorithm. Our experimental results show that AGE finds competitive or better solutions not only regarding the achieved approximation, but also regarding the total hypervolume. For all considered test problems, even for many (i.e., more than ten) dimensions, AGE discovers a good approximation of the Pareto front. This is not the case for established algorithms such as NSGA-II, SPEA2, and SMS-EMOA. In this paper we compare AGE with two additional algorithms that use very fast hypervolume approximations to guide their search. This significantly speeds up the runtime of the hypervolume-based algorithms, which now allows a comparison of the underlying selection schemes.

Bringmann, Karl; Friedrich, Tobias; Hoefer, Martin; Rothenberger, Ralf; Sauerwald, Thomas Ultra-Fast Load Balancing on Scale-Free Networks. International Colloquium on Automata, Languages and Programming (ICALP) 2015: 516-527
The performance of large distributed systems crucially depends on efficiently balancing their load. This has motivated a large amount of theoretical research on how an imbalanced load vector can be smoothed with local algorithms. For technical reasons, the vast majority of previous work focuses on regular (or almost regular) graphs, including symmetric topologies such as grids and hypercubes, and ignores the fact that large networks are often highly heterogeneous. We model large scale-free networks by Chung-Lu random graphs and analyze a simple local algorithm for iterative load balancing. On \(n\)-node graphs our distributed algorithm balances the load within \(O((\log\log n)^2)\) steps. It does not need to know the exponent \(\beta \in (2,3)\) of the power-law degree distribution or the weights \(w_i\) of the graph model. To the best of our knowledge, this is the first result which shows that load balancing can be done in double-logarithmic time on realistic graph classes.
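For contrast with the heterogeneous setting studied here, the basic idea of iterative local balancing is easy to sketch. The following is an illustrative toy, not the paper's protocol: diffusion load balancing on a path graph, where each edge moves a fixed fraction of the load difference per round (the path topology and the fraction 1/3 are assumptions chosen for illustration).

```python
# Toy diffusion load balancing on a path graph: in each round, every
# edge (v, v+1) moves a third of the load difference toward the
# less-loaded endpoint. Total load is conserved exactly.

def diffusion_round(load):
    n = len(load)
    flow = [0.0] * n
    for v in range(n - 1):                  # balance each edge (v, v+1)
        f = (load[v] - load[v + 1]) / 3.0   # fraction moved over the edge
        flow[v] -= f
        flow[v + 1] += f
    return [l + f for l, f in zip(load, flow)]

load = [8.0, 0.0, 0.0, 0.0]                 # all load starts at one node
for _ in range(20):
    load = diffusion_round(load)
print(load, sum(load))                      # load spreads toward 2.0 each
```

On expander-like or scale-free topologies such local schemes converge far faster than on a path, which is the regime the \(O((\log\log n)^2)\) bound addresses.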

Friedrich, Tobias; Krohmer, Anton On the Diameter of Hyperbolic Random Graphs. International Colloquium on Automata, Languages and Programming (ICALP) 2015: 614-625
Large real-world networks are typically scale-free. Recent research has shown that such graphs are described best in a geometric space. More precisely, the internet can be mapped to a hyperbolic space such that geometric greedy routing performs close to optimal (Boguñá, Papadopoulos, and Krioukov. Nature Communications, 1:62, 2010). This observation pushed the interest in hyperbolic networks as a natural model for scale-free networks. Hyperbolic random graphs follow a power-law degree distribution with controllable exponent \(\beta\) and show high clustering (Gugelmann, Panagiotou, and Peter. ICALP, pp. 573-585, 2012). For understanding the structure of the resulting graphs and for analyzing the behavior of network algorithms, the next question is bounding the size of the diameter. The only known explicit bound is \(O((\log n)^{32/((3-\beta)(5-\beta))})\) (Kiwi and Mitsche. ANALCO, pp. 26-39, 2015). We present two much simpler proofs for an improved upper bound of \(O((\log n)^{2/(3-\beta)})\) and a lower bound of \(\Omega(\log n)\).

Bringmann, Karl; Friedrich, Tobias; Klitzke, Patrick Efficient computation of two-dimensional solution sets maximizing the epsilon-indicator. Congress on Evolutionary Computation (CEC) 2015: 970-977
The majority of empirical comparisons of multi-objective evolutionary algorithms (MOEAs) are performed on synthetic benchmark functions. One of the advantages of synthetic test functions is the a priori knowledge of the optimal Pareto front. This allows measuring the proximity to the optimal front for the solution sets returned by the different MOEAs. Such a comparison is only meaningful if the cardinality of all solution sets is bounded by some fixed \(k\). In order to compare MOEAs to the theoretical optimum achievable with \(k\) solutions, we determine the best possible \(\epsilon\)-indicator values achievable with solution sets of size \(k\), up to an error of \(\delta\). We present a new algorithm with runtime \(O(k \cdot \log^2(\delta^{-1}))\), which is an exponential improvement regarding the dependence on the error \(\delta\) compared to all previous work. We show the mathematical correctness of our algorithm and determine optimal solution sets of cardinality \(k \in \{2, 3, 4, 5, 10, 20, 50, 100, 1000\}\) for the well-known test suites DTLZ, ZDT, WFG, and LZ09 up to error \(\delta = 10^{-25}\).

Friedrich, Tobias; Neumann, Frank; Thyssen, Christian Multiplicative Approximations, Optimal Hypervolume Distributions, and the Choice of the Reference Point. Evolutionary Computation 2015: 131-159
Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems, as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing \(\mu\) points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations.

Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. The Benefit of Recombination in Noisy Evolutionary Search. International Symposium on Algorithms and Computation (ISAAC) 2015: 140-150
Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, metaheuristics are a popular choice for deriving good optimization algorithms, most notably evolutionary algorithms, which mimic evolution in nature. Empirical evidence suggests that genetic recombination is useful in uncertain environments because it can stabilize a noisy fitness signal. With this paper we want to support this claim with mathematical rigor. The setting we consider is that of noisy optimization. We study a simple noisy fitness function that is derived by adding Gaussian noise to a monotone function. First, we show that a classical evolutionary algorithm that does not employ sexual recombination (the \((\mu+1)\)-EA) cannot handle the noise efficiently, regardless of the population size. Then we show that an evolutionary algorithm which does employ sexual recombination (the compact Genetic Algorithm, cGA for short) can handle the noise using a graceful scaling of the population.

Doerr, Benjamin; Friedrich, Tobias; Sauerwald, Thomas Quasirandom Rumor Spreading. Transactions on Algorithms 2014: 9:1-9:35
We propose and analyze a quasirandom analogue of the classical push model for disseminating information in networks ("randomized rumor spreading"). In the classical model, in each round, each informed vertex chooses a neighbor at random and informs it, if it was not informed before. It is known that this simple protocol succeeds in spreading a rumor from one vertex to all others within \(O(\log n)\) rounds on complete graphs, hypercubes, random regular graphs, Erdős-Rényi random graphs, and Ramanujan graphs with probability \(1 - o(1)\). In the quasirandom model, we assume that each vertex has a (cyclic) list of its neighbors. Once informed, it starts at a random position on the list, but from then on informs its neighbors in the order of the list. Surprisingly, irrespective of the orders of the lists, the above-mentioned bounds still hold. In some cases, even better bounds than for the classical model can be shown.
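The quasirandom push protocol is simple enough to simulate directly. The following is an illustrative sketch on the complete graph \(K_n\) (the graph choice and the natural ordering of each vertex's neighbor list are assumptions; the protocol rule itself is as described above):

```python
import random

# Quasirandom push on the complete graph K_n: each vertex keeps a cyclic
# list of the other vertices; once informed, it starts at a random list
# position and informs neighbors in list order, one per round.

def quasirandom_push(n, seed=0):
    rng = random.Random(seed)
    lists = {v: [u for u in range(n) if u != v] for v in range(n)}
    pos = {0: rng.randrange(n - 1)}      # vertex 0 knows the rumor
    rounds = 0
    while len(pos) < n:
        rounds += 1
        for v in list(pos):              # every informed vertex pushes once
            target = lists[v][pos[v]]
            pos[v] = (pos[v] + 1) % (n - 1)
            # a newly informed vertex picks a random starting position
            pos.setdefault(target, rng.randrange(n - 1))
    return rounds

print(quasirandom_push(64))              # typically O(log n) rounds
```

Since the informed set can at most double per round, at least \(\log_2 n\) rounds are needed; since each vertex cycles through all neighbors, vertex 0 alone finishes within \(n - 1\) rounds, so termination is guaranteed regardless of the random starting positions.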

Bringmann, Karl; Friedrich, Tobias; Klitzke, Patrick Two-dimensional subset selection for hypervolume and epsilon-indicator. Genetic and Evolutionary Computation Conference (GECCO) 2014: 589-596
The goal of bi-objective optimization is to find a small set of good compromise solutions. A common problem for bi-objective evolutionary algorithms is the following subset selection problem (SSP): Given \(n\) solutions \(P \subset R^2\) in the objective space, select \(k\) solutions \(P^*\) from \(P\) that optimize an indicator function. In the hypervolume SSP we want to select \(k\) points \(P^*\) that maximize the hypervolume indicator \(I_{\mathrm{HYP}}(P^*, r)\) for some reference point \(r \in R^2\). Similarly, the \(\epsilon\)-indicator SSP aims at selecting \(k\) points \(P^*\) that minimize the \(\epsilon\)-indicator \(I_{\epsilon}(P^*, R)\) for some reference set \(R \subset R^2\) of size \(m\) (which can be \(R = P\)). We first present a new algorithm for the hypervolume SSP with runtime \(O(n(k + \log n))\). Our second main result is a new algorithm for the \(\epsilon\)-indicator SSP with runtime \(O(n \log n + m \log m)\). Both results improve the current state-of-the-art runtimes by a factor of (nearly) \(n\) and make the problems tractable for new applications. Preliminary experiments confirm that the theoretical results translate into substantial empirical runtime improvements.
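The two-dimensional hypervolume indicator that these subset selection problems optimize can be computed with a simple sweep. A minimal sketch, assuming the maximization convention (both objectives to be maximized) and an illustrative point set and reference point:

```python
# 2-D hypervolume indicator I_HYP(P, r): area dominated by the point set
# P and bounded below by the reference point r, computed by sweeping the
# points in decreasing x-order and summing the horizontal slabs.

def hypervolume_2d(points, ref):
    rx, ry = ref
    # keep only points strictly dominating the reference point,
    # sorted by x descending
    pts = sorted((p for p in points if p[0] > rx and p[1] > ry),
                 reverse=True)
    vol, prev_y = 0.0, ry
    for x, y in pts:
        if y > prev_y:                   # point contributes a new slab
            vol += (x - rx) * (y - prev_y)
            prev_y = y
    return vol

P = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(P, (0.0, 0.0)))    # → 6.0
```

The sweep takes \(O(n \log n)\) time for the sort plus a linear pass; the subset selection algorithms in the paper build on such structure to pick the best \(k\) of the \(n\) points.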

Friedrich, Tobias; Neumann, Frank Maximizing Submodular Functions under Matroid Constraints by Multi-objective Evolutionary Algorithms. Parallel Problem Solving from Nature (PPSN) 2014: 922-931
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function \(f\) under a given set of constraints. In this paper, we investigate the runtime of a multi-objective evolutionary algorithm called GSEMO until it has obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints we show that GSEMO achieves a \((1 - 1/e)\)-approximation in expected time \(O(n^2(\log n + k))\), where \(k\) is the value of the given constraint. For the case of non-monotone submodular functions with \(k\) matroid intersection constraints, we show that GSEMO achieves a \(1/(k + 2 + 1/k + \epsilon)\)-approximation in expected time \(O(n^{k+5}\log(n)/\epsilon)\).

Bringmann, Karl; Friedrich, Tobias Convergence of Hypervolume-Based Archiving Algorithms. Transactions on Evolutionary Computation 2014: 643-657
Multi-objective evolutionary algorithms typically maintain a set of solutions. A crucial part of these algorithms is the archiving, which decides what solutions to keep. A \((\mu + \lambda)\) archiving algorithm defines how to choose in each generation \(\mu\) children from \(\mu\) parents and \(\lambda\) offspring together. We study mathematically the convergence behavior of hypervolume-based archiving algorithms. We distinguish two cases for the offspring generation. A best-case view leads to a study of the effectiveness of archiving algorithms. It was known that all \((\mu + 1)\) archiving algorithms are ineffective, which means that a set with maximum hypervolume is not necessarily reached. We prove that for \(\lambda < \mu\), all archiving algorithms are ineffective. We also present upper and lower bounds on the achievable hypervolume for different classes of archiving algorithms. On the other hand, a worst-case view on the offspring generation leads to a study of the competitive ratio of archiving algorithms. This measures how much smaller the achieved hypervolume is due to not knowing the future offspring in advance. We present upper and lower bounds on the competitive ratio of different archiving algorithms and present an archiving algorithm which is the first known computationally efficient archiving algorithm with constant competitive ratio.

Bringmann, Karl; Friedrich, Tobias; Krohmer, Anton Deanonymization of Heterogeneous Random Graphs in Quasilinear Time. European Symposium on Algorithms (ESA) 2014: 197-208
There are hundreds of online social networks with billions of users in total. Many such networks publicly release structural information, with all personal information removed. Empirical studies have shown, however, that this provides a false sense of privacy: it is possible to identify almost all users that appear in two such anonymized networks as long as a few initial mappings are known. We analyze this problem theoretically by reconciling two versions of an artificial power-law network arising from independent subsampling of vertices and edges. We present a new algorithm that identifies most vertices and makes no wrong identifications with high probability. The number of vertices matched is shown to be asymptotically optimal. For an \(n\)-vertex graph, our algorithm uses \(n^\epsilon\) seed nodes (for an arbitrarily small \(\epsilon\)) and runs in quasilinear time. This improves previous theoretical results, which need \(\Theta(n)\) seed nodes and have runtimes of order \(n^{1+\Omega(1)}\). Additionally, the applicability of our algorithm is studied experimentally on different networks.

Bringmann, Karl; Friedrich, Tobias; Klitzke, Patrick Generic Postprocessing via Subset Selection for Hypervolume and Epsilon-Indicator. Parallel Problem Solving from Nature (PPSN) 2014: 518-527
Most bi-objective evolutionary algorithms maintain a population of fixed size \(\mu\) and return the final population at termination. During the optimization process many solutions are considered, but most are discarded. We present two generic postprocessing algorithms which utilize the archive of all non-dominated solutions evaluated during the search. We choose the best \(\mu\) solutions from the archive such that the hypervolume or \(\epsilon\)-indicator is maximized. This postprocessing costs no additional fitness function evaluations and has negligible runtime compared to most EMOAs. We experimentally examine our postprocessing for four standard algorithms (NSGA-II, SPEA2, SMS-EMOA, IBEA) on ten standard test functions (DTLZ 1-2, 7, ZDT 1-3, WFG 3-6) and measure the average quality improvement. The median decrease of the distance to the optimal \(\epsilon\)-indicator is 95%, the median decrease of the distance to the optimal hypervolume value is 86%. We observe similar performance on a real-world problem (wind turbine placement).

Friedrich, Tobias; Kroeger, Trent; Neumann, Frank Weighted preferences in evolutionary multi-objective optimization. Machine Learning and Cybernetics 2013: 139-148
Evolutionary algorithms have been widely used to tackle multi-objective optimization problems. Incorporating preference information into the search of evolutionary algorithms for multi-objective optimization is of great importance, as it allows one to focus on interesting regions in the objective space. Zitzler et al. have shown how to use a weight distribution function on the objective space to incorporate preference information into hypervolume-based algorithms. We show that this weighted information can easily be used in other popular EMO algorithms as well. Our results for NSGA-II and SPEA2 show that this yields similar results to the hypervolume approach and requires less computational effort.

Wagner, Markus; Friedrich, Tobias Efficient parent selection for Approximation-Guided Evolutionary multi-objective optimization. Congress on Evolutionary Computation (CEC) 2013: 1846-1853
The Pareto front of a multi-objective optimization problem is typically very large and can only be approximated. Approximation-Guided Evolution (AGE) is a recently presented evolutionary multi-objective optimization algorithm that aims at iteratively minimizing the approximation factor, which measures how well the current population approximates the Pareto front. It outperforms state-of-the-art algorithms for problems with many objectives. However, AGE's performance is not competitive on problems with very few objectives. We study the reason for this behavior and observe that AGE selects parents uniformly at random, which has a detrimental effect on its performance. We then investigate different algorithm-specific selection strategies for AGE. The main difficulty here is finding a computationally efficient selection scheme which does not harm AGE's linear runtime in the number of objectives. We present several improved selection schemes that are computationally efficient and substantially improve AGE on low-dimensional objective spaces, but have no negative effect in high-dimensional objective spaces.

Fellows, Michael R.; Friedrich, Tobias; Hermelin, Danny; Narodytska, Nina; Rosamond, Frances A. Constraint satisfaction problems: Convexity makes AllDifferent constraints tractable. Theoretical Computer Science 2013: 81-89
We examine the complexity of constraint satisfaction problems that consist of a set of AllDiff constraints. Such CSPs naturally model a wide range of real-world and combinatorial problems, like scheduling, frequency allocation, and graph coloring problems. As this problem is known to be NP-complete, we investigate under which further assumptions it becomes tractable. We observe that a crucial property seems to be the convexity of the variable domains and constraints. Our main contribution is an extensive study of the complexity of Multiple AllDiff CSPs for a set of natural parameters, like the maximum domain size and the maximum size of the constraint scopes. We show that, depending on the parameter, convexity can make the problem tractable even though it is provably intractable in general. Interestingly, the convexity of the constraints is the key property in achieving fixed-parameter tractability, while the convexity of the domains does not usually make the problem easier.

Bringmann, Karl; Friedrich, Tobias Exact and Efficient Generation of Geometric Random Variates and Random Graphs. International Colloquium on Automata, Languages, and Programming (ICALP) 2013: 267-278
The standard algorithm for fast generation of Erdős-Rényi random graphs only works in the Real RAM model. The critical point is the generation of geometric random variates \(Geo(p)\), for which there is no algorithm that is both exact and efficient in any bounded-precision machine model. For a RAM model with word size \(w = \Omega(\log\log(1/p))\), we show that this is possible and present an exact algorithm for sampling \(Geo(p)\) in optimal expected time \(O(1 + \log(1/p)/w)\). We also give an exact algorithm for sampling \(\min\{n, Geo(p)\}\) in optimal expected time \(O(1 + \log(\min\{1/p, n\})/w)\). This yields a new exact algorithm for sampling Erdős-Rényi and Chung-Lu random graphs of \(n\) vertices and \(m\) (expected) edges in optimal expected runtime \(O(n + m)\) on a RAM with word size \(w = \Theta(\log n)\).
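To see why exactness is subtle, consider the textbook inversion method for sampling \(Geo(p)\), sketched below. It is exact only with real arithmetic, which is precisely the gap the paper closes for bounded-precision machines; this sketch is an illustration of the problem, not the paper's word-RAM algorithm.

```python
import math
import random

# Inversion method for Geo(p): the number of Bernoulli(p) trials up to
# and including the first success, via floor(log(U)/log(1-p)) + 1 with
# U uniform on (0, 1]. Floating-point log/floor make this only
# approximately exact on a real machine.

def geometric(p, rng):
    u = 1.0 - rng.random()               # in (0, 1], avoids log(0)
    return math.floor(math.log(u) / math.log(1.0 - p)) + 1

rng = random.Random(42)
samples = [geometric(0.5, rng) for _ in range(10000)]
print(sum(samples) / len(samples))       # close to 1/p = 2
```

In the fast Erdős-Rényi generator, such geometric variates are used to jump directly to the next present edge, so \(m\) samples suffice instead of testing all \(\binom{n}{2}\) pairs.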

Anand, S.; Bringmann, Karl; Friedrich, Tobias; Garg, Naveen; Kumar, Amit Minimizing Maximum (Weighted) Flow-Time on Related and Unrelated Machines. International Colloquium on Automata, Languages and Programming (ICALP) 2013: 13-24
In this paper we initiate the study of job scheduling on related and unrelated machines so as to minimize the maximum flow time or the maximum weighted flow time (when each job has an associated weight). Previous work for these metrics considered only the setting of parallel machines, while previous work for scheduling on unrelated machines only considered \(L_p\), \(p < \infty\), norms. Our main results are: (1) We give an \(O(\epsilon^{-3})\)-competitive algorithm to minimize the maximum weighted flow time on related machines, where we assume that the machines of the online algorithm can process \(1+\epsilon\) units of a job in 1 time unit (\(\epsilon\) speed augmentation). (2) For the objective of minimizing the maximum flow time on unrelated machines, we give a simple \(2/\epsilon\)-competitive algorithm when we augment the speed by \(\epsilon\). For \(m\) machines we show a lower bound of \(\Omega(m)\) on the competitive ratio if speed augmentation is not permitted. Our algorithm does not assign jobs to machines as soon as they arrive. To justify this "drawback" we show a lower bound of \(\Omega(\log m)\) on the competitive ratio of immediate dispatch algorithms. In both these lower bound constructions we use jobs whose processing times are in \(\{1, \infty\}\), and hence they apply to the more restrictive subset parallel setting. (3) For the objective of minimizing the maximum weighted flow time on unrelated machines, we establish a lower bound of \(\Omega(\log m)\) on the competitive ratio of any online algorithm which is permitted to use \(s = O(1)\) speed machines. In our lower bound construction, job \(j\) has a processing time of \(p_j\) on a subset of machines and infinity on others, and has a weight \(1/p_j\). Hence this lower bound applies to the subset parallel setting for the special case of minimizing the maximum stretch.

Bringmann, Karl; Friedrich, Tobias; Igel, Christian; Voß, Thomas Speeding up many-objective optimization by Monte Carlo approximations. Artificial Intelligence 2013: 22-29
Many state-of-the-art evolutionary vector optimization algorithms compute the contributing hypervolume for ranking candidate solutions. However, with an increasing number of objectives, calculating the volumes becomes intractable. Therefore, although hypervolume-based algorithms are often the method of choice for bi-criteria optimization, they are regarded as not suitable for many-objective optimization. Recently, Monte Carlo methods have been derived and analyzed for approximating the contributing hypervolume. Turning theory into practice, we employ these results in the ranking procedure of the multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) as an example of a state-of-the-art method for vector optimization. It is empirically shown that the approximation does not impair the quality of the obtained solutions given a budget of objective function evaluations, while considerably reducing the computation time in the case of multiple objectives. These results are obtained on common benchmark functions as well as on two design optimization tasks. Thus, employing Monte Carlo approximations makes hypervolume-based algorithms applicable to many-objective optimization.

Bringmann, Karl; Friedrich, Tobias Approximation quality of the hypervolume indicator. Artificial Intelligence 2013: 265-290
In order to allow a comparison of (otherwise incomparable) sets, many evolutionary multi-objective optimizers use indicator functions to guide the search and to evaluate the performance of search algorithms. The most widely used indicator is the hypervolume indicator. It measures the volume of the dominated portion of the objective space bounded from below by a reference point. Though the hypervolume indicator is very popular, it has not been shown that maximizing the hypervolume indicator of sets of bounded size is indeed equivalent to the overall objective of finding a good approximation of the Pareto front. To address this question, we compare the optimal approximation ratio with the approximation ratio achieved by two-dimensional sets maximizing the hypervolume indicator. We bound the optimal multiplicative approximation ratio of \(n\) points by \(1 + \Theta(1/n)\) for arbitrary Pareto fronts. Furthermore, we prove that the same asymptotic approximation ratio is achieved by sets of \(n\) points that maximize the hypervolume indicator. However, there is a provable gap between the two approximation ratios, which is even exponential in the ratio between the largest and the smallest value of the front. We also examine the additive approximation ratio of the hypervolume indicator in two dimensions and prove that it achieves the optimal additive approximation ratio apart from a small factor.

Vladislavleva, Ekaterina; Friedrich, Tobias; Neumann, Frank; Wagner, Markus Predicting the Energy Output of Wind Farms Based on Weather Data: Important Variables and their Correlation. Renewable Energy 2013: 236-243
Wind energy plays an increasing role in the supply of energy worldwide. The energy output of a wind farm is highly dependent on the weather conditions present at its site. If the output can be predicted more accurately, energy suppliers can coordinate the collaborative production of different energy sources more efficiently to avoid costly overproduction. In this paper, we take a computer science perspective on energy prediction based on weather data and analyze the important parameters as well as their correlation with the energy output. To deal with the interaction of the different parameters, we use symbolic regression based on the genetic programming tool DataModeler. Our studies are carried out on publicly available weather and energy data for a wind farm in Australia. We report on the correlation of the different variables with the energy output. The model obtained for energy prediction gives a very reliable prediction of the energy output for newly supplied weather data.

Bringmann, Karl; Friedrich, Tobias Parameterized average-case complexity of the hypervolume indicator. Genetic and Evolutionary Computation Conference (GECCO) 2013: 575-582
The hypervolume indicator (HYP) is a popular measure for the quality of a set of \(n\) solutions in \(\mathbb{R}^d\). We discuss its asymptotic worst-case runtimes and several lower bounds depending on different complexity-theoretic assumptions. Assuming that P \(\neq\) NP, there is no algorithm with runtime \(\mathrm{poly}(n,d)\). Assuming the exponential time hypothesis, there is no algorithm with runtime \(n^{o(d)}\). In contrast to these worst-case lower bounds, we study the average-case complexity of HYP for points distributed i.i.d. at random on a \(d\)-dimensional simplex. We present a general framework which translates any algorithm for HYP with worst-case runtime \(n^{f(d)}\) to an algorithm with worst-case runtime \(n^{f(d)+1}\) and fixed-parameter tractable (FPT) average-case runtime. This can be used to show that HYP can be solved in expected time \(O(d^{d^2/2} n + d n^2)\), which implies that HYP is FPT on average while it is W[1]-hard in the worst case. For constant dimension \(d\) this gives an algorithm for HYP with runtime \(O(n^2)\) on average. This is the first result proving that HYP is asymptotically easier in the average case. It gives a theoretical explanation why most HYP algorithms perform much better on average than their theoretical worst-case runtime predicts.

Doerr, Benjamin; Fouz, Mahmoud; Friedrich, Tobias Asynchronous Rumor Spreading in Preferential Attachment Graphs. Scandinavian Symposium and Workshops on Algorithm Theory (SWAT) 2012: 307-315
We show that the asynchronous push-pull protocol spreads rumors in preferential attachment graphs (as defined by Barabási and Albert) in time \(O(\sqrt{\log n})\) to all but a lower order fraction of the nodes with high probability. This is significantly faster than what synchronized protocols can achieve; an obvious lower bound for these is the average distance, which is known to be \(\Theta(\log n / \log\log n)\).

Doerr, Benjamin; Fouz, Mahmoud; Friedrich, Tobias Experimental Analysis of Rumor Spreading in Social Networks. Mediterranean Conference on Algorithms (MedAlg) 2012: 159-173
Randomized rumor spreading was recently shown to be a very efficient mechanism to spread information in preferential attachment networks. Most interesting from the algorithm design point of view was the observation that the asymptotic runtime drops when memory is used to avoid recontacting neighbors within a small number of rounds. In this experimental investigation, we confirm that a small amount of memory indeed reduces the runtime of the protocol even for small network sizes. We observe that one memory cell per node suffices to reduce the runtime significantly; more memory helps comparatively little. Aside from extremely sparse graphs, rumor spreading on preferential attachment graphs is faster than on all other graph classes examined. This holds independent of the amount of memory, but preferential attachment graphs benefit the most from the use of memory. We also analyze the influence of the network density and the size of the memory. For the asynchronous version of the rumor spreading protocol, we observe that the theoretically predicted asymptotic advantage of preferential attachment graphs is smaller than expected. There are other topologies which benefit even more from asynchrony. We complement our findings on artificial network models with the corresponding experiments on crawls of popular online social networks, where again we observe extremely rapid information dissemination and a sizable benefit from using memory and asynchrony.

Friedrich, Tobias; Gairing, Martin; Sauerwald, Thomas Quasirandom Load Balancing. SIAM Journal on Computing 2012: 747-771
We propose a simple distributed algorithm for balancing indivisible tokens on graphs. The algorithm is completely deterministic, though it tries to imitate (and enhance) a randomized algorithm by keeping the accumulated rounding errors as small as possible. Our new algorithm, surprisingly, closely approximates the idealized process (where the tokens are divisible) on important network topologies. On \(d\)-dimensional torus graphs with \(n\) nodes it deviates from the idealized process only by an additive constant. In contrast, the randomized rounding approach of Friedrich and Sauerwald [Proceedings of the 41st Annual ACM Symposium on Theory of Computing, 2009, pp. 121–130] can deviate up to \(\Omega(\mathrm{polylog}(n))\), and the deterministic algorithm of Rabani, Sinclair, and Wanka [Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science, 1998, pp. 694–705] has a deviation of \(\Omega(n^{1/d})\). This makes our quasirandom algorithm the first known algorithm for this setting that is optimal both in time and achieved smoothness. We further show that on the hypercube as well, our algorithm has a smaller deviation from the idealized process than the previous algorithms. To prove these results, we derive several combinatorial and probabilistic results that we believe to be of independent interest. In particular, we show that first-passage probabilities of a random walk on a path with arbitrary weights can be expressed as a convolution of independent geometric probability distributions.
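The error-carrying rounding idea behind such a quasirandom scheme can be illustrated with a toy sketch (a simplification for illustration, not the algorithm analyzed in the paper; the cycle topology, update rule, and error bookkeeping are assumptions):

```python
def quasirandom_diffusion_round(load, neighbors, err):
    """One synchronous round of a deterministic, error-carrying diffusion:
    each node would ideally send load[u]/(deg(u)+1) tokens to every
    neighbor; the integer actually sent keeps the accumulated per-edge
    rounding error in [0, 1)."""
    flow = {u: 0 for u in load}
    for u in load:
        share = load[u] / (len(neighbors[u]) + 1)  # idealized divisible share
        for v in neighbors[u]:
            err[(u, v)] += share
            sent = int(err[(u, v)])  # integer part is sent, remainder carried
            err[(u, v)] -= sent
            flow[u] -= sent
            flow[v] += sent
    for u in load:
        load[u] += flow[u]
    return load
```

Because the accumulated error on each edge always stays below one token, the integer flows track the idealized divisible process closely, which is the intuition behind the constant-deviation result on tori.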

Bringmann, Karl; Friedrich, Tobias Approximating the least hypervolume contributor: NP-hard in general, but fast in practice. Theoretical Computer Science (TCS) 2012: 104-116
The hypervolume indicator is an increasingly popular set measure to compare the quality of two Pareto sets. The basic ingredient of most hypervolume-indicator-based optimization algorithms is the calculation of the hypervolume contribution of single solutions regarding a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most \((1+\epsilon)\) times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless P = NP) nor to approximate it (unless NP = BPP). Nevertheless, in the second part of the paper we present a very fast approximation algorithm for this problem. We prove that for arbitrarily given \(\epsilon, \delta > 0\) it calculates a solution with contribution at most \((1+\epsilon)\) times the minimal contribution with probability at least \((1-\delta)\). Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances which are intractable for exact algorithms (e.g., 10,000 solutions in 100 dimensions) within a few seconds.
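The flavor of such a sampling-based approach can be sketched in two dimensions (a minimal Monte Carlo illustration under a minimization convention; this is not the paper's algorithm, which comes with proven \((\epsilon,\delta)\) guarantees):

```python
import random

def dominates(a, b):
    """a dominates b (2-D minimization): a is at least as good in both objectives."""
    return a[0] <= b[0] and a[1] <= b[1]

def contribution_estimate(p, others, ref, samples=100_000, rng=None):
    """Monte Carlo estimate of the hypervolume contribution of p:
    the area dominated by p w.r.t. reference point ref but by no
    point in `others`."""
    rng = rng or random.Random(0)
    hits = 0
    for _ in range(samples):
        s = (rng.uniform(p[0], ref[0]), rng.uniform(p[1], ref[1]))
        if not any(dominates(q, s) for q in others):
            hits += 1
    box = (ref[0] - p[0]) * (ref[1] - p[1])  # sampling region volume
    return box * hits / samples
```

Sampling only inside the box spanned by the candidate and the reference point keeps the estimator focused on the region where the contribution can live, which is why relatively few samples suffice in practice.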

Baumbach, Jan; Friedrich, Tobias; Kötzing, Timo; Krohmer, Anton; Müller, Joachim; Pauling, Josch Efficient Algorithms for Extracting Biological Key Pathways with Global Constraints. Genetic and Evolutionary Computation Conference (GECCO) 2012: 169-176
The integrated analysis of data of different types and with various interdependencies is one of the major challenges in computational biology. Recently, we developed KeyPathwayMiner, a method that combines biological networks modeled as graphs with disease-specific genetic expression data gained from a set of cases (patients, cell lines, tissues, etc.). We aimed at finding all maximal connected subgraphs where all nodes but K are expressed in all cases but at most L, i.e., key pathways. Thereby, we combined biological networks with OMICS data instead of analyzing these data sets in isolation. Here we present an alternative approach that avoids a certain bias towards hub nodes: we now aim at extracting all maximal connected subnetworks where all but at most K nodes are expressed in all cases but in total (!) at most L, i.e., accumulated over all cases and all nodes in a solution. We call this strategy GLONE (global node exceptions); the previous problem we call INES (individual node exceptions). Since finding GLONE components is computationally hard, we developed an Ant Colony Optimization algorithm and implemented it within the KeyPathwayMiner Cytoscape framework as an alternative to the INES algorithms. KeyPathwayMiner 3.0 now offers both the INES and the GLONE algorithms. It is available as a Cytoscape plugin and online at http://keypathwayminer.mpi-inf.mpg.de.

Alcaraz, Nicolas; Friedrich, Tobias; Kötzing, Timo; Krohmer, Anton; Müller, Joachim; Pauling, Josch; Baumbach, Jan Efficient Key Pathway Mining: Combining Networks and OMICS Data. Integrative Biology 2012: 756-764
Systems biology has emerged over the last decade. Driven by advances in sophisticated measurement technology, the research community has generated huge molecular biology data sets. This comprises rather static data on the interplay of biological entities, for instance protein-protein interaction network data, as well as quite dynamic data collected for studying the behavior of individual cells or tissues in response to changing environmental conditions, such as DNA microarrays or RNA sequencing. Here we bring the two different data types together in order to gain higher-level knowledge. We introduce a significantly improved version of the KeyPathwayMiner software framework. Given a biological network modelled as a graph and a set of expression studies, KeyPathwayMiner efficiently finds and visualizes connected subnetworks where most components are expressed in most cases. It finds all maximal connected subnetworks where all nodes but at most k exceptions are expressed in all experimental studies but at most l exceptions. We demonstrate the power of the new approach by comparing it to similar approaches with gene expression data previously used to study Huntington's disease. In addition, we demonstrate KeyPathwayMiner's flexibility and applicability to non-array data by analyzing genome-scale DNA methylation profiles from colorectal tumor cancer patients. KeyPathwayMiner release 2 is available as a Cytoscape plugin and online at http://keypathwayminer.mpi-inf.mpg.de.

Doerr, Benjamin; Fouz, Mahmoud; Friedrich, Tobias Why rumors spread so quickly in social networks. Communications of the ACM 2012: 70-75
Understanding structural and algorithmic properties of complex networks is an important task, not least because of the huge impact of the internet. Our focus is to analyze how news spreads in social networks. We simulate a simple information spreading process in different network topologies and demonstrate that news spreads much faster in existing social network topologies. We support this finding by analyzing information spreading in the mathematically defined preferential attachment network topology, which is a common model for real-world networks. We prove that here sublogarithmic time suffices to spread the news to all nodes of the network. All previously studied network topologies need at least logarithmic time. Surprisingly, we observe that nodes with few neighbors are crucial for the fast dissemination. Social networks like Facebook and Twitter are reshaping the way people take collective actions. They have played a crucial role in the recent uprisings of the ‘Arab Spring’ and the ‘London riots’. It has been argued that the ‘instantaneous nature’ of these networks influenced the speed at which the events were unfolding [4]. It is quite remarkable that social networks spread news so fast. Neither the structure of social networks nor the process that distributes the news was designed with this purpose in mind. On the contrary, they are not designed at all, but have evolved in a random and decentralized manner. So is our view correct that social networks ease the spread of information (“rumors”), and if so, what particular properties of social networks are the reason for this? To answer these questions, we simulate a simple rumor spreading process on several graphs having the structure of existing large social networks. We see, for example, that a rumor started at a random node of the Twitter network on average reaches 45.6 million of the total of 51.2 million members within only eight rounds of communication.
We also analyze this process on an abstract model of social networks, the so-called preferential attachment graphs introduced by Barabási and Albert [3]. In [17], we obtain a mathematical proof that rumors in such networks spread much faster than in many other network topologies, even faster than in networks having a communication link between any two nodes (complete graphs). As an explanation, we observe that nodes of small degree build a shortcut between those having large degree (hubs), which, due to their large number of possible communication partners, talk to each other directly less often.
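The simulated protocol itself is simple; a toy synchronous push-pull implementation might look as follows (an illustration only; the graph representation and helper names are assumptions, and a complete graph stands in for a crawled social network):

```python
import random

def push_pull_round(adj, informed, rng):
    """One synchronous round: every node contacts one uniformly random
    neighbor; the rumor crosses each contact in either direction."""
    newly = set()
    for u in adj:
        v = rng.choice(adj[u])
        if u in informed and v not in informed:
            newly.add(v)   # push: u tells v
        if v in informed and u not in informed:
            newly.add(u)   # pull: u learns from v
    return informed | newly

def rounds_to_inform_all(adj, start, rng):
    """Run push-pull from a single informed node until everyone knows."""
    informed, rounds = {start}, 0
    while len(informed) < len(adj):
        informed = push_pull_round(adj, informed, rng)
        rounds += 1
    return rounds
```

Counting rounds until full coverage on different topologies is exactly the kind of experiment that exposes how quickly the informed set roughly doubles per round.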

Friedrich, Tobias; Krohmer, Anton Parameterized Clique on Scale-Free Networks. International Symposium on Algorithms and Computation (ISAAC) 2012: 659-668
Finding cliques in graphs is a classical problem which is in general NP-hard and intractable in the parameterized sense. However, in typical applications like social networks or protein-protein interaction networks, the considered graphs are scale-free, i.e., their degree sequence follows a power law. Their specific structure can be exploited algorithmically and makes it possible to solve clique much more efficiently. We prove that on inhomogeneous random graphs with \(n\) nodes and power-law exponent \(\gamma\), cliques of size \(k\) can be found in time \(O(n^2)\) for \(\gamma \ge 3\) and in time \(O(n \cdot \exp(k^4))\) for \(2 < \gamma < 3\).

Berghammer, Rudolf; Friedrich, Tobias; Neumann, Frank Convergence of Set-Based Multi-Objective Optimization, Indicators and Deteriorative Cycles. Theoretical Computer Science 2012: 2-17
Multi-objective optimization deals with the task of computing a set of solutions that represents possible trade-offs with respect to a given set of objective functions. Set-based approaches such as evolutionary algorithms are very popular for solving multi-objective optimization problems. Convergence of set-based approaches for multi-objective optimization is essential for their success. We take an order-theoretic view on the convergence of set-based multi-objective optimization and examine how the use of indicator functions can help to direct the search towards Pareto optimal sets. In doing so, we point out that set-based multi-objective optimization working on the dominance relation of search points has to deal with a cyclic behavior that may lead to worsening with respect to the Pareto-dominance relation defined on sets. Later on, we show in which situations well-known binary and unary indicators can help to avoid this cyclic behavior and therefore guarantee convergence of the algorithm. We also study the impact of deteriorative cycles on the runtime behavior and give an example in which they provably slow down the optimization process.

Bringmann, Karl; Friedrich, Tobias Convergence of hypervolume-based archiving algorithms II: competitiveness. Genetic and Evolutionary Computation Conference (GECCO) 2012: 457-464
We study the convergence behavior of \((\mu+\lambda)\)-archiving algorithms. A \((\mu+\lambda)\)-archiving algorithm defines how to choose, in each generation, \(\mu\) children from \(\mu\) parents and \(\lambda\) offspring together. Archiving algorithms have to choose individuals online, without knowing future offspring. Previous studies assumed the offspring generation to be best-case. We assume the initial population and the offspring generation to be worst-case and use the competitive ratio to measure how much smaller the hypervolume achieved by an archiving algorithm can be due to not knowing the future in advance. We prove that all archiving algorithms which increase the hypervolume in each step (if they can) are only \(\mu\)-competitive. We also present a new archiving algorithm which is \((4+2/\mu)\)-competitive. This algorithm not only achieves a constant competitive ratio, but is also efficiently computable. Both properties provably do not hold for the commonly used greedy archiving algorithms, for example those used in SIBEA, SMS-EMOA, or the generational MO-CMA-ES.

Friedrich, Tobias; Sauerwald, Thomas; Stauffer, Alexandre Diameter and Broadcast Time of Random Geometric Graphs in Arbitrary Dimensions. International Symposium on Algorithms and Computation (ISAAC) 2011: 190-199
A random geometric graph (RGG) is defined by placing \(n\) points uniformly at random in \([0,n^{1/d}]^d\), and joining two points by an edge whenever their Euclidean distance is at most some fixed \(r\). We assume that \(r\) is larger than the critical value for the emergence of a connected component with \(\Omega(n)\) nodes. We show that, with high probability (w.h.p.), for any two connected nodes with a Euclidean distance of \(\omega(\log n / r^{d-1})\), their graph distance is only a constant factor larger than their Euclidean distance. This implies that the diameter of the largest connected component is \(\Theta(n^{1/d}/r)\) w.h.p. We also prove that the condition on the Euclidean distance above is essentially tight. We also analyze the following randomized broadcast algorithm on RGGs. At the beginning, only one node from the largest connected component of the RGG is informed. Then, in each round, each informed node chooses a neighbor independently and uniformly at random and informs it. We prove that w.h.p. this algorithm informs every node in the largest connected component of an RGG within \(\Theta(n^{1/d}/r+\log n)\) rounds.

Friedrich, Tobias; Horoba, Christian; Neumann, Frank Illustration of Fairness in Evolutionary Multi-Objective Optimization. Theoretical Computer Science 2011: 1546-1556
It is widely assumed that evolutionary algorithms for multi-objective optimization problems should use certain mechanisms to achieve a good spread over the Pareto front. In this paper, we examine such mechanisms from a theoretical point of view and analyze simple algorithms incorporating the concept of fairness. This mechanism tries to balance the number of offspring of all individuals in the current population. We rigorously analyze the runtime behavior of different fairness mechanisms and present showcase examples to point out situations where the right mechanism can speed up the optimization process significantly. We also indicate drawbacks of the use of fairness by presenting instances where the optimization process is slowed down drastically.

Bringmann, Karl; Friedrich, Tobias Convergence of hypervolume-based archiving algorithms I: effectiveness. Genetic and Evolutionary Computation Conference (GECCO) 2011: 745-752
The core of hypervolume-based multi-objective evolutionary algorithms is an archiving algorithm which performs the environmental selection. A \((\mu+\lambda)\)-archiving algorithm defines how to choose \(\mu\) children from \(\mu\) parents and \(\lambda\) offspring together. We study theoretically \((\mu+\lambda)\)-archiving algorithms which never decrease the hypervolume from one generation to the next. Zitzler, Thiele, and Bader (IEEE Trans. Evolutionary Computation, 14:58-79, 2010) proved that all \((\mu+1)\)-archiving algorithms are ineffective, which means there is an initial population such that, independent of the used reproduction rule, a set with maximum hypervolume cannot be reached. We extend this and prove that for \(\lambda < \mu\) all archiving algorithms are ineffective. On the other hand, locally optimal algorithms, which maximize the hypervolume in each step, are effective for \(\lambda = \mu\) and can always find a population with hypervolume at least half the optimum for \(\lambda < \mu\). We also prove that there is no hypervolume-based archiving algorithm which can always find a population with hypervolume greater than \(1/(1 + 0.1338\,(1/\lambda - 1/\mu))\) times the optimum.
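For two objectives, the locally optimal (greedy) archiving rule discussed here is easy to sketch (an illustration only, assuming minimization and an exact 2-D hypervolume sweep; the point sets and reference point in the usage are made up):

```python
def hypervolume_2d(points, ref):
    """Area dominated by `points` w.r.t. reference point `ref`
    (minimization in both objectives), via a left-to-right sweep."""
    hv, best_y = 0.0, ref[1]
    for x, y in sorted(points):
        if x < ref[0] and y < best_y:
            hv += (ref[0] - x) * (best_y - y)  # strip added by this point
            best_y = y
    return hv

def greedy_archive(parents, offspring, mu, ref):
    """Locally optimal archiver: keep mu points, in each step adding the
    candidate that increases the current hypervolume the most."""
    pool = list(parents) + list(offspring)
    chosen = []
    for _ in range(mu):
        best = max(pool, key=lambda p: hypervolume_2d(chosen + [p], ref))
        chosen.append(best)
        pool.remove(best)
    return chosen
```

This greedy rule never decreases the hypervolume between generations, which is exactly the class of algorithms whose effectiveness the paper analyzes.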

Friedrich, Tobias; Kroeger, Trent; Neumann, Frank Weighted Preferences in Evolutionary Multi-Objective Optimization. Australasian Conference on Artificial Intelligence (AUSAI) 2011: 291-300
Evolutionary algorithms have been widely used to tackle multi-objective optimization problems. Incorporating preference information into the search of evolutionary algorithms for multi-objective optimization is of great importance as it allows one to focus on interesting regions in the objective space. Zitzler et al. have shown how to use a weight distribution function on the objective space to incorporate preference information into hypervolume-based algorithms. We show that this weighted information can easily be used in other popular EMO algorithms as well. Our results for NSGA-II and SPEA2 show that this yields similar results to the hypervolume approach and requires less computational effort.

Doerr, Benjamin; Fouz, Mahmoud; Friedrich, Tobias Social networks spread rumors in sublogarithmic time. Symposium on Theory of Computing (STOC) 2011: 21-30
With the prevalence of social networks, it has become increasingly important to understand their features and limitations. It has been observed that information spreads extremely fast in social networks. We study the performance of randomized rumor spreading protocols on graphs in the preferential attachment model. The well-known random phone call model of Karp et al. (FOCS 2000) is a push-pull strategy where in each round, each vertex chooses a random neighbor and exchanges information with it. We prove the following. (i) The push-pull strategy delivers a message to all nodes within \(\Theta(\log n)\) rounds with high probability. The best known bound so far was \(O(\log^2 n)\). (ii) If we slightly modify the protocol so that contacts are chosen uniformly from all neighbors but the one contacted in the previous round, then this time reduces to \(\Theta(\log n / \log\log n)\), which is the diameter of the graph. This is the first time that a sublogarithmic broadcast time has been proven for a natural setting. Also, this is the first time that avoiding double contacts reduces the runtime to a smaller order of magnitude.

Friedrich, Tobias; Sauerwald, Thomas; Vilenchik, Dan Smoothed analysis of balancing networks. Random Structures and Algorithms 2011: 115-138
In a balancing network each processor has an initial collection of unit-size jobs (tokens) and in each round, pairs of processors connected by balancers split their load as evenly as possible. An excess token (if any) is placed according to some predefined rule. As it turns out, this rule crucially affects the performance of the network. In this work we propose a model that studies this effect. We suggest a model bridging the uniformly random assignment rule and the arbitrary one (in the spirit of smoothed analysis). We start with an arbitrary assignment of balancer directions and then flip each assignment with probability \(\alpha\) independently. For a large class of balancing networks our result implies that after \(O(\log n)\) rounds the discrepancy is \(O((1/2 - \alpha) \log n + \log \log n)\) with high probability. This matches and generalizes known upper bounds for \(\alpha = 0\) and \(\alpha = 1/2\). We also show that a natural network matches the upper bound for any \(\alpha\).

Friedrich, Tobias; Hebbinghaus, Nils Average update times for fully-dynamic all-pairs shortest paths. Discrete Applied Mathematics 2011: 1751-1758
We study the fully-dynamic all-pairs shortest-path problem for graphs with arbitrary nonnegative edge weights. It is known for digraphs that an update of the distance matrix costs \(O(n^{2.75})\) worst-case time [Thorup, STOC '05] and \(O(n^2)\) amortized time [Demetrescu and Italiano, J. ACM '04], where \(n\) is the number of vertices. We present the first average-case analysis of the undirected problem. For a random update we show that the expected time per update is bounded by \(O(n^{4/3+\epsilon})\) for all \(\epsilon > 0\).

Fellows, Michael R.; Friedrich, Tobias; Hermelin, Danny; Narodytska, Nina; Rosamond, Frances A. Constraint Satisfaction Problems: Convexity Makes AllDifferent Constraints Tractable. International Joint Conference on Artificial Intelligence (IJCAI) 2011: 522-527
We examine the complexity of constraint satisfaction problems that consist of a set of AllDiff constraints. Such CSPs naturally model a wide range of real-world and combinatorial problems, like scheduling, frequency allocation, and graph coloring problems. As this problem is known to be NP-complete, we investigate under which further assumptions it becomes tractable. We observe that a crucial property seems to be the convexity of the variable domains and constraints. Our main contribution is an extensive study of the complexity of Multiple AllDiff CSPs for a set of natural parameters, like the maximum domain size and the maximum size of the constraint scopes. We show that, depending on the parameter, convexity can make the problem tractable even though it is provably intractable in general. Interestingly, the convexity of the constraints is the key property in achieving fixed-parameter tractability, while the convexity of the domains does not usually make the problem easier.

Friedrich, Tobias; Bringmann, Karl; Voß, Thomas; Igel, Christian The logarithmic hypervolume indicator. Foundations of Genetic Algorithms (FOGA) 2011: 81-92
It was recently proven that sets of points maximizing the hypervolume indicator do not give a good multiplicative approximation of the Pareto front. We introduce a new "logarithmic hypervolume indicator" and prove that it achieves a close-to-optimal multiplicative approximation ratio. This is experimentally verified on several benchmark functions by comparing the approximation quality of the multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) using the classic hypervolume indicator with that of the MO-CMA-ES using the logarithmic hypervolume indicator.

Berenbrink, Petra; Cooper, Colin; Friedetzky, Tom; Friedrich, Tobias; Sauerwald, Thomas Randomized Diffusion for Indivisible Loads. Symposium on Discrete Algorithms (SODA) 2011: 429-439
We present a new randomized diffusion-based algorithm for balancing indivisible tasks (tokens) on a network. Our aim is to minimize the discrepancy between the maximum and minimum load. The algorithm works as follows. Every vertex distributes its tokens as evenly as possible among its neighbors and itself. If this is not possible without splitting some tokens, the vertex redistributes its excess tokens among all its neighbors randomly (without replacement). In this paper we prove several upper bounds on the load discrepancy for general networks. These bounds depend on some expansion properties of the network, that is, the second-largest eigenvalue, and a novel measure which we refer to as refined local divergence. We then apply these general bounds to obtain results for some specific networks. For constant-degree expanders and torus graphs, these yield exponential improvements on the discrepancy bounds compared to the algorithm of Rabani, Sinclair, and Wanka [14]. For hypercubes we obtain a polynomial improvement. In contrast to previous papers, our algorithm is vertex-based and not edge-based. This means excess tokens are assigned to vertices instead of to edges, and each vertex reallocates all of its excess tokens by itself. This approach avoids nodes having "negative loads" (as in [8, 10]), but causes additional dependencies in the analysis.
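The vertex-based round described above translates almost directly into code (a toy sketch for illustration; the cycle topology, token counts, and round budget used below are made up):

```python
import random

def diffusion_round(load, neighbors, rng):
    """One round of vertex-based randomized diffusion: every vertex gives
    itself and each neighbor floor(load/(deg+1)) tokens, then hands each of
    its remaining excess tokens to a distinct neighbor chosen at random."""
    new = {u: 0 for u in load}
    for u, tokens in load.items():
        share, excess = divmod(tokens, len(neighbors[u]) + 1)
        new[u] += share
        for v in neighbors[u]:
            new[v] += share
        for v in rng.sample(neighbors[u], excess):  # without replacement
            new[v] += 1
    return new
```

Since the excess is always smaller than the degree plus one, every token can be placed, and no vertex ever sends more tokens than it holds, which is the point of the vertex-based (rather than edge-based) formulation.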

Bringmann, Karl; Friedrich, Tobias; Neumann, Frank; Wagner, Markus Approximation-Guided Evolutionary Multi-Objective Optimization. International Joint Conference on Artificial Intelligence (IJCAI) 2011: 1198-1203
Multi-objective optimization problems arise frequently in applications but can often only be solved approximately by heuristic approaches. Evolutionary algorithms have been widely used to tackle multi-objective problems. These algorithms use different measures to ensure diversity in the objective space but are not guided by a formal notion of approximation. We present a new framework of an evolutionary algorithm for multi-objective optimization that allows us to work with a formal notion of approximation. Our experimental results show that our approach outperforms state-of-the-art evolutionary algorithms in terms of the quality of the approximation obtained, in particular for problems with many objectives.

Friedrich, Tobias; Levine, Lionel Fast Simulation of Large-Scale Growth Models. Approximation Algorithms for Combinatorial Optimization (APPROX) 2011: 555-566
We give an algorithm that computes the final state of certain growth models without computing all intermediate states. Our technique is based on a “least action principle” which characterizes the odometer function of the growth process. Starting from an approximation of the odometer, we successively correct under- and overestimates and provably arrive at the correct final state. The degree of speedup depends on the accuracy of the initial guess. Determining the size of the boundary fluctuations in growth models like internal diffusion-limited aggregation (IDLA) is a long-standing open problem in statistical physics. As an application of our method, we calculate the size of fluctuations over two orders of magnitude beyond previous simulations.

Berghammer, Rudolf; Friedrich, Tobias; Neumann, Frank Set-based multi-objective optimization, indicators, and deteriorative cycles. Genetic and Evolutionary Computation Conference (GECCO) 2010: 495-502
Evolutionary multi-objective optimization deals with the task of computing a minimal set of search points according to a given set of objective functions. The task has been made explicit in a recent paper by Zitzler et al. [13]. We take an order-theoretic view on this task and examine how the use of indicator functions can help to direct the search towards Pareto optimal sets. Thereby, we point out that evolutionary algorithms for multi-objective optimization working on the dominance relation of search points have to deal with a cyclic behavior that may lead to worsenings with respect to the Pareto-dominance relation defined on sets. Later on, we point out in which situations well-known binary and unary indicators can help to avoid this cyclic behavior.

Bringmann, Karl; Friedrich, Tobias Tight Bounds for the Approximation Ratio of the Hypervolume Indicator. Parallel Problem Solving from Nature (PPSN) 2010: 607-616
The hypervolume indicator is widely used to guide the search and to evaluate the performance of evolutionary multi-objective optimization algorithms. It measures the volume of the dominated portion of the objective space, which is considered to give a good approximation of the Pareto front. There is surprisingly little known theoretically about the quality of this approximation. We examine the multiplicative approximation ratio achieved by two-dimensional sets maximizing the hypervolume indicator and prove that it deviates significantly from the optimal approximation ratio. This provable gap is even exponential in the ratio between the largest and the smallest value of the front. We also examine the additive approximation ratio of the hypervolume indicator and prove that it achieves the optimal additive approximation ratio apart from a small factor \(\le n/(n-2)\), where \(n\) is the size of the population. Hence the hypervolume indicator can be used to achieve a very good additive but not a good multiplicative approximation of a Pareto front.

Bradonjic, Milan; Elsässer, Robert; Friedrich, Tobias; Sauerwald, Thomas; Stauffer, Alexandre Efficient Broadcast on Random Geometric Graphs. Symposium on Discrete Algorithms (SODA) 2010: 1412-1421
A Random Geometric Graph (RGG) in two dimensions is constructed by distributing \(n\) nodes independently and uniformly at random in \([0, \sqrt n]^2\) and creating edges between every pair of nodes having Euclidean distance at most \(r\), for some prescribed \(r\). We analyze the following randomized broadcast algorithm on RGGs. At the beginning, only one node from the largest connected component of the RGG is informed. Then, in each round, each informed node chooses a neighbor independently and uniformly at random and informs it. We prove that with probability \(1 - O(n^{-1})\) this algorithm informs every node in the largest connected component of an RGG within \(O(\sqrt n / r + \log n)\) rounds. This holds for any value of \(r\) larger than the critical value for the emergence of a connected component with \(\Omega(n)\) nodes. In order to prove this result, we show that for any two nodes sufficiently distant from each other in \([0, \sqrt n]^2\), the length of the shortest path between them in the RGG, when such a path exists, is only a constant factor larger than the optimum. This result has independent interest and, in particular, gives that the diameter of the largest connected component of an RGG is \(\Theta(\sqrt n / r)\), which surprisingly has been an open problem so far.
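Such a graph is straightforward to generate and inspect (a toy construction with quadratic pair checking, for illustration only; the parameters used to exercise it are made up and chosen well above the connectivity threshold):

```python
import math
import random
from collections import deque

def random_geometric_graph(n, r, rng):
    """Place n points uniformly at random in [0, sqrt(n)]^2 and connect
    every pair at Euclidean distance at most r."""
    side = math.sqrt(n)
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    return pts, adj

def largest_component(adj):
    """Size of the largest connected component, via breadth-first search."""
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        queue, size = deque([s]), 0
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best
```

Measuring the largest component for varying \(r\) makes the critical threshold mentioned in the abstract visible empirically.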

Friedrich, Tobias; He, Jun; Hebbinghaus, Nils; Neumann, Frank; Witt, Carsten Approximating Covering Problems by Randomized Search Heuristics Using Multi-Objective Models. Evolutionary Computation 2010: 617-633
The main aim of randomized search heuristics is to produce good approximations of optimal solutions within a small amount of time. In contrast to numerous experimental results, there are only a few theoretical explorations on this subject. We consider the approximation ability of randomized search heuristics for the class of covering problems and compare single-objective and multi-objective models for such problems. For the Vertex Cover problem, we point out situations where the multi-objective model leads to a fast construction of optimal solutions, while in the single-objective case no good approximation can be achieved in expected polynomial time. Examining the more general Set Cover problem, we show that optimal solutions can be approximated within a logarithmic factor of the size of the ground set using the multi-objective approach, while the approximation quality obtainable by the single-objective approach in expected polynomial time may be arbitrarily bad.

Bringmann, Karl; Friedrich, Tobias Approximating the volume of unions and intersections of high-dimensional geometric objects. Computational Geometry 2010: 601-610
We consider the computation of the volume of the union of high-dimensional geometric objects. While showing that this problem is #P-hard already for very simple bodies (i.e., axis-parallel boxes), we give a fast FPRAS for all objects where one can: (1) test whether a given point lies inside the object, (2) sample a point uniformly, (3) calculate the volume of the object in polynomial time. All three oracles can be weak, that is, just approximate. This implies that Klee's measure problem and the hypervolume indicator can be approximated efficiently even though they are #P-hard and hence cannot be solved exactly in time polynomial in the number of dimensions unless P = NP. Our algorithm also allows us to efficiently approximate the volume of the union of convex bodies given by weak membership oracles. For the analogous problem of the intersection of high-dimensional geometric objects we prove #P-hardness for boxes and show that there is no multiplicative polynomial-time \(2^{d^{1-\epsilon}}\)-approximation for certain boxes unless NP = BPP, but give a simple additive polynomial-time \(\epsilon\)-approximation.

Bringmann, Karl; Friedrich, Tobias An Efficient Algorithm for Computing Hypervolume Contributions. Evolutionary Computation 2010: 383-402
The hypervolume indicator serves as a sorting criterion in many recent multi-objective evolutionary algorithms (MOEAs). Typical algorithms remove the solution with the smallest loss with respect to the dominated hypervolume from the population. We present a new algorithm which determines, for a population of size \(n\) with \(d\) objectives, a solution with minimal hypervolume contribution in time \(O(n^{d/2} \log n)\) for \(d > 2\). This improves all previously published algorithms by a factor of \(n\) for all \(d > 3\) and by a factor of \(\sqrt n\) for \(d = 3\). We also analyze hypervolume-indicator-based optimization algorithms which remove \(\lambda > 1\) solutions from a population of size \(n = \mu + \lambda\). We show that there are populations such that the hypervolume contribution of iteratively chosen \(\lambda\) solutions is much larger than the hypervolume contribution of an optimal set of \(\lambda\) solutions. Selecting the optimal set of \(\lambda\) solutions implies calculating \(\binom{n}{\mu}\) conventional hypervolume contributions, which is considered to be computationally too expensive. We present the first hypervolume algorithm which directly calculates the contribution of every set of \(\lambda\) solutions. This gives an additive term of \(\binom{n}{\mu}\) in the runtime of the calculation instead of a multiplicative factor of \(\binom{n}{\mu}\). More precisely, for a population of size \(n\) with \(d\) objectives, our algorithm can calculate a set of \(\lambda\) solutions with minimal hypervolume contribution in time \(O(n^{d/2} \log n + n^\lambda)\) for \(d > 2\). This improves all previously published algorithms by a factor of \(n^{\min\{\lambda, d/2\}}\) for \(d > 3\) and by a factor of \(n\) for \(d = 3\).
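In two dimensions the exclusive hypervolume contribution of each point has a closed form: it is the rectangle bounded by the neighboring front points. The following sketch (a hypothetical helper, assuming minimization, a mutually non-dominated front, and a dominated reference point) illustrates the quantity whose efficient computation the paper studies for higher \(d\).

```python
def contributions_2d(front, ref):
    """Exclusive hypervolume contribution of each point of a 2D
    front (minimization). On a non-dominated front, sorting by the
    first objective sorts the second objective in decreasing order."""
    pts = sorted(front)
    out = []
    for i, (x, y) in enumerate(pts):
        # The exclusive region is bounded by the next point's x
        # (or the reference) and the previous point's y (or the reference).
        next_x = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        prev_y = pts[i - 1][1] if i > 0 else ref[1]
        out.append((next_x - x) * (prev_y - y))
    return out
```

A greedy scheme would repeatedly remove the point with the smallest such value; the paper shows this can be far from removing the jointly optimal set of \(\lambda\) points.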

Cooper, Joshua N.; Doerr, Benjamin; Friedrich, Tobias; Spencer, Joel Deterministic random walks on regular trees. Random Structures and Algorithms 2010: 353-366
Jim Propp's rotor router model is a deterministic analogue of a random walk on a graph. Instead of distributing chips randomly, each vertex serves its neighbors in a fixed order. Cooper and Spencer (Comb. Probab. Comput. (2006)) show a remarkable similarity of both models. If an (almost) arbitrary population of chips is placed on the vertices of a grid \(\mathbb{Z}^d\) and does a simultaneous walk in the Propp model, then at all times and on each vertex, the number of chips deviates from the expected number the random walk would have gotten there by at most a constant. This constant is independent of the starting configuration and the order in which each vertex serves its neighbors. This result raises the question of whether all graphs have this property. With quite some effort, we are now able to answer this question negatively. For the graph being an infinite \(k\)-ary tree \((k \ge 3)\), we show that for any deviation \(D\) there is an initial configuration of chips such that after running the Propp model for a certain time there is a vertex with at least \(D\) more chips than expected in the random walk model. However, to achieve a deviation of \(D\) it is necessary that at least \(k^{\Theta(D)}\) vertices contribute by being occupied by a number of chips not divisible by \(k\) in a certain time interval.
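The rotor router mechanism itself is simple to state in code. The sketch below (a hypothetical single-chip version on a finite graph; the paper considers many chips walking simultaneously on infinite trees) shows the fixed cyclic serving order that replaces the random neighbor choice.

```python
def rotor_walk(adj, start, steps):
    """Single-chip rotor walk: each vertex forwards the chip to its
    neighbors in a fixed cyclic order instead of choosing randomly."""
    rotor = {v: 0 for v in range(len(adj))}  # next neighbor index per vertex
    visits = [0] * len(adj)
    v = start
    for _ in range(steps):
        visits[v] += 1
        nxt = adj[v][rotor[v]]                      # serve current rotor target
        rotor[v] = (rotor[v] + 1) % len(adj[v])     # advance the rotor
        v = nxt
    return visits
```

On a 4-cycle starting at vertex 0, four steps visit each vertex exactly once, mirroring the even spreading a random walk achieves only in expectation.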

Friedrich, Tobias; Gairing, Martin; Sauerwald, Thomas Quasirandom Load Balancing. Symposium on Discrete Algorithms (SODA) 2010: 1620-1629
We propose a simple distributed algorithm for balancing indivisible tokens on graphs. The algorithm is completely deterministic, though it tries to imitate (and enhance) a randomized algorithm by keeping the accumulated rounding errors as small as possible. Our new algorithm surprisingly closely approximates the idealized process (where the tokens are divisible) on important network topologies. On \(d\)-dimensional torus graphs with \(n\) nodes it deviates from the idealized process only by an additive constant. In contrast, the randomized rounding approach of Friedrich and Sauerwald [Proceedings of the 41st Annual ACM Symposium on Theory of Computing, 2009, pp. 121-130] can deviate up to \(\Omega(\mathrm{polylog}(n))\), and the deterministic algorithm of Rabani, Sinclair, and Wanka [Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science, 1998, pp. 694-705] has a deviation of \(\Omega(n^{1/d})\). This makes our quasirandom algorithm the first known algorithm for this setting that is optimal both in time and achieved smoothness. We further show that on the hypercube as well, our algorithm has a smaller deviation from the idealized process than the previous algorithms. To prove these results, we derive several combinatorial and probabilistic results that we believe to be of independent interest. In particular, we show that first-passage probabilities of a random walk on a path with arbitrary weights can be expressed as a convolution of independent geometric probability distributions.

Friedrich, Tobias; Sauerwald, Thomas The Cover Time of Deterministic Random Walks. Electronic Journal of Combinatorics 2010
The rotor router model is a popular deterministic analogue of a random walk on a graph. Instead of moving to a random neighbor, the neighbors are served in a fixed order. We examine how fast this “deterministic random walk” covers all vertices (or all edges). We present general techniques to derive upper bounds for the vertex and edge cover time and derive matching lower bounds for several important graph classes. Depending on the topology, the deterministic random walk can be asymptotically faster, slower or equally fast compared to the classical random walk.

Bringmann, Karl; Friedrich, Tobias The maximum hypervolume set yields near-optimal approximation. Genetic and Evolutionary Computation Conference (GECCO) 2010: 511-518
In order to allow a comparison of (otherwise incomparable) sets, many evolutionary multi-objective optimizers use indicator functions to guide the search and to evaluate the performance of search algorithms. The most widely used indicator is the hypervolume indicator. It measures the volume of the dominated portion of the objective space. Though the hypervolume indicator is very popular, it has not been shown that maximizing the hypervolume indicator is indeed equivalent to the overall objective of finding a good approximation of the Pareto front. To address this question, we compare the optimal approximation factor with the approximation factor achieved by sets maximizing the hypervolume indicator. We bound the optimal approximation factor of \(n\) points by \(1+\Theta(1/n)\) for arbitrary Pareto fronts. Furthermore, we prove that the same asymptotic approximation ratio is achieved by sets of \(n\) points that maximize the hypervolume indicator. This shows that the speed of convergence of the approximation ratio achieved by maximizing the hypervolume indicator is asymptotically optimal. This implies that for large values of \(n\), sets maximizing the hypervolume indicator quickly approach the optimal approximation ratio. Moreover, our bounds show that also for relatively small values of \(n\), sets maximizing the hypervolume indicator achieve a near-optimal approximation ratio.

Ajwani, Deepak; Friedrich, Tobias Average-case analysis of incremental topological ordering. Discrete Applied Mathematics 2010: 240-250
Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime of \(O(n^2 \,\mathrm{polylog}(n))\) under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce and Kelly (2006) [23].

Friedrich, Tobias; Hebbinghaus, Nils; Neumann, Frank Plateaus can be harder in Multi-Objective Optimization. Theoretical Computer Science 2010: 854-864
In recent years a lot of progress has been made in understanding the behavior of evolutionary computation methods for single- and multi-objective problems. Our aim is to analyze the diversity mechanisms that are implicitly used in evolutionary algorithms for multi-objective problems by rigorous runtime analyses. We show that, even if the population size is small, the runtime can be exponential where corresponding single-objective problems are optimized within polynomial time. To illustrate this behavior we analyze a simple plateau function in a first step and extend our result to a class of instances of the well-known Set Cover problem.

Baswana, Surender; Biswas, Somenath; Doerr, Benjamin; Friedrich, Tobias; Kurur, Piyush P.; Neumann, Frank Computing Single Source Shortest Paths using Single-Objective Fitness Functions. Foundations of Genetic Algorithms (FOGA) 2009: 59-66
Runtime analysis of evolutionary algorithms has become an important part of the theoretical analysis of randomized search heuristics. The first combinatorial problem for which rigorous runtime results have been achieved is the well-known single source shortest path (SSSP) problem. Scharnow, Tinnefeld, and Wegener [PPSN 2002, J. Math. Model. Alg. 2004] proposed a multi-objective approach which solves the problem in expected polynomial time. They also suggest a related single-objective fitness function. However, it was left open whether this solves the problem efficiently and, in a broader context, whether multi-objective fitness functions for problems like the SSSP yield more efficient evolutionary algorithms. In this paper, we show that the single-objective approach yields an efficient (1+1) EA with runtime bounds very close to those of the multi-objective approach.

Bringmann, Karl; Friedrich, Tobias Don't be greedy when calculating hypervolume contributions. Foundations of Genetic Algorithms (FOGA) 2009: 103-112
Most hypervolume-indicator-based optimization algorithms like SIBEA [Zitzler et al. 2007], SMS-EMOA [Beume et al. 2007], or MO-CMA-ES [Igel et al. 2007] remove the solution with the smallest loss with respect to the dominated hypervolume from the population. This is usually iterated \(\lambda\) times until the size of the population no longer exceeds a fixed size \(\mu\). We show that there are populations such that the contributing hypervolume of the \(\lambda\) solutions chosen by this greedy selection scheme can be much smaller than the contributing hypervolume of an optimal set of \(\lambda\) solutions. Selecting the optimal \(\lambda\)-set implies calculating \(\binom{\mu+\lambda}{\mu}\) conventional hypervolume contributions, which is considered computationally too expensive. We present the first hypervolume algorithm which directly calculates the contribution of every set of \(\lambda\) solutions. This gives an additive term of \(\binom{\mu+\lambda}{\mu}\) in the runtime of the calculation instead of a multiplicative factor of \(\binom{\mu+\lambda}{\mu}\). Given a population of size \(n = \mu + \lambda\), our algorithm can calculate a set of \(\lambda \ge 1\) solutions with minimal \(d\)-dimensional hypervolume contribution in time \(O(n^{d/2} \log n + n^\lambda)\) for \(d > 2\). This improves all previously published algorithms by a factor of order \(n^{\min\{\lambda, d/2\}}\) for \(d > 3\). Therefore, even if we remove the solutions one by one greedily as usual, we gain a speedup factor of \(n\) for all \(d > 3\).

Brockhoff, Dimo; Friedrich, Tobias; Hebbinghaus, Nils; Klein, Christian; Neumann, Frank; Zitzler, Eckart On the Effects of Adding Objectives to Plateau Functions. Transactions on Evolutionary Computation 2009: 591-603
In this paper, we examine how adding objectives to a given optimization problem affects the computational effort required to generate the set of Pareto-optimal solutions. Experimental studies show that additional objectives may change the running time behavior of an algorithm drastically. Often it is assumed that more objectives make a problem harder as the number of different tradeoffs may increase with the problem dimension. We show that additional objectives, however, may be both beneficial and obstructive depending on the chosen objective. Our results are obtained by rigorous running time analyses that show the different effects of adding objectives to a well-known plateau function. Additional experiments show that the theoretically shown behavior can be observed for problems with more than one objective.

Bringmann, Karl; Friedrich, Tobias Approximating the Least Hypervolume Contributor: NP-Hard in General, But Fast in Practice. Evolutionary Multi-Criterion Optimization (EMO) 2009: 6-20
The hypervolume indicator is an increasingly popular set measure to compare the quality of two Pareto sets. The basic ingredient of most hypervolume-indicator-based optimization algorithms is the calculation of the hypervolume contribution of single solutions with regard to a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most \((1+\epsilon)\) times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless P = NP) nor to approximate it (unless NP = BPP). Nevertheless, in the second part of the paper we present a very fast approximation algorithm for this problem. We prove that for arbitrarily given \(\epsilon, \delta > 0\) it calculates a solution with contribution at most \((1+\epsilon)\) times the minimal contribution with probability at least \(1-\delta\). Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances which are intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions) within a few seconds.

Doerr, Benjamin; Friedrich, Tobias; Künnemann, Marvin; Sauerwald, Thomas Quasirandom Rumor Spreading: An Experimental Analysis. Algorithm Engineering and Experiments (ALENEX) 2009: 145-153
We empirically analyze two versions of the well-known "randomized rumor spreading" protocol to disseminate a piece of information in networks. In the classical model, in each round each informed node informs a random neighbor. At SODA 2008, three of the authors proposed a quasirandom variant. Here, each node has a (cyclic) list of its neighbors. Once informed, it starts at a random position of the list, but from then on informs its neighbors in the order of the list. While for sparse random graphs a better performance of the quasirandom model could be proven, all other results show that, independent of the structure of the lists, the same asymptotic performance guarantees hold as for the classical model. In this work, we compare the two models experimentally. This not only shows that the quasirandom model generally is faster (which was expected, though maybe not to this extent), but also that the runtime is more concentrated around the mean value (which is surprising given that much fewer random bits are used in the quasirandom process). These advantages are also observed in a lossy communication model, where each transmission does not reach its target with a certain probability, and in an asynchronous model, where nodes send at random times drawn from an exponential distribution. We also show that the particular structure of the lists has little influence on the efficiency. In particular, there is no problem if all nodes use an identical order to inform their neighbors.
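The quasirandom variant described above is easy to simulate: each informed node keeps a pointer into its cyclic neighbor list, initialized at a random offset. The following sketch (a hypothetical simulator, assuming a connected graph given as adjacency lists) counts the rounds until everyone is informed.

```python
import random

def quasirandom_rounds(adj, start, seed=0):
    """Quasirandom push protocol: each informed node serves a cyclic
    list of its neighbors, starting at a random offset."""
    rng = random.Random(seed)
    n = len(adj)
    informed = {start}
    ptr = {}  # per-node position in its cyclic neighbor list
    rounds = 0
    while len(informed) < n:
        newly = []
        for v in list(informed):
            if not adj[v]:
                continue
            if v not in ptr:  # first activation: random start position
                ptr[v] = rng.randrange(len(adj[v]))
            newly.append(adj[v][ptr[v]])
            ptr[v] = (ptr[v] + 1) % len(adj[v])
        informed.update(newly)
        rounds += 1
    return rounds
```

Unlike the fully random protocol, a node here never pushes to the same neighbor twice before having served all others, which is the source of the robustness results discussed above.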

Doerr, Benjamin; Friedrich, Tobias; Sauerwald, Thomas Quasirandom Rumor Spreading: Expanders, Push vs. Pull, and Robustness. International Colloquium on Automata, Languages and Programming (ICALP) 2009: 366-377
Randomized rumor spreading is an efficient protocol to distribute information in networks. Recently, a quasirandom version has been proposed and proven to work equally well on many graphs and better for sparse random graphs. In this work we show three main results for the quasirandom rumor spreading model. We exhibit a natural expansion property for networks which suffices to make quasirandom rumor spreading inform all nodes of the network in logarithmic time with high probability. This expansion property is satisfied, among others, by many expander graphs, random regular graphs, and Erdős-Rényi random graphs. For all network topologies, we show that if one of the push or pull model works well, so does the other. We also show that quasirandom rumor spreading is robust against transmission failures. If each message sent out gets lost with probability \(f\), then the runtime increases only by a factor of \(O(1/(1-f))\).

Friedrich, Tobias; Oliveto, Pietro Simone; Sudholt, Dirk; Witt, Carsten Analysis of Diversity-Preserving Mechanisms for Global Exploration. Evolutionary Computation 2009: 455-476
Maintaining diversity is important for the performance of evolutionary algorithms. Diversity-preserving mechanisms can enhance global exploration of the search space and enable crossover to find dissimilar individuals for recombination. We focus on the global exploration capabilities of mutation-based algorithms. Using a simple bimodal test function and rigorous runtime analyses, we compare well-known diversity-preserving mechanisms like deterministic crowding, fitness sharing, and others with a plain algorithm without diversification. We show that diversification is necessary for global exploration, but not all mechanisms succeed in finding both optima efficiently. Our theoretical results are accompanied by additional experiments for different population sizes.

Friedrich, Tobias; Horoba, Christian; Neumann, Frank Multiplicative approximations and the hypervolume indicator. Genetic and Evolutionary Computation Conference (GECCO) 2009: 571-578
Indicator-based algorithms have become a very popular approach to solving multi-objective optimization problems. In this paper, we contribute to the theoretical understanding of algorithms maximizing the hypervolume for a given problem by distributing \(\mu\) points on the Pareto front. We examine this common approach with respect to the achieved multiplicative approximation ratio for a given multi-objective problem and relate it to a set of \(\mu\) points on the Pareto front that achieves the best possible approximation ratio. For the class of linear fronts and a class of concave fronts, we prove that the hypervolume gives the best possible approximation ratio. In addition, we examine Pareto fronts of different shapes by numerical calculations and show that the approximation computed by the hypervolume may differ from the optimal approximation ratio.

Friedrich, Tobias; He, Jun; Hebbinghaus, Nils; Neumann, Frank; Witt, Carsten Analyses of Simple Hybrid Algorithms for the Vertex Cover Problem. Evolutionary Computation 2009: 3-19
Hybrid methods are very popular for solving problems from combinatorial optimization. In contrast, the theoretical understanding of the interplay of different optimization methods is rare. In this paper, we take a first step toward the rigorous analysis of such combinations for combinatorial optimization problems. The subject of our analyses is the vertex cover problem, for which several approximation algorithms have been proposed. We point out specific instances where solutions can (or cannot) be improved by the search process of a simple evolutionary algorithm in expected polynomial time.

Friedrich, Tobias; Sauerwald, Thomas; Vilenchik, Dan Smoothed Analysis of Balancing Networks. International Colloquium on Automata, Languages, and Programming (ICALP) 2009: 472-483
In a balancing network each processor has an initial collection of unit-size jobs (tokens) and in each round, pairs of processors connected by balancers split their load as evenly as possible. An excess token (if any) is placed according to some predefined rule. As it turns out, this rule crucially affects the performance of the network. In this work we propose a model that studies this effect. We suggest a model bridging the uniformly random assignment rule and the arbitrary one (in the spirit of smoothed analysis). We start with an arbitrary assignment of balancer directions and then flip each assignment with probability \(\alpha\) independently. For a large class of balancing networks our result implies that after \(O(\log n)\) rounds the discrepancy is \(O((1/2 - \alpha) \log n + \log \log n)\) with high probability. This matches and generalizes known upper bounds for \(\alpha = 0\) and \(\alpha = 1/2\). We also show that a natural network matches the upper bound for any \(\alpha\).

Friedrich, Tobias; Sauerwald, Thomas Near-perfect load balancing by randomized rounding. Symposium on Theory of Computing (STOC) 2009: 121-130
We consider and analyze a new algorithm for balancing indivisible loads on a distributed network with \(n\) processors. The aim is minimizing the discrepancy between the maximum and minimum load. In every time step, paired processors balance their load as evenly as possible. The direction of the excess token is chosen according to a randomized rounding of the participating loads. We prove that in comparison to the corresponding model of Rabani, Sinclair, and Wanka (1998) with arbitrary roundings, the randomization yields an improvement of roughly a square root of the achieved discrepancy in the same number of time steps on all graphs. For the important case of expanders we can even achieve a constant discrepancy in \(O(\log n (\log \log n)^3)\) rounds. This is optimal up to \(\log \log n\) factors, while the best previous algorithms in this setting either require \(\Omega(\log^2 n)\) time or can only achieve a logarithmic discrepancy. This result also demonstrates that with randomized rounding the difference between discrete and continuous load balancing vanishes almost completely.
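The core primitive above, one edge balancing its two indivisible loads, can be sketched in a few lines. This is a hypothetical toy version of a single balancing step only (the paper analyzes the full matching-based process and its discrepancy over many rounds), where the excess token's direction is a fair randomized rounding of the ideal split \((a+b)/2\).

```python
import random

def balance_pair(a, b, rng=None):
    """One balancing step for a paired edge holding loads a and b:
    split as evenly as possible; when a + b is odd, the excess token
    goes to either side with probability 1/2 (randomized rounding)."""
    rng = rng if rng is not None else random.Random(0)
    total = a + b
    lo = total // 2          # floor of the ideal continuous split
    hi = total - lo          # ceil; equals lo when total is even
    if rng.random() < 0.5:
        return hi, lo
    return lo, hi
```

In expectation each endpoint receives exactly \((a+b)/2\), which is what lets the discrete process track the idealized divisible one so closely.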

Doerr, Benjamin; Friedrich, Tobias; Sauerwald, Thomas Quasirandom Rumor Spreading on Expanders. Electronic Notes in Discrete Mathematics 2009: 243-247
Randomized rumor spreading is an efficient way to distribute information in networks. Recently, a quasirandom version of this protocol has been proposed. It was proven that it works equally well or even better in many settings. In this work, we exhibit a natural expansion property for networks, which ensures that quasirandom rumor spreading informs all nodes of the network in logarithmic time with high probability. This expansion property is satisfied, among others, by many expander graphs, random regular graphs, and Erdős-Rényi random graphs.

Friedrich, Tobias; Hebbinghaus, Nils; Neumann, Frank Comparison of simple diversity mechanisms on plateau functions. Theoretical Computer Science 2009: 2455-2462
It is widely assumed and observed in experiments that the use of diversity mechanisms in evolutionary algorithms may have a great impact on their running time. Up to now there is no rigorous analysis pointing out how different diversity mechanisms influence the runtime behavior. We consider evolutionary algorithms that differ from each other in the way they ensure diversity and point out situations where the right mechanism is crucial for the success of the algorithm. The considered evolutionary algorithms either diversify the population with respect to the search points or with respect to function values. Investigating simple plateau functions, we show that using the "right" diversity strategy makes the difference between an exponential and a polynomial runtime. Later on, we examine how the drawback of the "wrong" diversity mechanism can be compensated for by increasing the population size.

Friedrich, Tobias; Neumann, Frank When to use bitwise neutrality. Congress on Evolutionary Computation (CEC) 2008: 997-1003
Representation techniques are important issues when designing successful evolutionary algorithms. Within this field the use of neutrality plays an important role. We examine the use of bitwise neutrality introduced by Poli and López (2007) from a theoretical point of view and show that this mechanism only enhances mutation-based evolutionary algorithms if the number of genotypic bits used per phenotypic bit is not the same for all bits. Using different numbers of genotypic bits for the bits in the phenome, we point out by rigorous runtime analyses that it may reduce the optimization time significantly.

Ajwani, Deepak; Friedrich, Tobias; Meyer, Ulrich An \(O(n^{2.75})\) algorithm for incremental topological ordering. Transactions on Algorithms 2008
We present a simple algorithm which maintains the topological order of a directed acyclic graph (DAG) with \(n\) nodes, under an online edge insertion sequence, in \(O(n^{2.75})\) time, independent of the number \(m\) of edges inserted. For dense DAGs, this is an improvement over the previous best result of \(O(\min(m^{3/2} \log n, m^{3/2} + n^2 \log n))\) by Katriel and Bodlaender (2006). We also provide an empirical comparison of our algorithm with other algorithms for incremental topological sorting.

Brockhoff, Dimo; Friedrich, Tobias; Neumann, Frank Analyzing Hypervolume Indicator Based Algorithms. International Conference on Parallel Problem Solving from Nature (PPSN) 2008: 651-660
Indicator-based methods to tackle multi-objective problems have become popular recently, mainly because they allow user preferences to be incorporated into the search explicitly. Multi-objective Evolutionary Algorithms (MOEAs) using the hypervolume indicator in particular showed better performance than classical MOEAs in experimental comparisons. In this paper, the use of indicator-based MOEAs is investigated for the first time from a theoretical point of view. We carry out running time analyses for an evolutionary algorithm with a \((\mu + 1)\)-selection scheme based on the hypervolume indicator as it is used in most of the recently proposed MOEAs. Our analyses point out two important aspects of the search process. First, we examine how such algorithms can approach the Pareto front. Later on, we point out how they can achieve a good approximation for an exponentially large Pareto front.

Bringmann, Karl; Friedrich, Tobias Approximating the Volume of Unions and Intersections of High-Dimensional Geometric Objects. International Symposium on Algorithms and Computation (ISAAC) 2008: 436-447
We consider the computation of the volume of the union of high-dimensional geometric objects. While showing that this problem is #P-hard already for very simple bodies (i.e., axis-parallel boxes), we give a fast FPRAS for all objects where one can: (1) test whether a given point lies inside the object, (2) sample a point uniformly, (3) calculate the volume of the object in polynomial time. All three oracles can be weak, that is, just approximate. This implies that Klee's measure problem and the hypervolume indicator can be approximated efficiently even though they are #P-hard and hence cannot be solved exactly in time polynomial in the number of dimensions unless P = NP. Our algorithm also allows us to efficiently approximate the volume of the union of convex bodies given by weak membership oracles. For the analogous problem of the intersection of high-dimensional geometric objects we prove #P-hardness for boxes and show that there is no multiplicative polynomial-time \(2^{d^{1-\epsilon}}\)-approximation for certain boxes unless NP = BPP, but give a simple additive polynomial-time \(\epsilon\)-approximation.

Friedrich, Tobias; Oliveto, Pietro Simone; Sudholt, Dirk; Witt, Carsten Theoretical analysis of diversity mechanisms for global exploration. Genetic and Evolutionary Computation Conference (GECCO) 2008: 945-952
Maintaining diversity is important for the performance of evolutionary algorithms. Diversity-preserving mechanisms can enhance global exploration of the search space and enable crossover to find dissimilar individuals for recombination. We focus on the global exploration capabilities of mutation-based algorithms. Using a simple bimodal test function and rigorous runtime analyses, we compare well-known diversity-preserving mechanisms like deterministic crowding, fitness sharing, and others with a plain algorithm without diversification. We show that diversification is necessary for global exploration, but not all mechanisms succeed in finding both optima efficiently. Our theoretical results are accompanied by additional experiments for different population sizes.

Friedrich, Tobias; Horoba, Christian; Neumann, Frank Runtime Analyses for Using Fairness in Evolutionary Multi-Objective Optimization. Parallel Problem Solving from Nature (PPSN) 2008: 671-680
It is widely assumed that evolutionary algorithms for multi-objective optimization problems should use certain mechanisms to achieve a good spread over the Pareto front. In this paper, we examine such mechanisms from a theoretical point of view and analyze simple algorithms incorporating the concept of fairness introduced by Laumanns et al. [7]. This mechanism tries to balance the number of offspring of all individuals in the current population. We rigorously analyze the runtime behavior of different fairness mechanisms and present showcase examples to point out situations where the right mechanism can speed up the optimization process significantly.

Friedrich, Tobias; Doerr, Benjamin; Klein, Christian; Osbild, Ralf Unbiased Matrix Rounding. Electronic Notes in Discrete Mathematics 2007: 41–46
We show several ways to round a real matrix to an integer one such that the rounding errors in all rows and columns as well as in the whole matrix are less than one. This is a classical problem with applications in many fields, in particular statistics. We improve earlier solutions of different authors in two ways. First, for rounding matrices of size \(m \times n\), we reduce the runtime from \(O((mn)^2)\) to \(O(mn \log(mn))\). Second, our roundings also have a rounding error of less than one in all initial intervals of rows and columns. Consequently, arbitrary intervals have an error of at most two. This is particularly useful in the statistics application of controlled rounding. The same result can be obtained via (dependent) randomized rounding. This has the additional advantage that the rounding is unbiased, that is, for all entries \(y_{ij}\) of our rounding, we have \(E(y_{ij}) = x_{ij}\), where \(x_{ij}\) is the corresponding entry of the input matrix.
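The unbiasedness property can be illustrated on a single entry: round up with probability equal to the fractional part, so the expected value of the rounding equals the input. This is only a minimal sketch of the unbiasedness idea; the paper's algorithm additionally uses *dependent* rounding across entries to keep all row and column errors below one, which independent per-entry rounding does not guarantee:

```python
import math
import random

def unbiased_round(x, rng=random):
    """Randomized rounding of one value: round up with probability
    equal to the fractional part of x, so that E[result] = x.
    (Illustrative helper, not the paper's matrix algorithm.)"""
    f = x - math.floor(x)
    return math.floor(x) + (1 if rng.random() < f else 0)
```

Applying this independently to every matrix entry \(x_{ij}\) gives \(E(y_{ij}) = x_{ij}\), but correlating the coin flips (dependent rounding) is what additionally bounds the error in every row, column, and initial interval.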

Friedrich, Tobias; Hebbinghaus, Nils; Neumann, Frank Rigorous analyses of simple diversity mechanisms. Genetic and Evolutionary Computation Conference (GECCO) 2007: 1219–1225
It is widely assumed, and observed in experiments, that the use of diversity mechanisms in evolutionary algorithms may have a great impact on their running time. Up to now, there has been no rigorous analysis comparing different mechanisms with respect to their runtime behavior. We consider evolutionary algorithms that differ from each other in the way they ensure diversity and point out situations where the right mechanism is crucial for the success of the algorithm. The algorithms considered diversify the population either with respect to the search points or with respect to function values. Investigating simple plateau functions, we show that using the "right" diversity strategy makes the difference between an exponential and a polynomial runtime.

Friedrich, Tobias; He, Jun; Hebbinghaus, Nils; Neumann, Frank; Witt, Carsten On improving approximate solutions by evolutionary algorithms. Congress on Evolutionary Computation (CEC) 2007: 2614–2621
Hybrid methods are very popular for solving problems from combinatorial optimization. In contrast, the theoretical understanding of the interplay of different optimization methods is rare. The aim of this paper is to make a first step toward the rigorous analysis of such combinations for combinatorial optimization problems. The subject of our analyses is the vertex cover problem, for which several approximation algorithms have been proposed. We point out specific instances where solutions can (or cannot) be improved by the search process of a simple evolutionary algorithm in expected polynomial time.

Ajwani, Deepak; Friedrich, Tobias Average-Case Analysis of Online Topological Ordering. International Symposium on Algorithms and Computation (ISAAC) 2007: 464–475
Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either analyzed only for worst-case insertion sequences or evaluated only experimentally on random DAGs. We present the first average-case analysis of online topological ordering algorithms. We prove an expected runtime of \(O(n^2 \operatorname{polylog}(n))\) under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (SODA, 1990), Katriel and Bodlaender (TALG, 2006), and Pearce and Kelly (JEA, 2006). This is much less than the best known worst-case bound of \(O(n^{2.75})\) for this problem.
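For intuition, here is a simplified sketch in the spirit of the Pearce–Kelly algorithm analyzed above: on an order-violating edge insertion, only the nodes whose positions lie between the endpoints are searched and locally reordered. Class and method names are illustrative, and this omits the optimizations of the published variants:

```python
from collections import defaultdict

class OnlineTopoOrder:
    """Maintain a topological ordering of a DAG under edge insertions
    (a minimal Pearce-Kelly-style sketch, not the analyzed code)."""

    def __init__(self, n):
        self.out = defaultdict(list)   # outgoing adjacency
        self.inn = defaultdict(list)   # incoming adjacency
        self.ord = list(range(n))      # ord[v] = current position of v

    def _reachable(self, start, lo, hi, adj):
        # Nodes reachable from start using only nodes at positions [lo, hi].
        seen, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            for w in adj[v]:
                if lo <= self.ord[w] <= hi and w not in seen:
                    stack.append(w)
        return seen

    def add_edge(self, u, v):
        """Insert edge u -> v, repairing the ordering if it is violated."""
        self.out[u].append(v)
        self.inn[v].append(u)
        lb, ub = self.ord[v], self.ord[u]
        if lb >= ub:
            return  # ordering already consistent; nothing to do
        fwd = self._reachable(v, lb, ub, self.out)   # affected successors
        bwd = self._reachable(u, lb, ub, self.inn)   # affected predecessors
        if fwd & bwd:
            raise ValueError("edge would create a cycle")
        # Reassign the vacated positions: predecessors first, then successors.
        nodes = (sorted(bwd, key=lambda x: self.ord[x])
                 + sorted(fwd, key=lambda x: self.ord[x]))
        for node, slot in zip(nodes, sorted(self.ord[x] for x in nodes)):
            self.ord[node] = slot
```

The average-case result above concerns exactly this kind of local-repair scheme: under random insertion order, the affected regions stay small in expectation, which is what drives the \(O(n^2 \operatorname{polylog}(n))\) bound.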

Brockhoff, Dimo; Friedrich, Tobias; Hebbinghaus, Nils; Klein, Christian; Neumann, Frank; Zitzler, Eckart Do additional objectives make a problem harder?. Genetic and Evolutionary Computation Conference (GECCO) 2007: 765–772
In this paper, we examine how adding objectives to a given optimization problem affects the computational effort required to generate the set of Pareto-optimal solutions. Experimental studies show that additional objectives may change the runtime behavior of an algorithm drastically. It is often assumed that more objectives make a problem harder, as the number of different trade-offs may increase with the problem dimension. We show, however, that additional objectives may be both beneficial and obstructive, depending on the chosen objective. Our results are obtained by rigorous runtime analyses that show the different effects of adding objectives to a well-known plateau function.