Frahnow, Clemens; Kötzing, Timo: Ring Migration Topology Helps Bypassing Local Optima. Parallel Problem Solving from Nature (PPSN) 2018: 129-140
Running several evolutionary algorithms in parallel and occasionally exchanging good solutions is referred to as an island model. The idea is that the independence of the different islands leads to diversity, thus possibly exploring the search space better. Many theoretical analyses so far have found that a complete (or sufficiently quickly expanding) topology as the underlying migration graph is most efficient for optimization, even though a quick dissemination of individuals leads to a loss of diversity. We suggest a simple fitness function Fork with two local optima parametrized by \(r \geq 2\) and a scheme for composite fitness functions. We show that, while the (1+1) EA gets stuck in a bad local optimum and incurs a run time of \(\Theta(n^{2r})\) fitness evaluations on Fork, island models with a complete topology can achieve a run time of \(\Theta(n^{1.5r})\) by making use of rare migrations in order to explore the search space more effectively. Finally, the ring topology, making use of rare migrations and a large diameter, can achieve a run time of \(\tilde{\Theta}(n^r)\), the black-box complexity of Fork. This shows that the ring topology can be preferable to the complete topology in order to maintain diversity.
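As an illustration of the general scheme, a minimal Python sketch of a ring island model with rare migration follows. The fitness function, island count, and migration probability are placeholders (the paper analyzes the Fork function with specific parameters and migration schedules), so this shows the idea rather than the analyzed algorithm:

```python
import random

def one_max(x):
    """Placeholder fitness; the paper's analysis uses the Fork function."""
    return sum(x)

def one_plus_one_step(x, fitness):
    """One (1+1) EA generation: flip each bit with probability 1/n,
    keep the offspring if it is at least as fit."""
    n = len(x)
    y = [b ^ (random.random() < 1 / n) for b in x]
    return y if fitness(y) >= fitness(x) else x

def ring_island_model(n, islands=8, migration_prob=0.01, generations=10_000):
    """Each island runs a (1+1) EA; rarely, each island may copy its
    left ring neighbor's individual if that individual is better."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(islands)]
    for _ in range(generations):
        pop = [one_plus_one_step(x, one_max) for x in pop]
        if random.random() < migration_prob:  # rare migration round
            migrants = [pop[(i - 1) % islands][:] for i in range(islands)]
            for i in range(islands):
                if one_max(migrants[i]) > one_max(pop[i]):
                    pop[i] = migrants[i]
    return max(pop, key=one_max)
```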
Friedrich, Tobias; Göbel, Andreas; Quinzan, Francesco; Wagner, Markus: Heavy-tailed Mutation Operators in Single-Objective Combinatorial Optimization. Parallel Problem Solving from Nature (PPSN) 2018: 134-145
A core feature of evolutionary algorithms is their mutation operator. Recently, much attention has been devoted to the study of mutation operators with dynamic and non-uniform mutation rates. Following up on this line of work, we propose a new mutation operator and analyze its performance on the (1+1) Evolutionary Algorithm (EA). Our analyses show that this mutation operator competes with pre-existing ones when used by the (1+1) EA on classes of problems for which results on the other mutation operators are available. We present a jump function for which the performance of the (1+1) EA using any static uniform mutation and any restart strategy can be worse than the performance of the (1+1) EA using our mutation operator with no restarts. We show that the (1+1) EA using our mutation operator achieves a \(1/3\)-approximation on any non-negative submodular function in polynomial time. This performance matches that of combinatorial local search algorithms specifically designed to solve this problem. Finally, we evaluate experimentally the performance of the (1+1) EA using our operator on real-world graphs of different origins with up to \(\sim\)37,000 vertices and \(\sim\)1.6 million edges. In comparison with uniform mutation and a recently proposed dynamic scheme, our operator comes out on top on these instances.
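One common way to realize a heavy-tailed mutation operator is to draw the number of flipped bits from a power-law distribution; a hedged sketch follows. The exponent \(\beta = 1.5\) and the normalization are assumptions for illustration and need not match the operator defined in the paper:

```python
import random

def power_law_k(n, beta=1.5):
    """Sample k in {1, ..., n} with Pr[k] proportional to k^(-beta)."""
    weights = [k ** (-beta) for k in range(1, n + 1)]
    return random.choices(range(1, n + 1), weights=weights)[0]

def heavy_tailed_mutation(x, beta=1.5):
    """Flip k uniformly chosen distinct bits, with heavy-tailed k, so
    that large jumps occur with polynomially small (rather than
    exponentially small) probability."""
    y = x[:]
    for i in random.sample(range(len(x)), power_law_k(len(x), beta)):
        y[i] ^= 1
    return y
```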
Kötzing, Timo; Krejca, Martin S.: First-Hitting Times Under Additive Drift. Parallel Problem Solving from Nature (PPSN) 2018: 92-104
For the last ten years, almost every theoretical result concerning the expected run time of a randomized search heuristic used drift theory, making it arguably the most important tool in this domain. Its success is due to its ease of use and its powerful result: drift theory allows the user to derive bounds on the expected first-hitting time of a random process by bounding expected local changes of the process – the drift. This is usually far easier than bounding the expected first-hitting time directly. Due to the widespread use of drift theory, it is of utmost importance to have the best drift theorems possible. We improve the fundamental additive, multiplicative, and variable drift theorems by stating them in a form as general as possible and providing examples of why the restrictions we keep are still necessary. Our additive drift theorem for upper bounds only requires the process to be nonnegative, that is, we remove unnecessary restrictions like a finite, discrete, or bounded search space. As corollaries, the same is true for our upper bounds in the case of variable and multiplicative drift.
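In its classical form (the paper generalizes the conditions), the additive drift theorem for upper bounds reads as follows, with \((X_t)\) a nonnegative process and \(T\) the first time it hits 0:

```latex
% Additive drift, upper bound (schematic classical statement): if the
% process decreases by at least delta per step in expectation, then the
% expected hitting time of 0 is at most the initial value over delta.
\[
  \mathbb{E}[X_t - X_{t+1} \mid X_0, \dots, X_t] \geq \delta > 0
  \quad \text{for all } t < T
  \quad \Longrightarrow \quad
  \mathbb{E}[T \mid X_0] \leq \frac{X_0}{\delta}.
\]
```

For example, if a process starts at \(X_0 = n\) and loses at least \(1/2\) per step in expectation, the theorem immediately gives \(\mathbb{E}[T] \leq 2n\).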
Kötzing, Timo; Krejca, Martin S.: First-Hitting Times for Finite State Spaces. Parallel Problem Solving from Nature (PPSN) 2018: 79-91
One of the most important aspects of a randomized algorithm is bounding its expected run time on various problems. Formally speaking, this means bounding the expected first-hitting time of a random process. The two arguably most popular tools to do so are the fitness level method and drift theory. The fitness level method considers arbitrary transition probabilities but only allows the process to move toward the goal. On the other hand, drift theory allows the process to move into any direction as long as it moves closer to the goal in expectation; however, this tendency has to be monotone and, thus, the transition probabilities cannot be arbitrary. We provide a result that combines the benefits of these two approaches: our result gives a lower and an upper bound for the expected first-hitting time of a random process over \(\{0, \ldots, n\}\) that is allowed to move forward and backward by 1 and can use arbitrary transition probabilities. If the transition probabilities are known, our bounds coincide and yield the exact value of the expected first-hitting time. Further, we state the stationary distribution as well as the mixing time of a special case of our scenario.
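When the transition probabilities are known, such expected first-hitting times can be computed exactly via the standard recurrence for birth-death chains; a small Python sketch with hypothetical inputs follows:

```python
def expected_hitting_time(p, q, start=0):
    """Exact expected time for a +/-1 chain on {0, ..., n} to first
    reach n, where p[i] is the probability of moving i -> i+1, q[i]
    of moving i -> i-1, and the chain stays put otherwise. Requires
    q[0] == 0 and p[i] > 0 for all i."""
    t = [1 / p[0]]  # t[i] = expected time to get from i to i+1
    for i in range(1, len(p)):
        t.append((1 + q[i] * t[i - 1]) / p[i])
    return sum(t[start:])

# Example: a fair +/-1 random walk reflected at 0 reaches n = 10 in
# expectation after n^2 = 100 steps.
print(expected_hitting_time([1.0] + [0.5] * 9, [0.0] + [0.5] * 9))
```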
Kötzing, Timo; Lagodzinski, J. A. Gregor; Lengler, Johannes; Melnichenko, Anna: Destructiveness of Lexicographic Parsimony Pressure and Alleviation by a Concatenation Crossover in Genetic Programming. Parallel Problem Solving from Nature (PPSN) 2018: 42-54
For theoretical analyses there are two characteristics distinguishing GP from many other areas of evolutionary computation. First, the variable-size representation, which in particular can lead to bloat (i.e., the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had a surprisingly small share in this work. We analyze a simple crossover operator in combination with local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); the resulting algorithm is denoted Concatenation Crossover GP. For this purpose, three variants of the well-studied Majority test function with large plateaus are considered. We show that the Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants, independent of employing bloat control.
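A minimal Python sketch of the two ingredients named here, lexicographic parsimony pressure and concatenation crossover, is given below; the placeholder fitness stands in for the Majority variants studied in the paper:

```python
def fitness(genome):
    """Placeholder; the paper uses variants of the Majority function."""
    return sum(genome)

def lex_parsimony_key(genome):
    """Lexicographic parsimony pressure: maximize fitness first and,
    among equally fit genomes, prefer the shorter (less bloated) one."""
    return (fitness(genome), -len(genome))

def concatenation_crossover(a, b):
    """Concatenation crossover on variable-length genomes: simply
    append one parent to the other; parsimony pressure is then
    responsible for trimming the redundant parts."""
    return a + b

# Survivor selection under parsimony pressure picks the short genome:
best = max([[1, 0, 1], [1, 1], [1, 1, 0]], key=lex_parsimony_key)  # [1, 1]
```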
Doerr, Benjamin; Doerr, Carola; Kötzing, Timo: Provably Optimal Self-Adjusting Step Sizes for Multi-Valued Decision Variables. Parallel Problem Solving from Nature (PPSN) 2016: 782-791
We regard the problem of maximizing a OneMax-like function defined over an alphabet of size \(r\). In previous work [GECCO 2016] we have investigated how three different mutation operators influence the performance of Randomized Local Search (RLS) and the (1+1) Evolutionary Algorithm. This work revealed that among these natural mutation operators none is superior to the other two for any choice of \(r\). We have also given in [GECCO 2016] some indication that the best achievable run time for large \(r\) is \(\Theta(n \log r(\log n + \log r))\), regardless of how the mutation operator is chosen, as long as it is a static choice (i.e., the distribution used for variation of the current individual does not change over time). In this work we show that we can achieve a better performance if we allow for adaptive mutation operators. More precisely, we analyze the performance of RLS using a self-adjusting mutation strength. In this algorithm the size of the steps taken in each iteration depends on the success of previous iterations. That is, the mutation strength is increased after a successful iteration and it is decreased otherwise. We show that this idea yields an expected optimization time of \(\Theta(n(\log n + \log r))\), which is optimal among all comparison-based search heuristics. This is the first time that self-adjusting parameter choices are shown to outperform static choices on a discrete multi-valued optimization problem.
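A minimal Python sketch of this adaptation scheme follows, with assumed doubling/halving updates (the paper's exact update rule, objective, and step-size caps may differ):

```python
import random

def self_adjusting_rls(target, r, max_iters=100_000):
    """RLS on an r-valued OneMax-like objective: minimize the total
    distance to a hidden target in {0, ..., r-1}^n. The step size is
    doubled after a strictly improving step and halved (down to 1)
    otherwise."""
    n = len(target)
    x = [random.randrange(r) for _ in range(n)]
    dist = lambda y: sum(abs(a - b) for a, b in zip(y, target))
    step = 1
    for _ in range(max_iters):
        if dist(x) == 0:
            break
        y = x[:]
        i = random.randrange(n)  # change a single random position
        y[i] = min(r - 1, max(0, y[i] + random.choice((-1, 1)) * step))
        if dist(y) < dist(x):
            x, step = y, min(2 * step, r - 1)  # success: larger steps
        else:
            step = max(1, step // 2)           # failure: smaller steps
    return x
```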
Gao, Wanru; Friedrich, Tobias; Neumann, Frank: Fixed-Parameter Single Objective Search Heuristics for Minimum Vertex Cover. Parallel Problem Solving from Nature (PPSN) 2016: 740-750
We consider how well-known branching approaches for the classical minimum vertex cover problem can be turned into randomized initialization strategies with provable performance guarantees and investigate them experimentally. Furthermore, we show how these techniques can be built into local search components and analyze a basic local search variant that is similar to a state-of-the-art approach called NuMVC. Our experimental results for the two local search approaches show that making use of more complex branching strategies in the local search component can lead to better results on various benchmark graphs.
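As a toy illustration of turning a branching rule into a randomized initialization: the classical rule branches on an uncovered edge \(\{u, v\}\) by putting either \(u\) or \(v\) into the cover, and choosing the branch at random yields a random feasible starting cover. The sketch below is a simplified stand-in for the more involved branching strategies investigated in the paper:

```python
import random

def randomized_branching_cover(edges):
    """While some edge is uncovered, 'branch' on it by adding one of
    its endpoints, chosen uniformly at random; the result is always a
    valid vertex cover."""
    cover = set()
    for u, v in random.sample(edges, len(edges)):  # random edge order
        if u not in cover and v not in cover:
            cover.add(random.choice((u, v)))
    return cover

# Example: a path 1-2-3-4; possible outputs include {2, 3} and {1, 3}.
print(randomized_branching_cover([(1, 2), (2, 3), (3, 4)]))
```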
Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M.: Graceful Scaling on Uniform versus Steep-Tailed Noise. Parallel Problem Solving from Nature (PPSN) 2016: 761-770
Recently, different evolutionary algorithms (EAs) have been analyzed in noisy environments. The most frequently used noise model for this was additive posterior noise (noise added after the fitness evaluation) taken from a Gaussian distribution. In particular, for this setting it was shown that the \((\mu + 1)\)-EA on OneMax does not scale gracefully (higher noise cannot efficiently be compensated for by higher \(\mu\)). In this paper we want to understand whether there is anything special about the Gaussian distribution that makes the \((\mu + 1)\)-EA not scale gracefully. We keep the setting of posterior noise, but we look at other distributions. We see that for exponential tails the \((\mu + 1)\)-EA on OneMax also does not scale gracefully, for similar reasons as in the case of Gaussian noise. On the other hand, for uniform distributions (as well as other, similar distributions) we see that the \((\mu + 1)\)-EA on OneMax does scale gracefully, indicating the importance of the noise model.
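The noise models themselves are simple to state; the following sketch shows additive posterior noise on OneMax with a Gaussian versus a uniform distribution (the scale parameter is illustrative, not the paper's parametrization):

```python
import random

def one_max(x):
    return sum(x)

def noisy_fitness(x, noise="gaussian", scale=1.0):
    """Posterior noise: the noise term is added after the exact
    fitness evaluation."""
    if noise == "gaussian":
        return one_max(x) + random.gauss(0, scale)         # unbounded tails
    if noise == "uniform":
        return one_max(x) + random.uniform(-scale, scale)  # bounded support
    raise ValueError(f"unknown noise model: {noise}")
```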
Friedrich, Tobias; Kötzing, Timo; Sutton, Andrew M.: On the Robustness of Evolving Populations. Parallel Problem Solving from Nature (PPSN) 2016: 771-781
Most theoretical work that studies the benefit of recombination focuses on the ability of crossover to speed up optimization on specific search problems. In this paper, we take a slightly different perspective and investigate recombination in the context of evolving solutions that exhibit \(\emph{mutational}\) robustness, i.e., they display insensitivity to small perturbations. Various models in population genetics have demonstrated that increasing the effective recombination rate promotes the evolution of robustness. We show that this result also holds in the context of evolutionary computation by proving that crossover promotes the evolution of robust solutions in the standard \((\mu+1)\) GA. Surprisingly, our results show that the effect is present even when robust solutions are at a selective disadvantage due to lower fitness values.
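As a simple proxy for this notion of robustness (the paper's formal definition may differ), one can measure the fraction of one-bit mutations that do not decrease fitness:

```python
def mutational_robustness(x, fitness):
    """Fraction of Hamming-1 neighbors of x that are at least as fit,
    i.e., insensitivity to small perturbations."""
    f, n = fitness(x), len(x)
    robust = 0
    for i in range(n):
        y = x[:]
        y[i] ^= 1
        robust += fitness(y) >= f
    return robust / n
```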
Dang, Duc-Cuong; Lehre, Per Kristian; Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Oliveto, Pietro S.; Sudholt, Dirk; Sutton, Andrew M.: Emergence of Diversity and its Benefits for Crossover in Genetic Algorithms. Parallel Problem Solving from Nature (PPSN) 2016: 890-900
Population diversity is essential for avoiding premature convergence in Genetic Algorithms (GAs) and for the effective use of crossover. Yet the dynamics of how diversity emerges in populations are not well understood. We use rigorous runtime analysis to gain insight into population dynamics and GA performance for a standard \((\mu+1)\) GA and the \(Jump_k\) test function. By studying the stochastic process underlying the size of the largest collection of identical genotypes we show that the interplay of crossover followed by mutation may serve as a catalyst leading to a sudden burst of diversity. This leads to improvements of the expected optimisation time of order \(\Omega(n/ \log n)\) compared to mutation-only algorithms like the \((1+1)\) EA.
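For reference, the \(Jump_k\) benchmark is standardly defined as follows (with \(|x|_1\) the number of one-bits in \(x \in \{0,1\}^n\)); all points with more than \(n-k\) but fewer than \(n\) one-bits form a fitness valley that mutation alone must jump across:

```latex
\[
  \mathrm{Jump}_k(x) =
  \begin{cases}
    k + |x|_1 & \text{if } |x|_1 \leq n - k \text{ or } |x|_1 = n,\\
    n - |x|_1 & \text{otherwise.}
  \end{cases}
\]
```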
Friedrich, Tobias; Neumann, Frank: Maximizing Submodular Functions under Matroid Constraints by Multi-objective Evolutionary Algorithms. Parallel Problem Solving from Nature (PPSN) 2014: 922-931
Nominated for Best Paper Award
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function \(f\) under a given set of constraints. In this paper, we investigate the runtime of a multi-objective evolutionary algorithm called GSEMO until it has obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints we show that GSEMO achieves a \((1 - 1/e)\)-approximation in expected time \(O(n^2(\log n + k))\), where \(k\) is the value of the given constraint. For the case of non-monotone submodular functions with \(k\) matroid intersection constraints, we show that GSEMO achieves a \(1/(k + 2 + 1/k + \epsilon)\)-approximation in expected time \(O(n^{k+5}\log(n)/\epsilon)\).
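A minimal Python sketch of the generic GSEMO loop follows; the particular biobjective formulation for submodular maximization and the handling of the constraints are the paper's contribution, so the objective \(f\) below is an arbitrary placeholder to be maximized componentwise:

```python
import random

def gsemo(f, n, evaluations=10_000):
    """Global SEMO: keep an archive of mutually non-dominated search
    points; in each step, mutate a uniformly random archive member by
    standard bit mutation and update the archive."""
    dominates = lambda a, b: all(u >= v for u, v in zip(a, b)) and a != b
    x = tuple(random.randint(0, 1) for _ in range(n))
    archive = {x: f(x)}
    for _ in range(evaluations):
        parent = random.choice(list(archive))
        child = tuple(b ^ (random.random() < 1 / n) for b in parent)
        fc = f(child)
        if not any(dominates(fv, fc) for fv in archive.values()):
            archive = {y: fy for y, fy in archive.items()
                       if not dominates(fc, fy)}
            archive[child] = fc
    return archive

# Smoke test: maximize (ones, zeros); every point is Pareto-optimal.
front = gsemo(lambda x: (sum(x), len(x) - sum(x)), n=10, evaluations=2_000)
```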
Bringmann, Karl; Friedrich, Tobias; Klitzke, Patrick: Generic Postprocessing via Subset Selection for Hypervolume and Epsilon-Indicator. Parallel Problem Solving from Nature (PPSN) 2014: 518-527
Most biobjective evolutionary algorithms maintain a population of fixed size \(\mu\) and return the final population at termination. During the optimization process many solutions are considered, but most are discarded. We present two generic postprocessing algorithms which utilize the archive of all non-dominated solutions evaluated during the search. We choose the best \(\mu\) solutions from the archive such that the hypervolume or \(\epsilon\)-indicator is maximized. This postprocessing costs no additional fitness function evaluations and has negligible runtime compared to most EMOAs. We experimentally examine our postprocessing for four standard algorithms (NSGA-II, SPEA2, SMS-EMOA, IBEA) on ten standard test functions (DTLZ 1-2,7, ZDT 1-3, WFG 3-6) and measure the average quality improvement. The median decrease of the distance to the optimal \(\epsilon\)-indicator is \(95\%\), the median decrease of the distance to the optimal hypervolume value is \(86\%\). We observe similar performance on a real-world problem (wind turbine placement).
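For the biobjective case, a simple greedy variant of hypervolume subset selection can be sketched as follows; the paper's postprocessing algorithms are more sophisticated (and also cover the \(\epsilon\)-indicator), so this is only a baseline illustration:

```python
def hv2d(points, ref):
    """Hypervolume of a 2-D maximization front: area dominated by the
    points and dominating the reference point ref."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(set(points), reverse=True):  # descending in x
        if y > prev_y:  # skip dominated points
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

def greedy_hv_subset(archive, mu, ref):
    """Greedily pick mu archive points, each time adding the point
    with the largest hypervolume gain."""
    chosen = []
    for _ in range(min(mu, len(archive))):
        best = max((p for p in archive if p not in chosen),
                   key=lambda p: hv2d(chosen + [p], ref))
        chosen.append(best)
    return chosen

# Example: keep 2 of 3 non-dominated points.
print(greedy_hv_subset([(1, 3), (2, 2), (3, 1)], mu=2, ref=(0, 0)))
```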
Sutton, Andrew M.; Neumann, Frank: Runtime Analysis of Evolutionary Algorithms on Randomly Constructed High-Density Satisfiable 3-CNF Formulas. Parallel Problem Solving from Nature (PPSN) 2014: 942-951
We show that simple mutation-only evolutionary algorithms find a satisfying assignment on two similar models of random planted 3-CNF Boolean formulas in polynomial time with high probability in the high constraint density regime. We extend the analysis to random formulas conditioned on satisfiability (i.e., the so-called filtered distribution) and conclude that most high-density satisfiable formulas are easy for simple evolutionary algorithms. With this paper, we contribute the first rigorous study of randomized search heuristics from the evolutionary computation community on well-studied distributions of random satisfiability problems.
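A minimal Python sketch of the kind of mutation-only EA analyzed here, maximizing the number of satisfied clauses of a 3-CNF formula (the clause encoding below is an assumption for illustration):

```python
import random

def num_satisfied(clauses, x):
    """Clauses are lists of nonzero ints: literal v > 0 is satisfied
    if x[v-1] == 1, literal v < 0 if x[-v-1] == 0."""
    return sum(any(x[v - 1] == 1 if v > 0 else x[-v - 1] == 0 for v in c)
               for c in clauses)

def one_plus_one_ea_sat(clauses, n, max_evals=100_000):
    """(1+1) EA: flip each bit independently with probability 1/n and
    accept the offspring if it satisfies at least as many clauses."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = num_satisfied(clauses, x)
    for _ in range(max_evals):
        if fx == len(clauses):
            break  # satisfying assignment found
        y = [b ^ (random.random() < 1 / n) for b in x]
        fy = num_satisfied(clauses, y)
        if fy >= fx:
            x, fx = y, fy
    return x

# Example: the planted all-ones assignment satisfies this tiny formula.
print(one_plus_one_ea_sat([[1, 2, -3], [2, 3, 4], [-1, 3, 4]], n=4))
```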