Friedrich, Tobias; Quinzan, Francesco; Wagner, Markus: Escaping Large Deceptive Basins of Attraction with Heavy Mutation Operators. Genetic and Evolutionary Computation Conference (GECCO) 2018: 293-300
In many Evolutionary Algorithms (EAs), a parameter that needs to be tuned is the mutation rate, which determines the probability for each decision variable to be mutated. Typically, this rate is set to 1/n for the duration of the optimization, where n is the number of decision variables. This setting has the appeal that the expected number of mutated variables per iteration is one. In a recent theoretical study, it was proposed to sample the number of mutated variables from a power-law distribution. This results in a significantly higher probability of larger numbers of mutations, so that escaping local optima becomes more likely. In this paper, we propose another class of non-uniform mutation rates. We study the benefits of this operator in terms of average-case black-box complexity analysis and experimental comparison. We consider both pseudo-Boolean artificial landscapes and combinatorial problems (Minimum Vertex Cover and Maximum Cut). We observe that our non-uniform mutation rates significantly outperform the standard choices when dealing with landscapes that exhibit large deceptive basins of attraction.
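A minimal sketch of the power-law sampling of the mutation strength referred to above (in Python; the exponent beta = 1.5, the truncation at n/2, and the function name are illustrative assumptions, not the exact operator studied in the paper):

import random

def powerlaw_mutation(x, beta=1.5):
    # Draw the number of flipped bits k from a truncated power law,
    # P(k) proportional to k^(-beta) for k in {1, ..., n/2}.
    n = len(x)
    upper = max(1, n // 2)
    weights = [k ** (-beta) for k in range(1, upper + 1)]
    k = random.choices(range(1, upper + 1), weights=weights)[0]
    # Flip k distinct, uniformly chosen positions.
    y = list(x)
    for i in random.sample(range(n), k):
        y[i] = 1 - y[i]
    return y

Compared with standard bit mutation at rate 1/n, this assigns markedly more probability mass to large jumps, which is what makes escaping wide basins of attraction more likely.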
Friedrich, Tobias; Kötzing, Timo; Quinzan, Francesco; Sutton, Andrew M.: Improving the Run Time of the (1+1) Evolutionary Algorithm with Luby Sequences. Genetic and Evolutionary Computation Conference (GECCO) 2018: 301-308
In the context of black box optimization, the most common way to handle deceptive attractors is to periodically restart the algorithm. In this paper, we explore the benefits of combining the simple \((1+1)\) Evolutionary Algorithm (EA) with the Luby Universal Strategy, yielding the \((1+1)~EA_{\mathcal{U}}\), a meta-heuristic that does not require parameter tuning. We first consider two artificial pseudo-Boolean landscapes on which the \((1+1)~EA\) exhibits exponential run time. We prove that the \((1+1)~EA_{\mathcal{U}}\) has polynomial run time on both instances. We then consider the Minimum Vertex Cover on two classes of graphs. Again, the \((1+1)~EA\) yields exponential run time on those instances, while the \((1+1)~EA_{\mathcal{U}}\) finds the global optimum in polynomial time. We conclude by studying Makespan Scheduling. We consider an instance on which the \((1+1)~EA\) does not find a \((4/3-\epsilon)\)-approximation in polynomial time, and we show that the \((1+1)~EA_{\mathcal{U}}\) does. Finally, we prove that the \((1+1)~EA_{\mathcal{U}}\) serves as an Efficient Polynomial-time Approximation Scheme (EPTAS) for the Partition Problem, for a \((1+\epsilon)\)-approximation with \(\epsilon > 4/n\).
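The universal restart strategy can be sketched as follows (Python; the callback name run_ea and the budget scaling base are hypothetical, and the underlying (1+1) EA itself is not shown):

def luby(i):
    # i-th term (1-indexed) of the Luby sequence: 1, 1, 2, 1, 1, 2, 4, 1, ...
    k = 1
    while (1 << k) - 1 < i:
        k += 1
    if (1 << k) - 1 == i:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)

def luby_restarts(run_ea, base=1):
    # Give the r-th run a budget of base * luby(r) iterations and restart
    # until run_ea reports success (returns a solution instead of None).
    r = 1
    while True:
        result = run_ea(base * luby(r))
        if result is not None:
            return result
        r += 1

The appeal, as the abstract notes, is that no restart length has to be tuned: Luby's universal schedule is known to stay within a logarithmic factor of any fixed-cutoff restart strategy.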
Friedrich, Tobias; Göbel, Andreas; Quinzan, Francesco; Wagner, Markus: Heavy-tailed Mutation Operators in Single-Objective Combinatorial Optimization. Parallel Problem Solving From Nature (PPSN) 2018: 134-145
A core feature of evolutionary algorithms is their mutation operator. Recently, much attention has been devoted to the study of mutation operators with dynamic and non-uniform mutation rates. Following up on this line of work, we propose a new mutation operator and analyze its performance within the (1+1) Evolutionary Algorithm (EA). Our analyses show that this mutation operator competes with pre-existing ones when used by the (1+1) EA on classes of problems for which results on the other mutation operators are available. We present a jump function for which the performance of the (1+1) EA using any static uniform mutation and any restart strategy can be worse than the performance of the (1+1) EA using our mutation operator with no restarts. We show that the (1+1) EA using our mutation operator achieves a (1/3)-approximation on any non-negative submodular function in polynomial time. This performance matches that of combinatorial local search algorithms specifically designed for this problem. Finally, we evaluate experimentally the performance of the (1+1) EA using our operator on real-world graphs of different origins with up to \(\sim\)37,000 vertices and \(\sim\)1.6 million edges. In comparison with uniform mutation and a recently proposed dynamic scheme, our operator comes out on top on these instances.
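For concreteness, a textbook (1+1) EA skeleton with a pluggable mutation operator, evaluated on the standard jump function (both are generic illustrations; the concrete jump landscape constructed in the paper may differ):

import random

def jump(x, m):
    # Jump_m: OneMax with a deceptive gap of width m just below the optimum.
    n, ones = len(x), sum(x)
    if ones == n or ones <= n - m:
        return m + ones
    return n - ones

def one_plus_one_ea(n, fitness, mutate, max_iters=100000):
    # Keep the offspring whenever it is at least as fit as the parent.
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_iters):
        y = mutate(x)
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx

Any of the mutation operators discussed in these abstracts, e.g. the power-law sketch above, can be passed in as the mutate argument.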
Friedrich, Tobias; Kötzing, Timo; Quinzan, Francesco; Sutton, Andrew Michael: Resampling vs Recombination: a Statistical Run Time Estimation. Foundations of Genetic Algorithms (FOGA) 2017: 25-35
Noise is pervasive in real-world optimization, but there is still little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques, such as statistical resampling. In this paper, we report on several statistical models and theoretical results that help to clarify this reciprocal relationship for a collection of randomized search heuristics on noisy functions. We consider the optimization of pseudo-Boolean functions under additive posterior Gaussian noise and explore the trade-off between noise reduction and the computational cost of resampling. We first perform experiments to find the optimal parameters at a given noise intensity for a mutation-only evolutionary algorithm (EA), a genetic algorithm employing recombination, an estimation of distribution algorithm (EDA), and an ant colony optimization algorithm. We then observe how the optimal parameter depends on the noise intensity for the different algorithms. Finally, we locate the point where statistical resampling costs more than it is worth in terms of run time. We find that the EA requires the highest number of resamples to obtain the best speed-up, whereas crossover reduces both the run time and the number of resamples required. Most surprisingly, we find that EDA-like algorithms require no resampling and can handle noise implicitly.
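The resampling technique examined here admits a very short sketch (Python; OneMax, sigma = 2.0 and k = 25 are arbitrary illustrative choices, not the parameters used in the experiments):

import random

def noisy(f, sigma):
    # Additive posterior Gaussian noise: drawn anew for every evaluation.
    return lambda x: f(x) + random.gauss(0.0, sigma)

def resampled(f_noisy, k):
    # Average k independent evaluations; the standard deviation of the
    # estimate shrinks by a factor of sqrt(k), at k times the evaluation cost.
    return lambda x: sum(f_noisy(x) for _ in range(k)) / k

onemax = lambda x: sum(x)
estimate = resampled(noisy(onemax, sigma=2.0), k=25)

The trade-off studied in the paper is exactly this factor-k cost against the reduced noise, and where it stops paying off for each heuristic.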
Chauhan, Ankit; Friedrich, Tobias; Quinzan, Francesco: Approximating Optimization Problems using EAs on Scale-Free Networks. Genetic and Evolutionary Computation Conference (GECCO) 2017: 235-242
It has been experimentally observed that real-world networks exhibit certain topological properties, such as the small-world property and power-law degree distributions. To study these networks, many random graph models, such as Preferential Attachment, have been proposed. In this paper, we consider deterministic properties which capture power-law degree distribution and degeneracy. Networks with these properties are known as scale-free networks in the literature. Many interesting problems remain NP-hard on scale-free networks. We study the relationship between scale-free properties and the approximation ratio of some commonly used evolutionary algorithms. For Vertex Cover, we observe experimentally that the \((1+1)\) EA always gives a better result than a greedy local search, even when it runs for only \(O(n \log n)\) steps. We construct a scale-free network on which a multi-objective algorithm and a greedy algorithm obtain optimal solutions, while the \((1+1)\) EA obtains the worst possible solution with constant probability. We prove that for Dominating Set, Vertex Cover, Connected Dominating Set and Independent Set, the \((1+1)\) EA obtains a constant-factor approximation in expected run time \(O(n \log n)\) and \(O(n^4)\), respectively. GSEMO gives an even better approximation than the \((1+1)\) EA in expected run time \(O(n^3)\) for Dominating Set, Vertex Cover and Connected Dominating Set on such networks.
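As an illustration of the Vertex Cover setting used in the comparison, a penalised fitness for a 0/1 node-selection vector and the classic greedy baseline (both are textbook formulations assumed here for illustration, not necessarily the exact ones used in the paper):

def vc_fitness(x, edges):
    # To be minimised: uncovered edges are penalised so heavily that any
    # feasible cover beats any infeasible selection; ties break on cover size.
    n = len(x)
    uncovered = sum(1 for u, v in edges if not x[u] and not x[v])
    return uncovered * (n + 1) + sum(x)

def greedy_matching_cover(edges):
    # Classic 2-approximation: take both endpoints of any still-uncovered edge.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

A (1+1) EA minimising vc_fitness first reaches feasibility and then tries to shrink the cover; as the abstract states, on the adversarial scale-free instance constructed in the paper it can nevertheless end in the worst possible solution with constant probability.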