Doerr, Benjamin; Neumann, Frank; Sutton, Andrew M. Improved Runtime Bounds for the (1+1) EA on Random 3-CNF Formulas Based on Fitness-Distance Correlation. Genetic and Evolutionary Computation Conference (GECCO) 2015: 1415–1422
With this paper, we contribute to the theoretical understanding of randomized search heuristics by investigating their behavior on random 3-SAT instances. We improve the results for the (1+1) EA obtained by Sutton and Neumann [PPSN 2014, 942–951] in three ways. First, we reduce the upper bound by a linear factor and prove that the (1+1) EA obtains optimal solutions in time \(O(n \log n)\) with high probability on asymptotically almost all high-density satisfiable 3-CNF formulas. Second, we extend the range of densities for which this bound holds to satisfiable formulas of at least logarithmic density. Finally, we complement these mathematical results with numerical experiments that summarize the behavior of the (1+1) EA on formulas along the density spectrum and suggest that the implicit constants hidden in our bounds are low. Our proofs are based on analyzing the run of the algorithm by establishing a fitness-distance correlation. This approach might be of independent interest, and we are optimistic that it is useful for the analysis of randomized search heuristics in various other settings. To our knowledge, this is the first time that fitness-distance correlation is explicitly used to rigorously prove a performance statement for an evolutionary algorithm.
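The algorithm under analysis is the textbook (1+1) EA on the number of satisfied clauses: flip each bit of the current assignment independently with probability \(1/n\) and keep the offspring if it satisfies at least as many clauses as the parent. A minimal Python sketch, assuming a clause representation as lists of signed variable indices (the function and parameter names are illustrative, not from the paper):

```python
import random

def num_satisfied(clauses, x):
    """Count clauses satisfied by assignment x (a list of 0/1 values).
    Each clause is a list of non-zero ints: literal v means variable v-1
    must be 1, and -v means it must be 0."""
    return sum(
        1 for clause in clauses
        if any((x[abs(l) - 1] == 1) == (l > 0) for l in clause)
    )

def one_plus_one_ea(clauses, n, max_iters=100_000):
    """(1+1) EA: standard bit mutation with rate 1/n, elitist selection."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = num_satisfied(clauses, x)
    for _ in range(max_iters):
        if fx == len(clauses):                # all clauses satisfied
            return x
        y = [b ^ (random.random() < 1.0 / n) for b in x]  # flip each bit w.p. 1/n
        fy = num_satisfied(clauses, y)
        if fy >= fx:                          # accept if not worse
            x, fx = y, fy
    return x
```

For example, `one_plus_one_ea([[1, -2, 3], [-1, 2, 4]], n=4)` searches for an assignment of four variables satisfying both clauses.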
Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. Robustness of Ant Colony Optimization to Noise. Genetic and Evolutionary Computation Conference (GECCO) 2015: 17–24
Recently, Ant Colony Optimization (ACO) algorithms have been proven to be efficient in uncertain environments, such as noisy or dynamically changing fitness functions. Most of these analyses focus on combinatorial problems, such as path finding. We analyze an ACO algorithm in a setting where we try to optimize the simple OneMax test function, but with additive posterior noise sampled from a Gaussian distribution. Without noise, the classical \((\mu+1)\)-EA outperforms any ACO algorithm, with smaller \(\mu\) being better; however, with large noise, the \((\mu+1)\)-EA fails, even for high values of \(\mu\) (which are known to help against small noise). In this paper we show that ACO is able to deal with arbitrarily large noise in a graceful manner; that is, as long as the evaporation factor \(\rho\) is small enough depending on the variance \(\sigma^2\) of the noise and the dimension \(n\) of the search space \((\rho = o(1/(n(n + \sigma \log n)^2 \log n)))\), optimization will be successful.
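The mechanism behind this robustness can be sketched with a simplified MMAS-style ACO on noisy OneMax: each pheromone value is the probability of sampling the corresponding bit as 1, and a small evaporation factor \(\rho\) makes the pheromones change so slowly that the Gaussian noise averages out over many iterations. The following Python sketch makes illustrative assumptions (pheromone borders \(1/n\) and \(1 - 1/n\), fresh noisy re-evaluation of the best-so-far solution in every comparison) and is not the paper's exact algorithm:

```python
import random

def noisy_onemax(x, sigma):
    """OneMax value plus additive posterior Gaussian noise."""
    return sum(x) + random.gauss(0.0, sigma)

def simple_aco(n, rho, sigma, max_iters=1_000_000):
    """Simplified MMAS-style ACO: tau[i] is the probability of sampling
    bit i as 1; the best-so-far solution reinforces the pheromones with
    evaporation factor rho."""
    tau = [0.5] * n                        # uniform initial pheromones
    lo, hi = 1.0 / n, 1.0 - 1.0 / n        # pheromone borders
    best = None
    for _ in range(max_iters):
        x = [1 if random.random() < t else 0 for t in tau]
        # Noisy comparison: both solutions are evaluated with fresh noise.
        if best is None or noisy_onemax(x, sigma) >= noisy_onemax(best, sigma):
            best = x
        # Evaporate, deposit toward the current best, then clamp to borders.
        tau = [min(hi, max(lo, (1 - rho) * t + rho * b))
               for t, b in zip(tau, best)]
        if sum(best) == n:                 # true optimum reached (for
            return best                    # demonstration only; the algorithm
    return best                            # itself only sees noisy values)
```

The analysis says the small-\(\rho\) regime, e.g. \(\rho = o(1/(n(n + \sigma \log n)^2 \log n))\), suffices for successful optimization even under arbitrarily large noise.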
Doerr, Benjamin; Doerr, Carola; Kötzing, Timo Solving Problems with Unknown Solution Length at (Almost) No Extra Cost. Genetic and Evolutionary Computation Conference (GECCO) 2015: 831–838
Most research in the theory of evolutionary computation assumes that the problem at hand has a fixed problem size. This assumption does not always apply to real-world optimization challenges, where the length of an optimal solution may be unknown a priori. Following up on previous work of Cathabard, Lehre, and Yao [FOGA 2011], we analyze variants of the (1+1) evolutionary algorithm for problems with unknown solution length. For their setting, in which the solution length is sampled from a geometric distribution, we provide mutation rates that yield an expected optimization time of the same order as that of the (1+1) EA knowing the solution length. We then show that almost the same run times can be achieved even if no a priori information on the solution length is available. Finally, we provide mutation rates suitable for settings in which neither the solution length nor the positions of the relevant bits are known. Again we obtain almost optimal run times for the OneMax and LeadingOnes test functions, thus solving an open problem from Cathabard et al.
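One way to cope with an unknown solution length is a position-dependent mutation operator whose per-position rates form a convergent series, so that every bit is flipped with positive probability while the expected total number of flips per step stays constant. A minimal Python sketch of that idea, assuming the illustrative rate sequence \(p_i = 1/(2 i \log_2^2(i+1))\) and a finite buffer standing in for the unbounded bit string (neither is the paper's exact operator):

```python
import math
import random

def ea_unknown_length(hidden_n, buffer=1_000, max_iters=1_000_000):
    """(1+1) EA on OneMax whose true length hidden_n is unknown to the
    algorithm: it mutates a long buffer of bits with decaying
    position-dependent rates, and only the first hidden_n positions
    contribute to fitness."""
    # Summable rate sequence: bit i (1-indexed) flips w.p. 1/(2 i log2(i+1)^2).
    rates = [1.0 / (2.0 * i * math.log2(i + 1) ** 2)
             for i in range(1, buffer + 1)]

    def fitness(z):
        return sum(z[:hidden_n])           # hidden objective

    x = [random.randint(0, 1) for _ in range(buffer)]
    fx = fitness(x)
    for _ in range(max_iters):
        if fx == hidden_n:                 # all relevant bits set
            return x[:hidden_n]
        y = [b ^ (random.random() < p) for b, p in zip(x, rates)]
        fy = fitness(y)
        if fy >= fx:                       # elitist acceptance
            x, fx = y, fy
    return x[:hidden_n]
```

Calling, say, `ea_unknown_length(hidden_n=50)` optimizes a 50-bit OneMax instance without the algorithm ever being told the value 50; bits beyond the relevant prefix drift neutrally and are simply ignored by the hidden objective.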