Bazgan, Cristina; Brankovic, Ljiljana; Casel, Katrin; Fernau, Henning; Jansen, Klaus; Klein, Kim-Manuel; Lampis, Michael; Liedloff, Mathieu; Monnot, Jérôme; Paschos, Vangelis Th. Algorithmic Aspects of Upper Domination: A Parameterised Perspective. Algorithmic Aspects in Information and Management (AAIM) 2016: 113–124
This paper studies Upper Domination, i.e., the problem of computing the maximum cardinality of a minimal dominating set in a graph, with a focus on parameterised complexity. Our main results include W[1]-hardness for Upper Domination, in contrast to FPT membership for its parameterised dual Co-Upper Domination. The study of structural properties also yields some insight into Upper Total Domination. We further consider graphs of bounded degree and derive upper and lower bounds for kernelisation.
Bazgan, Cristina; Brankovic, Ljiljana; Casel, Katrin; Fernau, Henning. On the Complexity Landscape of the Domination Chain. Algorithms and Discrete Applied Mathematics (CALDAM) 2016: 61–72
In this paper, we survey and supplement the complexity landscape of the domination chain parameters as a whole, including classifications according to approximability and parameterised complexity, and we provide clear pointers to still open questions. As they account for the majority of hitherto unsettled problems, we focus on Upper Irredundance and Lower Irredundance, which correspond to finding the largest irredundant set and the smallest maximal irredundant set, respectively. Both problems are proved NP-hard even for planar cubic graphs. While Lower Irredundance is proved not \(c \log(n)\)-approximable in polynomial time unless NP \(\subseteq\) DTIME\((n^{\log \log n})\), no such result is known for Upper Irredundance. Their complementary versions are constant-factor approximable in polynomial time. All four versions are APX-hard even on cubic graphs.
Arndt, Tobias; Hafner, Danijar; Kellermeier, Thomas; Krogmann, Simon; Razmjou, Armin; Krejca, Martin S.; Rothenberger, Ralf; Friedrich, Tobias. Probabilistic Routing for On-Street Parking Search. European Symposium on Algorithms (ESA) 2016: 6:1–6:13
An estimated \(30\%\) of urban traffic is caused by the search for parking spots. Traffic could be reduced by suggesting effective routes that lead along potential parking spots. In this paper, we formalize parking search as a probabilistic problem on a road graph and show that it is NP-complete. We explore heuristics that optimize the driving duration and the walking distance to the destination, with routes constrained to reach a certain probability threshold of finding a spot. Empirically estimated probabilities of successful parking attempts are provided by TomTom on a per-street basis. We release these probabilities as a dataset of about 80,000 roads covering the Berlin area. This allows parking search algorithms to be evaluated on a real road network with realistic probabilities for the first time. However, for many other areas, parking probabilities are not openly available, and since they are costly to collect, we propose an algorithm that relies on conventional road attributes only. Our experiments show that this algorithm comes within a factor of 1.3 of the baseline in our cost measure, which suggests that conventional road attributes may be sufficient to compute reasonably good parking search routes.
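As a minimal illustration of the probabilistic model (a sketch, not the paper's algorithm), the success probability of a candidate route can be computed from the per-street probabilities under an independence assumption; the function names and the threshold value below are hypothetical.

```python
def route_success_probability(street_probs):
    """Probability of finding at least one free spot along the route,
    assuming independent per-street parking probabilities."""
    miss = 1.0
    for p in street_probs:
        miss *= (1.0 - p)
    return 1.0 - miss

def meets_threshold(street_probs, threshold=0.9):
    """Constraint from the problem formulation: the route must reach a
    given probability threshold of finding a spot."""
    return route_success_probability(street_probs) >= threshold

# Example: three streets with estimated parking probabilities of 20%, 35%, 50%.
print(route_success_probability([0.2, 0.35, 0.5]))  # roughly 0.74
```

A heuristic as described above would search among all routes satisfying `meets_threshold` for one minimizing driving duration plus walking distance.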
Baum, Moritz; Bläsius, Thomas; Gemsa, Andreas; Rutter, Ignaz; Wegner, Franziska. Scalable Exact Visualization of Isocontours in Road Networks via Minimum-Link Paths. European Symposium on Algorithms (ESA) 2016: 7:1–7:18
Isocontours in road networks represent the area that is reachable from a source within a given resource limit. We study the problem of computing accurate isocontours in realistic, large-scale networks. We propose isocontours represented by polygons with a minimum number of segments that separate reachable and unreachable components of the network. Since the resulting problem is not known to be solvable in polynomial time, we introduce several heuristics that run in (almost) linear time and are simple enough to be implemented in practice. A key ingredient is a new practical linear-time algorithm for minimum-link paths in simple polygons. Experiments in a challenging realistic setting show excellent performance of our algorithms in practice, computing near-optimal solutions in a few milliseconds on average, even for long ranges.
Bläsius, Thomas; Friedrich, Tobias; Krohmer, Anton. Hyperbolic Random Graphs: Separators and Treewidth. European Symposium on Algorithms (ESA) 2016: 15:1–15:16
When designing and analyzing algorithms, one can obtain better and more realistic results for practical instances by assuming a certain probability distribution on the input. The worst-case run-time is then replaced by the expected run-time or by bounds that hold with high probability (whp), i.e., with probability \(1 - O(1/n)\), on the random input. Hyperbolic random graphs can be used to model complex real-world networks as they share many important properties such as a small diameter, a large clustering coefficient, and a power-law degree-distribution. Divide and conquer is an important algorithmic design principle that works particularly well if the instance admits small separators. We show that hyperbolic random graphs in fact have comparatively small separators. More precisely, we show that a hyperbolic random graph can be expected to have a balanced separator hierarchy with separators of size \(O(\sqrt{n^{(3-\beta)}})\), \(O(\log n)\), and \(O(1)\) if \(2 < \beta < 3\), \(\beta = 3\) and \(3 < \beta\), respectively (\(\beta\) is the power-law exponent). We infer that these graphs have whp a treewidth of \(O(\sqrt{n^{(3 - \beta)}})\), \(O(\log^{2}n)\), and \(O(\log n)\), respectively. For \(2 < \beta < 3\), this matches a known lower bound. For the more realistic (but harder to analyze) binomial model, we still prove a sublinear bound on the treewidth. To demonstrate the usefulness of our results, we apply them to obtain fast matching algorithms and an approximation scheme for Independent Set.
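The following back-of-the-envelope calculation (a sketch under the assumption of a balanced hierarchy that halves component sizes at each level, not the paper's exact argument) indicates why, for \(2 < \beta < 3\), the separator bound already yields the same treewidth bound: the separator sizes along a root-to-leaf path of the hierarchy form a convergent geometric sum,

\[
  \mathrm{tw}(G) \;\lesssim\; \sum_{i \ge 0} \sqrt{\left(\frac{n}{2^{i}}\right)^{3-\beta}}
  \;=\; \sqrt{n^{3-\beta}} \sum_{i \ge 0} 2^{-i(3-\beta)/2}
  \;=\; O\!\left(\sqrt{n^{3-\beta}}\right)
  \qquad \text{for } 2 < \beta < 3 .
\]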
Bläsius, Thomas; Friedrich, Tobias; Krohmer, Anton; Laue, Sören. Efficient Embedding of Scale-Free Graphs in the Hyperbolic Plane. European Symposium on Algorithms (ESA) 2016: 16:1–16:18
EATCS Best Paper Award
Hyperbolic geometry appears to be intrinsic to many large real networks. We construct and implement a new maximum likelihood estimation algorithm that embeds scale-free graphs in the hyperbolic space. All previous approaches to such embedding algorithms require a runtime of \(\Omega(n^{2})\). Our algorithm achieves quasilinear runtime, which makes it the first algorithm that can embed networks with hundreds of thousands of nodes in less than one hour. We demonstrate the performance of our algorithm on artificial and real networks. In all typical metrics, such as log-likelihood and greedy routing, our algorithm discovers embeddings that are very close to the ground truth.
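For intuition, the (naive, quadratic-time) log-likelihood that such an embedding algorithm maximises can be written down directly from the standard hyperbolic random graph model; the sketch below assumes the usual connection probability \(p(d) = 1/(1 + e^{(d-R)/(2T)})\) with disk radius \(R\) and temperature \(T\), and it is not the paper's quasilinear algorithm.

```python
import math
from itertools import combinations

def hyperbolic_distance(u, v):
    """Distance between u = (r_u, theta_u) and v = (r_v, theta_v) in the hyperbolic plane."""
    (r_u, t_u), (r_v, t_v) = u, v
    dtheta = math.pi - abs(math.pi - abs(t_u - t_v))  # angular difference in [0, pi]
    arg = math.cosh(r_u) * math.cosh(r_v) - math.sinh(r_u) * math.sinh(r_v) * math.cos(dtheta)
    return math.acosh(max(arg, 1.0))

def log_likelihood(coords, edges, R, T):
    """Log-likelihood of an embedding (dict: node -> (r, theta)) for the observed edge set."""
    edge_set = {frozenset(e) for e in edges}
    ll = 0.0
    for u, v in combinations(coords, 2):
        d = hyperbolic_distance(coords[u], coords[v])
        p = 1.0 / (1.0 + math.exp((d - R) / (2.0 * T)))
        ll += math.log(p) if frozenset((u, v)) in edge_set else math.log(1.0 - p)
    return ll
```

Evaluating this sum over all pairs already costs \(\Theta(n^{2})\) time, which is exactly the barrier that a quasilinear embedding algorithm has to circumvent.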
Chauhan, Ankit; Friedrich, Tobias; Rothenberger, Ralf. Greed is Good for Deterministic Scale-Free Networks. Foundations of Software Technology and Theoretical Computer Science (FSTTCS) 2016: 33:1–33:15
Large real-world networks typically follow a power-law degree distribution. To study such networks, numerous random graph models have been proposed. However, real-world networks are not drawn at random. Therefore, Brach, Cygan, Lacki, and Sankowski [SODA 2016] introduced two natural deterministic conditions: (1) a power-law upper bound on the degree distribution (PLB-U) and (2) power-law neighborhoods, that is, the degree distribution of the neighbors of each vertex is also upper bounded by a power law (PLB-N). They showed that many real-world networks satisfy both deterministic properties and exploit them to design faster algorithms for a number of classical graph problems. We complement the work of Brach et al. by showing that some well-studied random graph models exhibit both of the mentioned PLB properties and additionally also a power-law lower bound on the degree distribution (PLB-L). All three properties hold with high probability for Chung-Lu Random Graphs and Geometric Inhomogeneous Random Graphs and almost surely for Hyperbolic Random Graphs. As a consequence, all results of Brach et al. also hold with high probability or almost surely for those random graph classes. In the second part of this work we study three classical NP-hard combinatorial optimization problems on PLB networks. It is known that on general graphs with maximum degree \(\Delta\), a greedy algorithm, which chooses nodes in the order of their degree, only achieves an \(\Omega(\ln \Delta)\)-approximation for Minimum Vertex Cover and Minimum Dominating Set, and an \(\Omega(\Delta)\)-approximation for Maximum Independent Set. We prove that the PLB-U property suffices for the greedy approach to achieve a constant-factor approximation for all three problems. We also show that all three combinatorial optimization problems are APX-complete even if all PLB properties hold; hence, a PTAS cannot be expected unless P = NP.
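As a sketch of the degree-ordered greedy analysed in the second part (the exact variant in the paper may differ in tie-breaking and stopping rule), consider Minimum Vertex Cover: vertices are processed by non-increasing degree and a vertex is kept only if it still covers a previously uncovered edge.

```python
def greedy_vertex_cover(adj):
    """Degree-ordered greedy for Minimum Vertex Cover.
    adj: dict mapping each vertex to the set of its neighbours."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    covered = set()          # edges already covered, stored as frozensets
    cover = set()
    for v in order:
        if any(frozenset((v, u)) not in covered for u in adj[v]):
            cover.add(v)
            covered.update(frozenset((v, u)) for u in adj[v])
    return cover

# Example: a star with centre 0 is covered by the single high-degree vertex.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(greedy_vertex_cover(star))  # {0}
```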
Friedrich, Tobias; Kötzing, Timo; Quinzan, Francesco; Sutton, Andrew M. Ant Colony Optimization Beats Resampling on Noisy Functions. Genetic and Evolutionary Computation Conference (GECCO) 2016: 3–4
Despite the pervasiveness of noise in real-world optimization, there is little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques such as statistical resampling. Ant Colony Optimization (ACO) algorithms are claimed to be particularly well-suited to dynamic and noisy problems, even without explicit noise-handling techniques. In this work, we empirically investigate the trade-offs between resampling and the noise-handling abilities of ACO algorithms. Our main focus is to locate the point where resampling costs more than it is worth.
Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. The Benefit of Recombination in Noisy Evolutionary Search. Genetic and Evolutionary Computation Conference (GECCO) 2016: 161–162
Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings, randomized search heuristics such as evolutionary algorithms are a popular choice because they are often assumed to exhibit some kind of resistance to noise. Empirical evidence suggests that some algorithms, such as estimation of distribution algorithms (EDAs), are robust against a scaling of the noise intensity, even without resorting to explicit noise-handling techniques such as resampling. In this paper, we want to support such claims with mathematical rigor. We introduce the concept of graceful scaling, in which the run time of an algorithm scales polynomially with the noise intensity. We study a monotone fitness function over binary strings with additive noise taken from a Gaussian distribution. We show that myopic heuristics cannot efficiently optimize the function under arbitrarily intense noise without any explicit noise-handling. Furthermore, we prove that using a population does not help. Finally, we show that a simple EDA called the Compact Genetic Algorithm can overcome the shortsightedness of mutation-only heuristics to scale gracefully with noise. We conjecture that recombinative genetic algorithms also have this property.
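The Compact Genetic Algorithm referred to above maintains a single frequency per bit and nudges it towards the better of two samples. The following is a minimal textbook-style sketch on noisy OneMax; the parameter K is a hypothetical name for the hypothetical population-size parameter, which must grow with the noise intensity for graceful scaling, and this is not the exact parameterisation analysed in the paper.

```python
import random

def noisy_onemax(x, sigma):
    """OneMax with additive posterior Gaussian noise of standard deviation sigma."""
    return sum(x) + random.gauss(0.0, sigma)

def compact_ga(n, K, sigma, max_iters=10**6):
    """Minimal compact Genetic Algorithm: sample two offspring from the frequency
    vector p and shift each differing frequency by 1/K towards the better sample."""
    p = [0.5] * n
    for _ in range(max_iters):
        x = [1 if random.random() < f else 0 for f in p]
        y = [1 if random.random() < f else 0 for f in p]
        if noisy_onemax(x, sigma) < noisy_onemax(y, sigma):
            x, y = y, x                               # x is now the (noisily) better sample
        for i in range(n):
            if x[i] != y[i]:
                p[i] = min(1.0, max(0.0, p[i] + (1 if x[i] else -1) / K))
        if all(f == 1.0 for f in p):                  # all frequencies fixed at the optimum
            return True
    return False
```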
Dang, Duc-Cuong; Friedrich, Tobias; Krejca, Martin S.; Kötzing, Timo; Lehre, Per Kristian; Oliveto, Pietro S.; Sudholt, Dirk; Sutton, Andrew Michael. Escaping Local Optima with Diversity Mechanisms and Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 645–652
Population diversity is essential for the effective use of any crossover operator. We compare seven commonly used diversity mechanisms and prove rigorous run time bounds for the \((\mu+1)\) GA using uniform crossover on the fitness function \(Jump_k\). All previous results in this context only hold for unrealistically low crossover probability \(p_c=O(k/n)\), while we give analyses for the setting of constant \(p_c < 1\) in all but one case. Our bounds show a dependence on the problem size \(n\), the jump length \(k\), the population size \(\mu\), and the crossover probability \(p_c\). For the typical case of constant \(k > 2\) and constant \(p_c\), we can compare the resulting expected optimisation times for different diversity mechanisms assuming an optimal choice of \(\mu\): \(O(n^{k-1})\) for duplicate elimination/minimisation, \(O(n^2 \log n)\) for maximising the convex hull, \(O(n \log n)\) for det. crowding (assuming \(p_c = k/n\)), \(O(n \log n)\) for maximising the Hamming distance, \(O(n \log n)\) for fitness sharing, \(O(n \log n)\) for the single-receiver island model. This proves a sizeable advantage of all variants of the \((\mu+1)\) GA compared to the (1+1) EA, which requires \(\Theta(n^k)\). In a short empirical study we confirm that the asymptotic differences can also be observed experimentally.
Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Nallaperuma, Samadhi; Neumann, Frank; Schirneck, Martin. Fast Building Block Assembly by Majority Vote Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 661–668
Different works have shown how crossover can help with building block assembly. Typically, crossover might get lucky to select good building blocks from each parent, but these lucky choices are usually rare. In this work we consider a crossover operator which works on three parent individuals. In each component, the offspring inherits the value present in the majority of the parents; thus, we call this crossover operator majority vote. We show that, if good components are sufficiently prevalent in the individuals, majority vote creates an optimal individual with high probability. Furthermore, we show that this process can be amplified: as long as components are good independently and with probability at least \(1/2+\delta\), we require only \(O(\log 1/\delta + \log \log n)\) successive stages of majority vote to create an optimal individual with high probability! We show how this applies in two scenarios. The first scenario is the Jump test function. With sufficient diversity, we get an optimization time of \(O(n \log n)\) even for jump sizes as large as \(O(n^{(1/2-\epsilon)})\). Our second scenario is a family of vertex cover instances. Majority vote optimizes this family efficiently, while local searches fail and only highly specialized two-parent crossovers are successful.
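The three-parent operator described above is easy to state directly; here is a minimal sketch for bit strings (the representation assumed in this illustration):

```python
def majority_vote(parent_a, parent_b, parent_c):
    """Three-parent majority vote crossover: each offspring component takes the
    value present in at least two of the three parents."""
    return [1 if a + b + c >= 2 else 0 for a, b, c in zip(parent_a, parent_b, parent_c)]

# Each parent is mostly correct (1s), but its errors sit in different positions.
p1 = [1, 1, 0, 1, 1]
p2 = [1, 0, 1, 1, 1]
p3 = [0, 1, 1, 1, 1]
print(majority_vote(p1, p2, p3))  # [1, 1, 1, 1, 1]
```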
Doerr, Benjamin; Doerr, Carola; Kötzing, Timo. The Right Mutation Strength for Multi-Valued Decision Variables. Genetic and Evolutionary Computation Conference (GECCO) 2016: 1115–1122
The most common representation in evolutionary computation is bit strings. This is ideal to model binary decision variables, but less useful for variables taking more values. With very little theoretical work existing on how to use evolutionary algorithms for such optimization problems, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over \(\Omega=\{0,1,\dots,r-1\}^n\). More precisely, we regard a variety of problem classes requesting the component-wise minimization of the distance to an unknown target vector \(z \in \Omega\). For such problems we see a crucial difference in how we extend the standard-bit mutation operator to these multi-valued domains. While it is natural to select each position of the solution vector to be changed independently with probability \(1/n\), there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of \(\Theta(nr\log n)\). If we change each selected position by either +1 or -1 (random choice), the optimization time reduces to \(\Theta(nr+n\log n)\). If we use a random mutation strength \(i \in \{0,1,\dots,r-1\}\) with probability inversely proportional to \(i\) and change the selected position by either +\(i\) or -\(i\) (random choice), then the optimization time becomes \(\Theta(n\log(r)(\log(n)+\log(r)))\), bringing down the dependence on \(r\) from linear to polylogarithmic. One of our results depends on a new variant of the lower bounding multiplicative drift theorem.
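A hedged sketch of the three mutation variants compared above, for search points in \(\{0,\dots,r-1\}^n\); clamping at the domain boundary and starting the harmonic strength at 1 are assumptions made for this illustration, and the paper may treat boundaries and the strength distribution slightly differently.

```python
import random

def mutate_uniform(x, r):
    """Change each selected position to a uniformly random different value."""
    n, y = len(x), list(x)
    for i in range(n):
        if random.random() < 1.0 / n:
            y[i] = random.choice([v for v in range(r) if v != x[i]])
    return y

def mutate_pm1(x, r):
    """Change each selected position by +1 or -1 (random choice), clamped to [0, r-1]."""
    n, y = len(x), list(x)
    for i in range(n):
        if random.random() < 1.0 / n:
            y[i] = min(r - 1, max(0, x[i] + random.choice((-1, 1))))
    return y

def harmonic_strength(r):
    """Draw a mutation strength i >= 1 with probability proportional to 1/i."""
    weights = [1.0 / i for i in range(1, r)]
    return random.choices(range(1, r), weights=weights)[0]

def mutate_harmonic(x, r):
    """Change each selected position by +i or -i, with i drawn harmonically."""
    n, y = len(x), list(x)
    for i in range(n):
        if random.random() < 1.0 / n:
            y[i] = min(r - 1, max(0, x[i] + harmonic_strength(r) * random.choice((-1, 1))))
    return y
```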
Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S. EDAs cannot be Balanced and Stable. Genetic and Evolutionary Computation Conference (GECCO) 2016: 1139–1146
Estimation of Distribution Algorithms (EDAs) work by iteratively updating a distribution over the search space with the help of samples from each iteration. Up to now, theoretical analyses of EDAs are scarce and present run time results for specific EDAs. We propose a new framework for EDAs that captures the idea of several known optimizers, including PBIL, UMDA, \(\lambda\)-MMAS\(_{\text{IB}}\), cGA, and \((1,\lambda)\)-EA. Our focus is on analyzing two core features of EDAs: a balanced EDA is sensitive to signals in the fitness; a stable EDA remains uncommitted under a biasless fitness function. We prove that no EDA can be both balanced and stable. The LeadingOnes function is a prime example where, at the beginning of the optimization, the fitness function shows no bias for many bits. Since many well-known EDAs are balanced and thus not stable, they are not well-suited to optimize LeadingOnes. We give a stable EDA which optimizes LeadingOnes within a time of \(O(n\,\log n)\).
Borndörfer, Ralf; Arslan, Oytun; Elijazyfer, Ziena; Güler, Hakan; Renken, Malte; Şahin, Güvenç; Schlechte, Thomas. Line Planning on Path Networks with Application to the Istanbul Metrobüs. German Operations Research Society (GOR) 2016: 235–241
Galanis, Andreas; Göbel, Andreas; Goldberg, Leslie-Ann; Lapinskas, John; Richerby, David. Amplifiers for the Moran Process. International Colloquium on Automata, Languages and Programming (ICALP) 2016: 62:1–62:13
The Moran process, as studied by Lieberman, Hauert and Nowak, is a randomised algorithm modelling the spread of genetic mutations in populations. The algorithm runs on an underlying graph where individuals correspond to vertices. Initially, one vertex (chosen uniformly at random) possesses a mutation, with fitness \(r > 1\). All other individuals have fitness 1. During each step of the algorithm, an individual is chosen with probability proportional to its fitness, and its state (mutant or non-mutant) is passed on to an out-neighbour which is chosen uniformly at random. If the underlying graph is strongly connected then the algorithm will eventually reach fixation, in which all individuals are mutants, or extinction, in which no individuals are mutants. An infinite family of directed graphs is said to be strongly amplifying if, for every \(r > 1\), the extinction probability tends to 0 as the number of vertices increases. Strong amplification is a rather surprising property - it means that in such graphs, the fixation probability of a uniformly-placed initial mutant tends to 1 even though the initial mutant only has a fixed selective advantage of \(r > 1\) (independently of \(n\)). The name "strongly amplifying" comes from the fact that this selective advantage is "amplified". Strong amplifiers have received quite a bit of attention, and Lieberman et al. proposed two potentially strongly-amplifying families - superstars and metafunnels. Heuristic arguments have been published, arguing that there are infinite families of superstars that are strongly amplifying. The same has been claimed for metafunnels. We give the first rigorous proof that there is an infinite family of directed graphs that is strongly amplifying. We call the graphs in the family "megastars". When the algorithm is run on an n-vertex graph in this family, starting with a uniformly-chosen mutant, the extinction probability is roughly \(n^{-1/2}\) (up to logarithmic factors). We prove that all infinite families of superstars and metafunnels have larger extinction probabilities (as a function of \(n\)). Finally, we prove that our analysis of megastars is fairly tight - there is no infinite family of megastars such that the Moran algorithm gives a smaller extinction probability (up to logarithmic factors). Also, we provide a counterexample which clarifies the literature concerning the isothermal theorem of Lieberman et al. A full version [Galanis/Göbel/Goldberg/Lapinskas/Richerby, Preprint] containing detailed proofs is available at http://arxiv.org/abs/1512.05632. Theorem-numbering here matches the full version.
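For concreteness, a direct simulation sketch of the process as described (the parameter names are hypothetical; `out_neighbours` maps each vertex to a list of its out-neighbours, and a strongly connected digraph is assumed so that the run terminates):

```python
import random

def moran_step(out_neighbours, mutants, r):
    """One step: pick an individual with probability proportional to its fitness
    (r for mutants, 1 otherwise) and copy its state onto a random out-neighbour."""
    vertices = list(out_neighbours)
    fitness = [r if v in mutants else 1.0 for v in vertices]
    reproducer = random.choices(vertices, weights=fitness)[0]
    target = random.choice(out_neighbours[reproducer])
    if reproducer in mutants:
        mutants.add(target)
    else:
        mutants.discard(target)

def moran_process(out_neighbours, r):
    """Run from a uniformly random initial mutant until fixation or extinction;
    returns True on fixation (all vertices are mutants)."""
    mutants = {random.choice(list(out_neighbours))}
    while 0 < len(mutants) < len(out_neighbours):
        moran_step(out_neighbours, mutants, r)
    return len(mutants) == len(out_neighbours)
```

Averaging `moran_process` over many runs estimates the fixation probability, whose complement is the extinction probability studied in the paper.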
Casel, Katrin; Fernau, Henning; Gaspers, Serge; Gras, Benjamin; Schmid, Markus L. On the Complexity of Grammar-Based Compression over Fixed Alphabets. International Colloquium on Automata, Languages, and Programming (ICALP) 2016: 122:1–122:14
It is shown that the shortest-grammar problem remains NP-complete if the alphabet is fixed and has a size of at least 24 (which settles an open question). On the other hand, this problem can be solved in polynomial time if the number of nonterminals is bounded, which is shown by encoding the problem as a problem on graphs with interval structure. Furthermore, we present an \(O(3^n)\) exact exponential-time algorithm based on dynamic programming. Similar results are also given for 1-level grammars, i.e., grammars for which only the start rule contains nonterminals on the right side (thus investigating the impact of the "hierarchical depth" on the complexity of the shortest-grammar problem).
Cseh, Ágnes; Kavitha, Telikepalli. Popular Edges and Dominant Matchings. International Conference on Integer Programming and Combinatorial Optimization (IPCO) 2016: 138–151
Given a bipartite graph \(G=(A \cup B, E)\) with strict preference lists and given an edge \(e^* \in E\), we ask if there exists a popular matching in \(G\) that contains \(e^*\). We call this the popular edge problem. A matching \(M\) is popular if there is no matching \(M'\) such that the vertices that prefer \(M'\) to \(M\) outnumber those that prefer \(M\) to \(M'\). It is known that every stable matching is popular; however, \(G\) may have no stable matching that contains the edge \(e^*\). In this paper we identify another natural subclass of popular matchings called “dominant matchings” and show that if there is a popular matching that contains the edge \(e^*\), then there is either a stable matching that contains \(e^*\) or a dominant matching that contains \(e^*\). This allows us to design a linear time algorithm for the popular edge problem. When preference lists are complete, we show an \(\mathcal{O}(n^3)\) algorithm to find a popular matching containing a given set of edges or report that none exists, where \(n=|A|+|B|\).
Issac, Davis; Chandran, L. Sunil; Karrenbauer, Andreas. On the Parameterized Complexity of Biclique Cover and Partition. International Symposium on Parameterized and Exact Computation (IPEC) 2016: 1–13
Given a bipartite graph \(G\), we consider the decision problem called Biclique Cover for a fixed positive integer parameter \(k\), where we are asked whether the edges of \(G\) can be covered with at most \(k\) complete bipartite subgraphs (a.k.a. bicliques). In the Biclique Partition problem, we have the additional constraint that each edge should appear in exactly one of the \(k\) bicliques. These problems are both known to be NP-complete but fixed-parameter tractable. However, the known FPT algorithms have a running time that is doubly exponential in \(k\), and the best known kernel for both problems is exponential in \(k\). We build on this kernel and improve the running time for Biclique Partition to \(2^{O(k^2)}\) by exploiting a linear algebraic view on this problem. On the other hand, we show that no such improvement is possible for Biclique Cover unless the Exponential Time Hypothesis (ETH) is false by proving a doubly exponential lower bound on the running time. We achieve this by giving a reduction from 3SAT on \(n\) variables to an instance of Biclique Cover with \(k=O(\log n)\). As a further consequence of this reduction, we show that there is no subexponential kernel for Biclique Cover unless \(P=NP\). Finally, we point out the significance of the exponential kernel mentioned above for the design of polynomial-time approximation algorithms for the optimization versions of both problems. That is, we show that it is possible to obtain approximation factors of \(n/\log n\) for both problems, whereas the previous best approximation factor was \(n/\sqrt{\log n}\).
Bläsius, Thomas; Friedrich, Tobias; Schirneck, Martin. The Parameterized Complexity of Dependency Detection in Relational Databases. International Symposium on Parameterized and Exact Computation (IPEC) 2016: 6:1–6:13
We study the parameterized complexity of classical problems that arise in the profiling of relational data. Namely, we characterize the complexity of detecting unique column combinations (candidate keys), functional dependencies, and inclusion dependencies with the solution size as parameter. While the discovery of uniques and functional dependencies, respectively, turns out to be W[2]-complete, the detection of inclusion dependencies is one of the first natural problems proven to be complete for the class W[3]. As a side effect, our reductions give insights into the complexity of enumerating all minimal unique column combinations or functional dependencies.
Abu-Khzam, Faisal N.; Bazgan, Cristina; Casel, Katrin; Fernau, Henning. Building Clusters with Lower-Bounded Sizes. International Symposium on Algorithms and Computation (ISAAC) 2016: 4:1–4:13
Classical clustering problems search for a partition of objects into a fixed number of clusters. In many scenarios, however, the number of clusters is not known or not necessarily fixed. Further, clusters are sometimes only considered to be of significance if they have a certain size. We discuss clustering into sets of minimum cardinality \(k\) without a fixed number of sets and present a general model for these types of problems. This general framework allows the comparison of different measures to assess the quality of a clustering. We specifically consider nine quality measures and classify the complexity of the resulting problems with respect to \(k\). Further, we derive some polynomial-time solvable cases for \(k = 2\) with connections to matching-type problems, which, among other graph problems, are then used to compute approximations for larger values of \(k\).
Bazgan, Cristina; Brankovic, Ljiljana; Casel, Katrin; Fernau, Henning; Jansen, Klaus; Klein, Kim-Manuel; Lampis, Michael; Liedloff, Mathieu; Monnot, Jérôme; Paschos, Vangelis Th. Upper Domination: Complexity and Approximation. International Workshop on Combinatorial Algorithms (IWOCA) 2016: 241–252
We consider Upper Domination, the problem of finding a maximum cardinality minimal dominating set in a graph. We show that this problem does not admit an \(n^{1-\epsilon}\) approximation for any \(\epsilon > 0\), making it significantly harder than Dominating Set, while it remains hard even on severely restricted special cases, such as cubic graphs (APX-hard) and planar subcubic graphs (NP-hard). We complement our negative results by showing that the problem admits an \(O(\Delta)\) approximation on graphs of maximum degree \(\Delta\), as well as an EPTAS on planar graphs. Along the way, we also derive essentially tight \(n^{1-\frac{1}{d}}\) upper and lower bounds on the approximability of the related problem Maximum Minimal Hitting Set on \(d\)-uniform hypergraphs, generalising known results for Maximum Minimal Vertex Cover.
Friedrich, Tobias. Scale-Free Networks, Hyperbolic Geometry, and Efficient Algorithms. Symposium on Mathematical Foundations of Computer Science (MFCS) 2016: 4:1–4:3
Invited Talk
The node degrees of large real-world networks often follow a power-law distribution. Such scale-free networks can be social networks, internet topologies, the web graph, power grids, or many other networks from literally hundreds of domains. The talk will introduce several mathematical models of scale-free networks (e.g. preferential attachment graphs, Chung-Lu graphs, hyperbolic random graphs) and analyze some of their properties (e.g. diameter, average distance, clustering). We then present several algorithms and distributed processes on and for these network models (e.g. rumor spreading, load balancing, de-anonymization, embedding) and discuss a number of open problems. The talk assumes no prior knowledge about scale-free networks, distributed computing or hyperbolic geometry.
Gao, Wanru; Friedrich, Tobias; Neumann, Frank. Fixed-Parameter Single Objective Search Heuristics for Minimum Vertex Cover. Parallel Problem Solving From Nature (PPSN) 2016: 740–750
We consider how well-known branching approaches for the classical minimum vertex cover problem can be turned into randomized initialization strategies with provable performance guarantees and evaluate these strategies experimentally. Furthermore, we show how such techniques can be built into local search components and analyze a basic local search variant that is similar to a state-of-the-art approach called NuMVC. Our experimental results for the two local search approaches show that making use of more complex branching strategies in the local search component can lead to better results on various benchmark graphs.
Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Sutton, Andrew M. Graceful Scaling on Uniform versus Steep-Tailed Noise. Parallel Problem Solving From Nature (PPSN) 2016: 761–770
Recently, different evolutionary algorithms (EAs) have been analyzed in noisy environments. The most frequently used noise model for this was additive posterior noise (noise added after the fitness evaluation) taken from a Gaussian distribution. In particular, for this setting it was shown that the \((\mu + 1)\)-EA on OneMax does not scale gracefully (higher noise cannot efficiently be compensated by higher \(\mu\)). In this paper we want to understand whether there is anything special about the Gaussian distribution which makes the \((\mu + 1)\)-EA not scale gracefully. We keep the setting of posterior noise, but we look at other distributions. We see that for exponential tails the \((\mu + 1)\)-EA on OneMax also does not scale gracefully, for similar reasons as in the case of Gaussian noise. On the other hand, for uniform distributions (as well as other, similar distributions) we see that the \((\mu + 1)\)-EA on OneMax does scale gracefully, indicating the importance of the noise model.
Friedrich, Tobias; Kötzing, Timo; Sutton, Andrew M. On the Robustness of Evolving Populations. Parallel Problem Solving From Nature (PPSN) 2016: 771–781
Most theoretical work that studies the benefit of recombination focuses on the ability of crossover to speed up optimization time on specific search problems. In this paper, we take a slightly different perspective and investigate recombination in the context of evolving solutions that exhibit \(\emph{mutational}\) robustness, i.e., they display insensitivity to small perturbations. Various models in population genetics have demonstrated that increasing the effective recombination rate promotes the evolution of robustness. We show this result also holds in the context of evolutionary computation by proving crossover promotes the evolution of robust solutions in the standard \((\mu+1)\) GA. Surprisingly, our results show that the effect is present even when robust solutions are at a selective disadvantage due to lower fitness values.
Doerr, Benjamin; Doerr, Carola; Kötzing, Timo. Provably Optimal Self-Adjusting Step Sizes for Multi-Valued Decision Variables. Parallel Problem Solving From Nature (PPSN) 2016: 782–791
We regard the problem of maximizing a OneMax-like function defined over an alphabet of size \(r\). In previous work [GECCO 2016] we have investigated how three different mutation operators influence the performance of Randomized Local Search (RLS) and the (1+1) Evolutionary Algorithm. This work revealed that among these natural mutation operators none is superior to the other two for any choice of \(r\). We have also given in [GECCO 2016] some indication that the best achievable run time for large \(r\) is \(\Theta(n \log r\,(\log n + \log r))\), regardless of how the mutation operator is chosen, as long as it is a static choice (i.e., the distribution used for variation of the current individual does not change over time). Here in this work we show that we can achieve a better performance if we allow for adaptive mutation operators. More precisely, we analyze the performance of RLS using a self-adjusting mutation strength. In this algorithm the size of the steps taken in each iteration depends on the success of previous iterations. That is, the mutation strength is increased after a successful iteration and it is decreased otherwise. We show that this idea yields an expected optimization time of \(\Theta(n(\log n + \log r))\), which is optimal among all comparison-based search heuristics. This is the first time that self-adjusting parameter choices are shown to outperform static choices on a discrete multi-valued optimization problem.
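A hedged sketch of the self-adjusting idea for an \(r\)-valued OneMax-like function: doubling/halving is used here as one plausible update rule, and the boundary handling and acceptance rule are assumptions for this illustration, so the paper's exact adjustment scheme may differ.

```python
import random

def rls_self_adjusting(f, x, r, budget=10**5):
    """Randomized Local Search with a self-adjusting step size: one position is
    shifted by +/- the current strength; the strength grows after a successful
    step and shrinks otherwise."""
    strength, fx = 1, f(x)
    for _ in range(budget):
        y = list(x)
        i = random.randrange(len(x))
        y[i] = min(r - 1, max(0, y[i] + random.choice((-1, 1)) * strength))
        fy = f(y)
        if fy > fx:                                # success: accept and be bolder
            x, fx = y, fy
            strength = min(r - 1, strength * 2)
        else:                                      # failure: keep x and be more careful
            strength = max(1, strength // 2)
    return x, fx

# Example usage on an r-valued OneMax analogue (maximise the sum of components):
# best, value = rls_self_adjusting(sum, [0] * 20, r=100)
```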
Dang, Duc-Cuong; Lehre, Per Kristian; Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Oliveto, Pietro S.; Sudholt, Dirk; Sutton, Andrew M. Emergence of Diversity and its Benefits for Crossover in Genetic Algorithms. Parallel Problem Solving From Nature (PPSN) 2016: 890–900
Population diversity is essential for avoiding premature convergence in Genetic Algorithms (GAs) and for the effective use of crossover. Yet the dynamics of how diversity emerges in populations are not well understood. We use rigorous runtime analysis to gain insight into population dynamics and GA performance for a standard \((\mu+1)\) GA and the \(Jump_k\) test function. By studying the stochastic process underlying the size of the largest collection of identical genotypes we show that the interplay of crossover followed by mutation may serve as a catalyst leading to a sudden burst of diversity. This leads to improvements of the expected optimisation time of order \(\Omega(n/ \log n)\) compared to mutation-only algorithms like the \((1+1)\) EA.
Chauhan, Ankit; Lenzner, Pascal; Melnichenko, Anna; Münn, Martin. On Selfish Creation of Robust Networks. Symposium on Algorithmic Game Theory (SAGT) 2016: 141–152
Robustness is one of the key properties of today's networks. However, robustness cannot simply be enforced by design or regulation, since many important networks, most prominently the Internet, are not created and controlled by a central authority. Instead, Internet-like networks emerge from the strategic decisions of many selfish agents. Interestingly, although lacking a coordinating authority, such naturally grown networks are surprisingly robust while at the same time having desirable properties like a small diameter. To investigate this phenomenon we present the first simple model for selfish network creation which explicitly incorporates agents striving for a central position in the network while at the same time protecting themselves against random edge failure. We show that networks in our model are diverse and we prove the versatility of our model by adapting various properties and techniques from the non-robust versions, which we then use for establishing bounds on the Price of Anarchy. Moreover, we analyze the computational hardness of finding best possible strategies and investigate the game dynamics of our model.
Cseh, Ágnes; Irving, Robert W.; Manlove, David F. The Stable Roommates Problem with Short Lists. Symposium on Algorithmic Game Theory (SAGT) 2016: 207–219
We consider two variants of the classical Stable Roommates problem with Incomplete (but strictly ordered) preference lists (sri) that are degree constrained, i.e., preference lists are of bounded length. The first variant, egal d-sri, involves finding an egalitarian stable matching in solvable instances of sri with preference lists of length at most \(d\). We show that this problem is NP-hard even if \(d=3\). On the positive side, we give a \(\frac{2d+3}{7}\)-approximation algorithm for \(d \in \{3,4,5\}\), which improves on the known bound of 2 for the unbounded preference list case. In the second variant of sri, called d-srti, preference lists can include ties and are of length at most \(d\). We show that the problem of deciding whether an instance of d-srti admits a stable matching is NP-complete even if \(d=3\). We also consider the “most stable” version of this problem and prove a strong inapproximability bound for the \(d=3\) case. However, for \(d=2\) we show that the latter problem can be solved in polynomial time.
Kötzing, Timo; Schirneck, Martin. Towards an Atlas of Computational Learning Theory. Symposium on Theoretical Aspects of Computer Science (STACS) 2016: 47:1–47:13
A major part of our knowledge about Computational Learning stems from comparisons of the learning power of different learning criteria. These comparisons inform about trade-offs between learning restrictions and, more generally, learning settings; furthermore, they inform about what restrictions can be observed without losing learning power. With this paper we propose that one main focus of future research in Computational Learning should be on a structured approach to determine the relations of different learning criteria. In particular, we propose that, for small sets of learning criteria, all pairwise relations should be determined; these relations can then be easily depicted as a map, a diagram detailing the relations. Once we have maps for many relevant sets of learning criteria, the collection of these maps is an Atlas of Computational Learning Theory, informing at a glance about the landscape of computational learning just as a geographical atlas informs about the earth. In this paper we work toward this goal by providing three example maps, one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning. These maps can serve as blueprints for future maps of similar base structure.