Friedrich, Tobias; Kötzing, Timo; Wagner, Markus: A Generic Bet-and-Run Strategy for Speeding Up Stochastic Local Search. Conference on Artificial Intelligence (AAAI) 2017: 801–807
A common strategy for improving optimization algorithms is to restart the algorithm when it is believed to be trapped in an inferior part of the search space. However, while specific restart strategies have been developed for specific problems (and specific algorithms), restarts are typically not regarded as a general tool to speed up an optimization algorithm. In fact, many optimization algorithms do not employ restarts at all. Recently, "bet-and-run" was introduced in the context of mixed-integer programming, where first a number of short runs with randomized initial conditions is made, and then the most promising run of these is continued. In this article, we consider two classical NP-complete combinatorial optimization problems, traveling salesperson and minimum vertex cover, and study the effectiveness of different bet-and-run strategies. In particular, our restart strategies do not take any problem knowledge into account, nor are they tailored to the optimization algorithm. Therefore, they can be used off-the-shelf. We observe that state-of-the-art solvers for these problems can benefit significantly from restarts on standard benchmark instances.
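The strategy itself is solver-agnostic and easy to implement as a wrapper around any anytime optimizer. The following Python sketch (our illustration, not the authors' code; the toy RandomRestartRun solver and all parameter values are placeholders) shows the two phases: several short runs from random initial conditions, then continuation of the most promising one.

```python
import random

class RandomRestartRun:
    """Toy stochastic local search on a dummy objective (stands in for a
    TSP or minimum vertex cover solver; purely illustrative)."""
    def __init__(self, n=100):
        self.x = [random.randint(0, 1) for _ in range(n)]
        self.best = sum(self.x)          # pretend: lower is better

    def step(self):
        i = random.randrange(len(self.x))
        self.x[i] ^= 1                   # flip one random bit
        cost = sum(self.x)
        if cost <= self.best:
            self.best = cost
        else:
            self.x[i] ^= 1               # undo worsening move

def bet_and_run(make_run, k=10, phase1_steps=200, phase2_steps=20_000):
    # Phase 1: k short runs from independent random initial conditions.
    runs = [make_run() for _ in range(k)]
    for run in runs:
        for _ in range(phase1_steps):
            run.step()
    # Phase 2: continue only the most promising run with the remaining budget.
    best_run = min(runs, key=lambda r: r.best)
    for _ in range(phase2_steps):
        best_run.step()
    return best_run.best

print(bet_and_run(RandomRestartRun))
```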
Katzmann, Maximilian; Komusiewicz, Christian: Systematic Exploration of Larger Local Search Neighborhoods for the Minimum Vertex Cover Problem. Conference on Artificial Intelligence (AAAI) 2017: 846–852
We investigate the potential of exhaustively exploring larger neighborhoods in local search algorithms for Minimum Vertex Cover. More precisely, we study whether, for moderate values of \(k\), it is feasible and worthwhile to determine, given a graph \(G\) with vertex cover \(C\), if there is a \(k\)-swap \(S\) such that \((C \setminus S) \cup (S \setminus C)\) is a smaller vertex cover of \(G\). First, we describe an algorithm running in \(\Delta^{O(k)} \cdot n\) time for searching the \(k\)-swap neighborhood on \(n\)-vertex graphs with maximum degree \(\Delta\). Then, we demonstrate that, by devising additional pruning rules that decrease the size of the search space, this algorithm can be implemented so that it solves the problem quickly for \(k \approx 20\). Finally, we show that it is worthwhile to consider moderately-sized \(k\)-swap neighborhoods. For our benchmark data set, we show that when combining our algorithm with a hill-climbing approach, the solution quality improves quickly with the radius \(k\) of the local search neighborhood and that in most cases optimal solutions can be found by setting \(k = 21\).
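The neighborhood operation is the symmetric difference of the current cover \(C\) with a swap set \(S\) of at most \(k\) vertices. A minimal Python sketch of the improvement check is given below (our illustration; the paper's contribution, the \(\Delta^{O(k)} \cdot n\) search over all swaps with pruning rules, is not reproduced here).

```python
def is_vertex_cover(edges, cover):
    """Every edge must have at least one endpoint in the cover."""
    return all(u in cover or v in cover for u, v in edges)

def is_improving_k_swap(edges, cover, swap):
    """Check whether the symmetric difference of cover and swap,
    i.e. (C minus S) united with (S minus C), is a strictly smaller
    vertex cover."""
    new_cover = cover ^ swap
    return len(new_cover) < len(cover) and is_vertex_cover(edges, new_cover)

# Path on 4 vertices; {1, 2, 3} is a cover, {1, 2} is optimal.
edges = [(0, 1), (1, 2), (2, 3)]
cover = {1, 2, 3}
print(is_improving_k_swap(edges, cover, {3}))  # True: {1, 2} still covers all edges
print(is_improving_k_swap(edges, cover, {1}))  # False: edge (0, 1) becomes uncovered
```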
Friedrich, Tobias; Krohmer, Anton; Rothenberger, Ralf; Sutton, Andrew M.: Phase Transitions for Scale-Free SAT Formulas. Conference on Artificial Intelligence (AAAI) 2017: 3893–3899
Recently, a number of non-uniform random satisfiability models have been proposed that are closer to practical satisfiability problems in some characteristics. In contrast to uniform random Boolean formulas, scale-free formulas have a variable occurrence distribution that follows a power law. It has been conjectured that such a distribution is a more accurate model for some industrial instances than the uniform random model. Though there already seems to be an awareness of a threshold phenomenon in such models, a complete picture is still lacking. In contrast to the uniform model, the critical density threshold does not lie at a single point, but instead exhibits a functional dependency on the power-law exponent. For scale-free formulas with clauses of length \(k = 2\), we give a lower bound on the phase transition threshold as a function of the scaling parameter. We also perform computational studies that suggest our bound is tight and investigate the critical density for formulas with higher clause lengths. Similar to the uniform model, on formulas with \(k \geq 3\), we find that the phase transition regime corresponds to a set of formulas that are difficult to solve by backtracking search.
Friedrich, Tobias; Neumann, Frank: What’s Hot in Evolutionary Computation. Conference on Artificial Intelligence (AAAI) 2017: 5064–5066
We provide a brief overview on some hot topics in the area of evolutionary computation. Our main focus is on recent developments in the areas of combinatorial optimization and real-world applications. Furthermore, we highlight recent progress on the theoretical understanding of evolutionary computing methods.
Hölzl, Rupert; Jain, Sanjay; Schlicht, Philipp; Seidel, Karen; Stephan, Frank: Automatic Learning from Repetitive Texts. Algorithmic Learning Theory (ALT) 2017: 129–150
We study the connections between the learnability of automatic families of languages and the types of text used to present them to a learner. More precisely, we study how restrictions on the number of times that a correct datum appears in a text influence what classes of languages are automatically learnable. We show that an automatic family of languages is automatically learnable from fat text iff it is automatically learnable from thick text iff it is verifiable from balanced text iff it satisfies Angluin's tell-tale condition. Furthermore, many automatic families are automatically learnable from exponential text. We also study the relationship between automatic learnability and verifiability and show that all automatic families are automatically partially verifiable from exponential text and automatically learnable from thick text.
Kötzing, Timo; Schirneck, Martin; Seidel, Karen: Normal Forms in Semantic Language Identification. Algorithmic Learning Theory (ALT) 2017: 493–516
We consider language learning in the limit from text where all learning restrictions are semantic, that is, where any conjecture may be replaced by a semantically equivalent conjecture. For different such learning criteria, starting with the well-known TxtGBc-learning, we consider three different normal forms: strongly locking learning, consistent learning and (partially) set-driven learning. These normal forms support and simplify proofs and give insight into what behaviors are necessary for successful learning (for example when consistency in conservative learning implies cautiousness and strong decisiveness). We show that strongly locking learning can be assumed for partially set-driven learners, even when learning restrictions apply. We give a very general proof relying only on a natural property of the learning restriction, namely, allowing for simulation on equivalent text. Furthermore, when no restrictions apply, also the converse is true: every strongly locking learner can be made partially set-driven. For several semantic learning criteria we show that learning can be done consistently. Finally, we deduce for which learning restrictions partial set-drivenness and set-drivenness coincide, including a general statement about classes of infinite languages. The latter again relies on a simulation argument.
Gao, Wanru; Friedrich, Tobias; Kötzing, Timo; Neumann, Frank: Scaling up Local Search for Minimum Vertex Cover in Large Graphs by Parallel Kernelization. Australasian Conference on Artificial Intelligence (AUSAI) 2017: 131–143
We investigate how well-performing local search algorithms for small or medium size instances can be scaled up to perform well for large inputs. We introduce a parallel kernelization technique that is motivated by the assumption that medium- to large-scale graphs are composed of components which are easy for state-of-the-art solvers on their own, but hard to solve when hidden in large graphs. To show the effectiveness of our kernelization technique, we consider the well-known minimum vertex cover problem and two state-of-the-art solvers called NuMVC and FastVC. Our kernelization approach reduces an existing large problem instance significantly and produces better quality results on a wide range of benchmark instances and real world graphs.
Wagner, Markus; Friedrich, Tobias; Lindauer, Marius: Improving local search in a minimum vertex cover solver for classes of networks. Congress on Evolutionary Computation (CEC) 2017: 1704–1711
For the minimum vertex cover problem, a wide range of solvers has been proposed over the years. Most classical exact approaches encounter run time issues on the massive graphs considered nowadays. A straightforward alternative is then to use heuristics, which make assumptions about the structure of the studied graphs. These assumptions are typically hard-coded and are hoped to work well for a wide range of networks - which is in conflict with the nature of broad benchmark sets. With this article, we contribute in two ways. First, we identify a component in an existing solver that influences its performance depending on the class of graphs, and we then customize instances of this solver for different classes of graphs. Second, we create the first algorithm portfolio for the minimum vertex cover problem to further improve the performance of a single integrated approach.
Kovacs, Robert; Seufert, Anna; Wall, Ludwig; Chen, Hsiang-Ting; Meinel, Florian; Müller, Willi; You, Sijing; Brehm, Maximilian; Striebel, Jonathan; Kommana, Yannis; Popiak, Alexander; Bläsius, Thomas; Baudisch, Patrick: TrussFab: Fabricating Sturdy Large-Scale Structures on Desktop 3D Printers. Human Factors in Computing Systems (CHI) 2017: 2606–2616
We present TrussFab, an integrated end-to-end system that allows users to fabricate large scale structures that are sturdy enough to carry human weight. TrussFab achieves the large scale by complementing 3D print with plastic bottles. It does not use these bottles as "bricks" though, but as beams that form structurally sound node-link structures, also known as trusses, allowing it to handle the forces resulting from scale and load. TrussFab embodies the required engineering knowledge, allowing non-engineers to design such structures and to validate their design using integrated structural analysis. We have used TrussFab to design and fabricate tables and chairs, a 2.5 m long bridge strong enough to carry a human, a functional boat that seats two, and a 5 m diameter dome.
Kirsch, Louis; Riekenbrauck, Niklas; Thevessen, Daniel; Pappik, Marcus; Stebner, Axel; Kunze, Julius; Meissner, Alexander; Kumar Shekar, Arvind; Müller, Emmanuel: Framework for Exploring and Understanding Multivariate Correlations. Machine Learning and Knowledge Discovery in Databases (ECML/PKDD) 2017: 404–408
Feature selection is an essential step to identify relevant and non-redundant features for target class prediction. In this context, the number of feature combinations grows exponentially with the dimension of the feature space. This hinders the user’s understanding of the feature-target relevance and feature-feature redundancy. We propose an interactive Framework for Exploring and Understanding Multivariate Correlations (FEXUM), which embeds these correlations using a force-directed graph. In contrast to existing work, our framework allows the user to explore the correlated feature space and guides in understanding multivariate correlations through interactive visualizations.
Friedrich, Tobias; Krohmer, Anton; Rothenberger, Ralf; Sauerwald, Thomas; Sutton, Andrew M.: Bounds on the Satisfiability Threshold for Power Law Distributed Random SAT. European Symposium on Algorithms (ESA) 2017: 37:1–37:15
Propositional satisfiability (SAT) is one of the most fundamental problems in computer science. The worst-case hardness of SAT lies at the core of computational complexity theory. The average-case analysis of SAT has triggered the development of sophisticated rigorous and non-rigorous techniques for analyzing random structures. Despite a long line of research and substantial progress, nearly all theoretical work on random SAT assumes a uniform distribution on the variables. In contrast, real-world instances often exhibit large fluctuations in variable occurrence. This can be modeled by a scale-free distribution of the variables, which results in distributions closer to industrial SAT instances. We study random \(k\)-SAT on \(n\) variables, \(m=\Theta(n)\) clauses, and a power law distribution on the variable occurrences with exponent \(\beta\). We observe a satisfiability threshold at \(\beta=(2k-1)/(k-1)\). This threshold is tight in the sense that instances with \(\beta < (2k-1)/(k-1)-\varepsilon\) for any constant \(\varepsilon>0\) are unsatisfiable with high probability (w.h.p.). For \(\beta\ge(2k-1)/(k-1)+\varepsilon\), the picture is reminiscent of the uniform case: instances are satisfiable w.h.p. for sufficiently small constant clause-variable ratios \(m/n\); they are unsatisfiable above a ratio \(m/n\) that depends on \(\beta\).
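For reference, evaluating the stated threshold formula for the smallest clause lengths gives
\[
\beta_{\mathrm{crit}}(k) = \frac{2k-1}{k-1}, \qquad
\beta_{\mathrm{crit}}(2) = 3, \qquad
\beta_{\mathrm{crit}}(3) = \tfrac{5}{2}, \qquad
\beta_{\mathrm{crit}}(4) = \tfrac{7}{3},
\]
so the uniform-like satisfiable regime \(\beta \ge (2k-1)/(k-1)+\varepsilon\) starts at smaller power-law exponents as the clause length grows.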
Friedrich, Tobias; Kötzing, Timo; Quinzan, Francesco; Sutton, Andrew Michael: Resampling vs Recombination: a Statistical Run Time Estimation. Foundations of Genetic Algorithms (FOGA) 2017: 25–35
Noise is pervasive in real-world optimization, but there is still little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques, such as statistical resampling. In this paper, we report on several statistical models and theoretical results that help to clarify this reciprocal relationship for a collection of randomized search heuristics on noisy functions. We consider the optimization of pseudo-Boolean functions under additive posterior Gaussian noise and explore the trade-off between noise reduction and the computational cost of resampling. We first perform experiments to find the optimal parameters at a given noise intensity for a mutation-only evolutionary algorithm, a genetic algorithm employing recombination, an estimation of distribution algorithm (EDA), and an ant colony optimization algorithm. We then observe how the optimal parameter depends on the noise intensity for the different algorithms. Finally, we locate the point where statistical resampling costs more than it is worth in terms of run time. We find that the EA requires the highest number of resamples to obtain the best speed-up, whereas crossover reduces both the run time and the number of resamples required. Most surprisingly, we find that EDA-like algorithms require no resampling, and can handle noise implicitly.
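As a concrete picture of the setup (a minimal sketch, not the paper's experimental code; the algorithm choice and all parameter values are ours), the following Python snippet shows a (1+1) EA optimizing OneMax under additive posterior Gaussian noise, where every search point is resampled m times and the samples are averaged before comparison.

```python
import random

def noisy_onemax(x, sigma=1.0):
    """Pseudo-Boolean fitness under additive posterior Gaussian noise."""
    return sum(x) + random.gauss(0.0, sigma)

def resampled_fitness(x, m, sigma=1.0):
    """Average of m independent noisy evaluations: the noise standard
    deviation shrinks by a factor of sqrt(m), at m times the evaluation cost."""
    return sum(noisy_onemax(x, sigma) for _ in range(m)) / m

def one_plus_one_ea(n=50, m=5, sigma=1.0, budget=20_000):
    """(1+1) EA with statistical resampling (illustrative parameters)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = resampled_fitness(x, m, sigma)
    evaluations = m
    while evaluations < budget:
        y = [bit ^ (random.random() < 1 / n) for bit in x]  # standard bit mutation
        fy = resampled_fitness(y, m, sigma)
        evaluations += m
        if fy >= fx:
            x, fx = y, fy
    return sum(x)  # true (noise-free) fitness reached within the budget

print(one_plus_one_ea())
```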
Pourhassan, Mojgan; Friedrich, Tobias; Neumann, Frank: On the Use of the Dual Formulation for Minimum Weighted Vertex Cover in Evolutionary Algorithms. Foundations of Genetic Algorithms (FOGA) 2017: 37–44
We consider the weighted minimum vertex cover problem and investigate how its dual formulation can be exploited to design evolutionary algorithms that provably obtain a 2-approximation. Investigating multi-valued representations, we show that variants of randomized local search and the (1+1) EA achieve this goal in expected pseudo-polynomial time. In order to speed up the process, we consider the use of step size adaptation in both algorithms and show that RLS obtains a 2-approximation in expected polynomial time while the (1+1) EA still encounters a pseudo-polynomial lower bound.
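For context, the underlying duality argument is the classical edge-pricing scheme for weighted vertex cover: raise the dual variable of each uncovered edge until one endpoint becomes tight and take all tight vertices, which costs at most twice the optimum. The sketch below shows this plain, non-evolutionary version (our illustration for reference; the paper instead analyzes RLS and the (1+1) EA exploiting the dual formulation with multi-valued representations).

```python
def dual_vertex_cover(edges, weight):
    """Classical LP-duality / pricing 2-approximation for weighted vertex cover.

    weight: dict vertex -> positive weight; residual[v] tracks how much of
    the weight of v is not yet 'paid for' by dual edge variables.
    """
    residual = dict(weight)
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue  # edge already covered
        # Raise the dual variable of edge (u, v) until one endpoint is tight.
        delta = min(residual[u], residual[v])
        residual[u] -= delta
        residual[v] -= delta
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover

# Small example: star with an expensive center.
edges = [("c", "a"), ("c", "b"), ("c", "d")]
weight = {"c": 10, "a": 1, "b": 1, "d": 1}
print(dual_vertex_cover(edges, weight))  # picks the cheap leaves, total cost 3
```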
Friedrich, Tobias; Kötzing, Timo; Lagodzinski, J. A. Gregor; Neumann, Frank; Schirneck, Martin: Analysis of the (1+1) EA on Subclasses of Linear Functions under Uniform and Linear Constraints. Foundations of Genetic Algorithms (FOGA) 2017: 45–54
Linear functions have gained a lot of attention in the area of run time analysis of evolutionary computation methods and the corresponding analyses have provided many effective tools for analyzing more complex problems. In this paper, we consider the behavior of the classical (1+1) Evolutionary Algorithm for linear functions under linear constraint. We show tight bounds in the case where both the objective function and the constraint are given by the OneMax function and present upper bounds as well as lower bounds for the general case. Furthermore, we also consider the LeadingOnes fitness function.
Krejca, Martin S.; Witt, Carsten: Lower Bounds on the Run Time of the Univariate Marginal Distribution Algorithm on OneMax. Foundations of Genetic Algorithms (FOGA) 2017: 65–79
The Univariate Marginal Distribution Algorithm (UMDA), a popular estimation of distribution algorithm, is studied from a run time perspective. On the classical OneMax benchmark function, a lower bound of \(\Omega(\mu \sqrt{n} + n \log n)\), where \(\mu\) is the population size, on its expected run time is proved. This is the first direct lower bound on the run time of the UMDA. It is stronger than the bounds that follow from general black-box complexity theory and is matched by the run time of many evolutionary algorithms. The results are obtained through advanced analyses of the stochastic change of the frequencies of bit values maintained by the algorithm, including carefully designed potential functions. These techniques may prove useful in advancing the field of run time analysis for estimation of distribution algorithms in general.
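A minimal sketch of the algorithm under analysis (our illustration; the parameter values are arbitrary, and the restriction of frequencies to \([1/n, 1-1/n]\) is the usual border handling):

```python
import random

def umda_onemax(n=100, mu=50, lam=200, max_iters=2_000):
    """UMDA on OneMax: maintain one frequency p[i] per bit, sample lam
    offspring, select the mu best, and set each frequency to the marginal
    of the selected individuals, capped to [1/n, 1 - 1/n]."""
    p = [0.5] * n
    for _ in range(max_iters):
        population = [[int(random.random() < p[i]) for i in range(n)]
                      for _ in range(lam)]
        population.sort(key=sum, reverse=True)
        selected = population[:mu]
        if sum(selected[0]) == n:
            return selected[0]  # optimum found
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            p[i] = min(max(freq, 1 / n), 1 - 1 / n)  # frequency borders
    return None

print(umda_onemax() is not None)
```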
Chauhan, Ankit; Friedrich, Tobias; Quinzan, Francesco: Approximating Optimization Problems using EAs on Scale-Free Networks. Genetic and Evolutionary Computation Conference (GECCO) 2017: 235–242
It has been experimentally observed that real-world networks follow certain topological properties, such as small-world, power-law etc. To study these networks, many random graph models, such as Preferential Attachment, have been proposed. In this paper, we consider the deterministic properties which capture power-law degree distribution and degeneracy. Networks with these properties are known as scale-free networks in the literature. Many interesting problems remain NP-hard on scale-free networks. We study the relationship between scale-free properties and the approximation ratio of some commonly used evolutionary algorithms. For the Vertex Cover problem, we observe experimentally that the \((1+1)\) EA always gives a better result than a greedy local search, even when it runs for only \(O(n \log n)\) steps. We give the construction of a scale-free network in which a multi-objective algorithm and a greedy algorithm obtain optimal solutions, while the \((1+1)\) EA obtains the worst possible solution with constant probability. We prove that for the Dominating Set, Vertex Cover, Connected Dominating Set and Independent Set problems, the \((1+1)\) EA obtains constant-factor approximations in expected run time \(O(n \log n)\) and \(O(n^4)\), respectively, whereas GSEMO gives an even better approximation than the \((1+1)\) EA in expected run time \(O(n^3)\) for Dominating Set, Vertex Cover and Connected Dominating Set on such networks.
Friedrich, Tobias; Kötzing, Timo; Melnichenko, Anna: Analyzing Search Heuristics with Differential Equations. Genetic and Evolutionary Computation Conference (GECCO) 2017: 313–314
Drift Theory is currently the most common technique for the analysis of randomized search heuristics because of its broad applicability and the resulting tight first hitting time bounds. The biggest problem when applying a drift theorem is to find a suitable potential function which maps a complex space into a single number, capturing the essence of the state of the search in just one value. We discuss another method for the analysis of randomized search heuristics based on the Theory of Differential Equations. This method considers the deterministic counterpart of the randomized process by replacing probabilistic outcomes by their expectation, and then bounding the error with good probability. We illustrate this by analyzing an Ant Colony Optimization algorithm (ACO) for the Minimum Spanning Tree problem (MST).
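As a simple illustration of the method (our example, not one taken from the paper): randomized local search on OneMax gains a correct bit with probability \((n-x)/n\) when \(x\) bits are already correct, so replacing the random progress by its expectation yields the deterministic counterpart
\[
\frac{dx}{dt} = \frac{n - x(t)}{n}, \qquad x(0) = \tfrac{n}{2}
\quad\Longrightarrow\quad
x(t) = n - \tfrac{n}{2}\, e^{-t/n},
\]
which reaches \(x = n-1\) after about \(t = n \ln(n/2)\) steps, recovering the familiar \(\Theta(n \log n)\) behavior; the error introduced by this replacement is then bounded with good probability.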
Doerr, Benjamin; Kötzing, Timo; Lagodzinski, J. A. Gregor; Lengler, Johannes: Bounding Bloat in Genetic Programming. Genetic and Evolutionary Computation Conference (GECCO) 2017: 921–928
While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations. A naturally occurring problem is that of bloat (unnecessary growth of solutions) slowing down optimization. Theoretical analyses could so far not bound bloat and required explicit assumptions on the magnitude of bloat. In this paper we analyze bloat in mutation-based genetic programming for the two test functions ORDER and MAJORITY. We overcome previous assumptions on the magnitude of bloat and give matching or close-to-matching upper and lower bounds for the expected optimization time. In particular, we show that the \((1+1)\) GP takes (i) \(\Theta(T_{\text{init}} + n \log n)\) iterations with bloat control on ORDER as well as MAJORITY; and (ii) \(O(T_{\text{init}} \log T_{\text{init}} + n (\log n)^3)\) and \(\Omega(T_{\text{init}} + n \log n)\) (and \(\Omega(T_{\text{init}} \log T_{\text{init}})\) for \(n = 1\)) iterations without bloat control on MAJORITY.
Doerr, Benjamin; Fischbeck, Philipp; Frahnow, Clemens; Friedrich, Tobias; Kötzing, Timo; Schirneck, Martin: Island Models Meet Rumor Spreading. Genetic and Evolutionary Computation Conference (GECCO) 2017: 1359–1366
Island models in evolutionary computation solve problems by a careful interplay of independently running evolutionary algorithms on the island and an exchange of good solutions between the islands. In this work, we conduct rigorous run time analyses for such island models trying to simultaneously obtain good run times and low communication effort. We improve the existing upper bounds for the communication effort (i) by improving the run time bounds via a careful analysis, (ii) by setting the balance between individual computation and communication in a more appropriate manner, and (iii) by replacing the usual communicate-with-all-neighbors approach with randomized rumor spreading, where each island contacts a randomly chosen neighbor. This epidemic communication paradigm is known to lead to very fast and robust information dissemination in many applications. Our results concern islands running simple (1+1) evolutionary algorithms, we regard d-dimensional tori and complete graphs as communication topologies, and optimize the classic test functions OneMax and LeadingOnes.
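A minimal sketch of the communication pattern (our illustration on a complete topology with OneMax; the parameter values and the keep-the-better migration rule are placeholders, not the exact setup analyzed in the paper): instead of broadcasting to all neighbors, each island periodically sends its current solution to a single uniformly random island.

```python
import random

def one_plus_one_step(x, n):
    """One iteration of a (1+1) EA on OneMax with standard bit mutation."""
    y = [bit ^ (random.random() < 1 / n) for bit in x]
    return y if sum(y) >= sum(x) else x

def island_model(num_islands=8, n=64, migration_interval=10, max_rounds=5_000):
    islands = [[random.randint(0, 1) for _ in range(n)] for _ in range(num_islands)]
    for round_no in range(1, max_rounds + 1):
        islands = [one_plus_one_step(x, n) for x in islands]
        if round_no % migration_interval == 0:
            # Rumor-spreading style migration: contact one random island.
            for i in range(num_islands):
                target = random.randrange(num_islands)
                if sum(islands[i]) > sum(islands[target]):
                    islands[target] = list(islands[i])  # receiver keeps the better solution
        if any(sum(x) == n for x in islands):
            return round_no
    return None

print(island_model())
```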
Doerr, Benjamin; Doerr, Carola; Kötzing, Timo: Unknown Solution Length Problems With No Asymptotically Optimal Run Time. Genetic and Evolutionary Computation Conference (GECCO) 2017: 1367–1374
We revisit the problem of optimizing a fitness function of unknown dimension; that is, we face a function defined over bit-strings of large length \(N\), but only \(n \ll N\) of them have an influence on the fitness. Neither the position of these relevant bits nor their number is known. In previous work, variants of the \((1 + 1)\) evolutionary algorithm (EA) have been developed that solve, for arbitrary \(s \in \mathbb{N}\), such OneMax and LeadingOnes instances, simultaneously for all \(n \in \mathbb{N}\), in expected time \(O(n (\log n)^2 \log\log n \cdots \log^{(s-1)}(n) (\log^{(s)}(n))^{1+\epsilon})\) and \(O(n^2 \log n \, \log\log n \cdots \log^{(s-1)}(n) (\log^{(s)}(n))^{1+\epsilon})\), respectively; that is, in almost the same time as if \(n\) and the relevant bit positions were known. In this work, we prove the first, almost matching, lower bounds for this setting. For LeadingOnes, we show that, for every \(s \in \mathbb{N}\), the \((1 + 1)\) EA with any mutation operator treating zeros and ones equally has an expected run time of \(\omega(n^2 \log n \, \log\log n \cdots \log^{(s)}(n))\) when facing problem size \(n\). Aiming at closing the small remaining gap, we realize that, quite surprisingly, there is no asymptotically best performance. For any algorithm solving, for all \(n\), all instances of size \(n\) in expected time at most \(T(n)\), there is an algorithm doing the same in time \(T'(n)\) with \(T' = o(T)\). For OneMax we show results of similar flavor.
Shi, Feng; Schirneck, Martin; Friedrich, Tobias; Kötzing, Timo; Neumann, Frank: Reoptimization Times of Evolutionary Algorithms on Linear Functions Under Dynamic Uniform Constraints. Genetic and Evolutionary Computation Conference (GECCO) 2017: 1407–1414
The investigation of linear pseudo-Boolean functions plays a central role in the area of runtime analysis of evolutionary computing techniques. Having an additional linear constraint on a linear function is equivalent to the NP-hard knapsack problem and special problem classes thereof have been investigated in recent works. In this paper, we extend these studies to problems with dynamic constraints and investigate the runtime of different evolutionary algorithms to recompute an optimal solution when the constraint bound changes by a certain amount. We study the classical \((1+1)\) EA and population-based algorithms and show that they recompute an optimal solution very efficiently. Furthermore, we show that a variant of the \((1+(\lambda, \lambda))\) GA can recompute the optimal solution more efficiently in some cases.
Michail, Othon; Skretas, George; Spirakis, Paul G.: On the Transformation Capability of Feasible Mechanisms for Programmable Matter. International Colloquium on Automata, Languages and Programming (ICALP) 2017: 136:1–136:15
In this work, we study theoretical models of programmable matter systems. The systems under consideration consist of spherical modules, kept together by magnetic forces and able to perform two minimal mechanical operations (or movements): rotate around a neighbor and slide over a line. In terms of modeling, there are \(n\) nodes arranged in a 2-dimensional grid and forming some initial shape. The goal is for the initial shape \(A\) to transform to some target shape \(B\) by a sequence of movements. Most of the paper focuses on transformability questions, meaning whether it is in principle feasible to transform a given shape to another. We first consider the case in which only rotation is available to the nodes. Our main result is that deciding whether two given shapes \(A\) and \(B\) can be transformed to each other is in P. We then insist on rotation only and impose the restriction that the nodes must maintain global connectivity throughout the transformation. We prove that the corresponding transformability question is in PSPACE and study the problem of determining the minimum seeds that can make otherwise infeasible transformations feasible. Next, we allow both rotations and slidings and prove universality: any two connected shapes \(A, B\) with the same number of nodes can be transformed to each other without breaking connectivity. The worst-case number of movements of the generic strategy is \(\Theta(n^2)\). We improve this to \(\mathcal{O}(n)\) parallel time, by a pipelining strategy, and prove optimality of both by matching lower bounds. We next turn our attention to distributed transformations. The nodes are now distributed processes able to perform communicate-compute-move rounds. We provide distributed algorithms for a general type of transformation.
Casel, Katrin; Fernau, Henning; Grigoriev, Alexander; Schmid, Markus L.; Whitesides, Sue: Combinatorial Properties and Recognition of Unit Square Visibility Graphs. International Symposium on Mathematical Foundations of Computer Science (MFCS) 2017: 30:1–30:15
Unit square (grid) visibility graphs (USV and USGV, resp.) are described by axis-parallel visibility between unit squares placed (on integer grid coordinates) in the plane. We investigate combinatorial properties of these graph classes and the hardness of variants of the recognition problem, i.e., the problem of representing USGV with fixed visibilities within small area and, for USV, the general recognition problem.
Chauhan, Ankit; Lenzner, Pascal; Melnichenko, Anna; Molitor, Louise: Selfish Network Creation with Non-Uniform Edge Cost. Symposium on Algorithmic Game Theory (SAGT) 2017: 160–172
Network creation games investigate complex networks from a game-theoretic point of view. Based on the original model by Fabrikant et al. [PODC’03] many variants have been introduced. However, almost all versions have the drawback that edges are treated uniformly, i.e. every edge has the same cost and that this common parameter heavily influences the outcomes and the analysis of these games. We propose and analyze simple and natural parameter-free network creation games with non-uniform edge cost. Our models are inspired by social networks where the cost of forming a link is proportional to the popularity of the targeted node. Besides results on the complexity of computing a best response and on various properties of the sequential versions, we show that the most general version of our model has constant Price of Anarchy. To the best of our knowledge, this is the first proof of a constant Price of Anarchy for any network creation game.
Friedrich, Tobias; Ihde, Sven; Keßler, Christoph; Lenzner, Pascal; Neubert, Stefan; Schumann, David: Efficient Best Response Computation for Strategic Network Formation under Attack. Symposium on Algorithmic Game Theory (SAGT) 2017: 199–211
Inspired by real world examples, e.g. the Internet, researchers have introduced an abundance of strategic games to study natural phenomena in networks. Unfortunately, almost all of these games have the conceptual drawback of being computationally intractable, i.e. computing a best response strategy or checking if an equilibrium is reached is NP-hard. Thus, a main challenge in the field is to find tractable realistic network formation models. We address this challenge by investigating a very recently introduced model by Goyal et al. [WINE'16] which focuses on robust networks in the presence of a strong adversary who attacks (and kills) nodes in the network and lets this attack spread virus-like to neighboring nodes and their neighbors. Our main result is to establish that this natural model is one of the few exceptions which are both realistic and computationally tractable. In particular, we answer an open question of Goyal et al. by providing an efficient algorithm for computing a best response strategy, which implies that deciding whether the game has reached a Nash equilibrium can be done efficiently as well. Our algorithm essentially solves the problem of computing a minimal connection to a network which maximizes the reachability while hedging against severe attacks on the network infrastructure and may thus be of independent interest.
Chechik, Shiri; Cohen, Sarel; Fiat, Amos; Kaplan, Haim: \((1 + \varepsilon)\)-Approximate \(f\)-Sensitive Distance Oracles. Symposium on Discrete Algorithms (SODA) 2017: 1479–1496
An \(f\)-sensitive distance oracle with stretch \(\alpha\) preprocesses a graph \(G(V, E)\) and produces a small data structure that is used to answer subsequent queries. A query is a triple consisting of a set \(F \subset E\) of at most \(f\) edges, and vertices \(s\) and \(t\). The oracle answers a query \((F, s, t)\) by returning a value \(d\) which is equal to the length of some path between \(s\) and \(t\) in the graph \(G \setminus F\) (the graph obtained from \(G\) by discarding all edges in \(F\)). Moreover, \(d\) is at most \(\alpha\) times the length of the shortest path between \(s\) and \(t\) in \(G \setminus F\). The oracle can also construct a path between \(s\) and \(t\) in \(G \setminus F\) of length \(d\). To the best of our knowledge, we give the first nontrivial \(f\)-sensitive distance oracle with fast query time and small stretch capable of handling multiple edge failures. Specifically, for any \(f = \mathcal{O}(\frac{\log n}{\log \log n})\) and a fixed \(\varepsilon > 0\), our oracle answers queries \((F, s, t)\) in time \(\mathcal{O}(1)\) with \((1 + \varepsilon)\) stretch using a data structure of size \(n^{2+o(1)}\). For comparison, the naive alternative requires \(m^f n^2\) space for sublinear query time.
Bläsius, Thomas; Radermacher, Marcel; Rutter, Ignaz: How to Draw a Planarization. Current Trends in Theory and Practice of Computer Science (SOFSEM) 2017: 295–308
We study the problem of computing straight-line drawings of non-planar graphs with few crossings. We assume that a crossing minimization algorithm is applied first, yielding a planarization, i.e., a planar graph with a dummy vertex for each crossing, that fixes the topology of the resulting drawing. We present and evaluate two different approaches for drawing a planarization in such a way that the edges of the input graph are as straight as possible. The first approach is based on the planarity-preserving force-directed algorithm ImPrEd, the second approach, which we call Geometric Planarization Drawing, iteratively moves vertices to their locally optimal positions in the given initial drawing.
Friedrich, Tobias; Ihde, Sven; Keßler, Christoph; Lenzner, Pascal; Neubert, Stefan; Schumann, David: Brief Announcement: Efficient Best Response Computation for Strategic Network Formation under Attack. Symposium on Parallelism in Algorithms and Architectures (SPAA) 2017: 321–323
Inspired by real world examples, e.g. the Internet, researchers have introduced an abundance of strategic games to study natural phenomena in networks. Unfortunately, almost all of these games have the conceptual drawback of being computationally intractable, i.e. computing a best response strategy or checking if an equilibrium is reached is NP-hard. Thus, a main challenge in the field is to find tractable realistic network formation models. We address this challenge by establishing that the recently introduced model by Goyal et al.[WINE'16], which focuses on robust networks in the presence of a strong adversary, is a rare exception which is both realistic and computationally tractable. In particular, we sketch an efficient algorithm for computing a best response strategy, which implies that deciding whether the game has reached a Nash equilibrium can be done efficiently as well. Our algorithm essentially solves the problem of computing a minimal connection to a network which maximizes the reachability while hedging against severe attacks on the network infrastructure.
Cseh, Ágnes; Matuschke, Jannik: New and Simple Algorithms for Stable Flow Problems. Workshop Graph-Theoretic Concepts in Computer Science (WG) 2017: 206–219
Stable flows generalize the well-known concept of stable matchings to markets in which transactions may involve several agents, forwarding flow from one to another. An instance of the problem consists of a capacitated directed network, in which vertices express their preferences over their incident edges. A network flow is stable if there is no group of vertices that all could benefit from rerouting the flow along a walk. Fleiner [13] established that a stable flow always exists by reducing it to the stable allocation problem. We present an augmenting-path algorithm for computing a stable flow, the first algorithm that achieves polynomial running time for this problem without using stable allocation as a black-box subroutine. We further consider the problem of finding a stable flow such that the flow value on every edge is within a given interval. For this problem, we present an elegant graph transformation and based on this, we devise a simple and fast algorithm, which also can be used to find a solution to the stable marriage problem with forced and forbidden edges. Finally, we study the highly complex stable multicommodity flow model by Király and Pap [24]. We present several graph-based reductions that show equivalence to a significantly simpler model. We further show that it is NP-complete to decide whether an integral solution exists.