|Hyperbolic Random Graphs - Giants in Pizza Slices
Hyperbolic random graphs share many common properties with complex real-world networks; e.g.,
small diameter and average distance, large clustering coefficient, and a power-law degree sequence.
Thus, when analyzing algorithms for large networks, potentially more realistic results can be achieved
by assuming the input to be a hyperbolic random graph. To this end, a deeper understanding of the emerging
structure is key to designing fitting algorithms for hyperbolic random graphs. Of special interest
is the emerging giant component, i.e., the largest connected component, and where it lives within the hyperbolic plane.
The talk aims to give a gentle introduction to the model and then discusses our results regarding the
giant component of hyperbolic random graphs. The main focus lies on the key ideas that
lead to our 'Pizza Slice Lemma', which gives the size of a giant component that lives entirely within a
hyperbolic pizza slice. The talk concludes with an outlook on algorithmic implications of the Pizza Slice Lemma.
|The Common-Neighbors Metric is Noise-Robust and Reveals Substructures of Real-World Networks
Real-world networks typically display a complex structure that is hard to explain by a single model. A common approach is to partition the edges of the network into disjoint, simpler structures. An important property in this context is locality: incident vertices usually have many common neighbors. This allows us to classify edges into two groups based on the number of common neighbors of their incident vertices. Formally, this is captured by the common-neighbors (CN) metric, which forms the basis of many metrics for detecting outlier edges. Such outliers can be interpreted as noise or as a substructure.
We aim to understand how useful the metric is, and we empirically analyze several scenarios. We randomly insert outlier edges into real-world and generated graphs with high locality, and we measure the accuracy of the metric for partitioning the combined edges. In addition, we use the metric to decompose real-world networks and measure properties of the partitions. Our results show that the CN metric is a very good classifier that can reliably detect noise up to extreme levels (83% noisy edges). We also provide mathematically rigorous analyses for special random-graph models. Lastly, we find that the CN metric consistently decomposes real-world networks into two graphs with very different structures.
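As a concrete illustration of the metric (the function and example below are our own, not from the paper), the CN score of an edge is the number of common neighbors of its two endpoints; edges with a low score are candidate outliers:

```python
def cn_scores(edges):
    """Common-neighbors (CN) score of each edge (u, v): |N(u) ∩ N(v)|."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return {(u, v): len(adj[u] & adj[v]) for u, v in edges}

# A triangle with a pendant edge: the triangle edges are "local",
# while the pendant edge (3, 4) has CN score 0 and looks like noise.
scores = cn_scores([(1, 2), (2, 3), (1, 3), (3, 4)])
```

A classifier in the spirit of the abstract would then threshold these scores to split the edge set into a high-locality part and an outlier part.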
|The Weighted Uncovered Set and its Margin of Victory
|Approximate Max-Flow Min-Multicut Theorem for Graphs of Bounded Treewidth
This talk will be a first version of a talk to be given at STOC '23. In the paper presented, we prove an approximate max-multiflow min-multicut theorem for bounded-treewidth graphs. In particular, we show the following: given a treewidth-r graph, there exist a (fractional) multicommodity flow of value f and a multicut of capacity c such that f ≤ c ≤ O(ln(r+1)) · f. It is well known that the multiflow-multicut gap on an r-vertex (constant-degree) expander graph can be Ω(ln r), and hence our result is tight up to constant factors. Our proof is constructive, and we also obtain a polynomial-time O(ln(r+1))-approximation algorithm for the minimum multicut problem on treewidth-r graphs. Our algorithm proceeds by rounding the optimal fractional solution to the natural linear programming relaxation of the multicut problem. We introduce novel modifications to the well-known region-growing algorithm to facilitate the rounding while guaranteeing a loss factor that is at most logarithmic in the treewidth.
|Why Evolutionary Algorithms?
I'll give some background on what evolutionary algorithms do and what they're used for. Then I'll dive into the methods used to prove results in this area and discuss some simple theorems.
|Jonathan Gadea Harder
Xiaoyue Sherry Li
|Analyses of Evolutionary Algorithms
In this talk, our three young EA scientists go one step deeper than last week's seminar: from why one would choose evolutionary algorithms to how we analyze them.
First, Aishwarya will show some examples of how drift theorems are used as a tool in the runtime analysis of evolutionary algorithms. Second, Sherry will use a project and three research questions to present how we analyze algorithms experimentally. Lastly, we give a brief overview of applications of evolutionary algorithms to combinatorial problems, with an emphasis on maximum matching.
|Successive shortest paths and minimum spanning trees
Let G be a complete graph on n vertices, with i.i.d. edge weights
uniform on (0, 1). Given two random vertices s and t, it is known that
the shortest path P1 from s to t has cost asymptotically (ln n)/n. It is
also known that the MST T1 of G has cost asymptotically ζ(3). We
introduce “successive” versions of both problems.
If the edges of P1 are deleted from G, what is the cost of the cheapest remaining path P2? In general, what is the cost of the cheapest path Pk edge-disjoint from all previous paths? Writing w(Pk) for the cost of Pk, we show that, uniformly for all k from 1 to n−1,
w(Pk)/(2k/n + (ln n)/n) converges to 1 in probability. (Joint work with S. Gerke and B. Mezei.)
Likewise, what is the weight w(Tk) of the cheapest spanning tree edge-disjoint from all previous ones? We show that each w(Tk) converges in probability to some γk, with 2k−2√k < γk < 2k+2√k. This and further results follow from structural properties of inhomogeneous random graphs naturally defined by Kruskal’s algorithm. (Joint work
with S. Janson.)
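The asymptotics for P1 are easy to check empirically. The sketch below (our own illustration, not from the paper) runs Dijkstra on K_n with i.i.d. Uniform(0,1) edge weights; for n = 200 the theory predicts a cost near (ln n)/n ≈ 0.026:

```python
import heapq
import math
import random

def shortest_path_cost(n, s, t, rng):
    """Dijkstra on the complete graph K_n with i.i.d. Uniform(0,1) weights,
    drawn lazily and memoized so each edge weight is sampled once."""
    w = {}
    def weight(u, v):
        key = (u, v) if u < v else (v, u)
        if key not in w:
            w[key] = rng.random()
        return w[key]

    dist = [math.inf] * n
    dist[s] = 0.0
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist[u]:
            continue
        for v in range(n):
            if v != u and d + weight(u, v) < dist[v]:
                dist[v] = d + weight(u, v)
                heapq.heappush(heap, (dist[v], v))
    return dist[t]

n = 200
cost = shortest_path_cost(n, 0, 1, random.Random(0))
# Compare against the predicted (ln n)/n ≈ 0.0265 for n = 200.
```

Deleting the edges of the returned path and re-running would give an empirical handle on w(P2), w(P3), and so on.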
|Being an Influencer is Hard: The Complexity of Influence Maximization in Temporal Graphs with a Fixed Source
We consider the influence maximization problem over a temporal graph, where there is a single fixed source.
We deviate from the standard model of influence maximization, where the goal is to choose the set of most influential vertices.
Instead, in our model we are given a fixed vertex, or source, and the goal is to find the best time steps to transmit so that the influence of this vertex is maximized.
We frame this problem as a spreading process that follows a variant of the susceptible-infected-susceptible (SIS) model and we focus on four objective functions.
In the \problemone objective, the goal is to maximize the total number of vertices that get infected at least once.
In the \problemtwo objective, the goal is to maximize the number of vertices that are infected at the same time step.
In the \problemthree objective, the goal is to maximize the number of vertices that are infected at a given time step.
Finally, in \problemfour, the goal is to maximize the total number of vertices that get infected every $d$ time steps.
We perform a thorough complexity theoretic analysis for these four objectives over three different scenarios: (1) the unconstrained setting where the source can transmit whenever it wants; (2) the window-constrained setting where the source has to transmit at either a predetermined, or a shifting window; (3) the periodic setting where the temporal graph has a small period.
We prove that all of these problems, with the exception of \problemone for periodic graphs, are intractable even for very simple underlying graphs.
Joint work with Argyrios Deligkas, Michelle Doering, Eduard Eiben, George Skretas
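To make the setting concrete, here is a toy simulation (our own simplification; the paper's SIS variant and objectives differ in their details, and all names are illustrative): the temporal graph is a list of edge sets, one per time step, the source injects infection at its chosen transmission steps, and infected vertices recover after exactly one step.

```python
def spread(temporal_edges, source, transmit_steps):
    """Returns the set of vertices infected at least once, i.e. the
    quantity maximized by the first objective in this toy variant."""
    infected = set()   # vertices infected at the current time step
    ever = set()       # vertices infected at least once
    for t, edges in enumerate(temporal_edges):
        if t in transmit_steps:
            infected.add(source)
            ever.add(source)
        newly = set()
        for u, v in edges:
            if u in infected:
                newly.add(v)
            if v in infected:
                newly.add(u)
        infected = newly   # one-step recovery (SIS-style)
        ever |= infected
    return ever

# A temporal path 0-1-2-3 whose edges appear one after the other:
# transmitting at step 0 lets the infection ride the whole path.
reach = spread([[(0, 1)], [(1, 2)], [(2, 3)]], source=0, transmit_steps={0})
```

Transmitting at step 2 instead would only ever infect the source itself, which is what makes choosing the transmission steps nontrivial.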
|Analysis of the (1+1) EA on LeadingOnes with Constraints
Understanding how evolutionary algorithms perform on constrained problems has gained increasing attention in recent years. In this paper, we study how evolutionary algorithms optimize constrained versions of the classical LeadingOnes problem. We first provide a runtime analysis of the classical (1+1)~EA on the LeadingOnes problem with a deterministic cardinality constraint, proving the tight bound $\Theta(n (n-B)\log(B) + n^2)$, where $B$ is the constraint bound. Our results show that the behaviour of the algorithm depends heavily on this bound. Afterwards, we consider the problem in the context of stochastic constraints and provide insights, using experimental studies, into how the ($\mu$+1)~EA is able to deal with these constraints in a sampling-based setting.
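A minimal sketch of the constrained setting (our own illustration, assuming standard bit mutation and rejection of infeasible offspring; the paper's exact setup may differ):

```python
import random

def leading_ones(x):
    """Number of consecutive ones at the start of the bit string."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def one_plus_one_ea(n, B, rng, max_iters=100_000):
    """(1+1) EA on LeadingOnes under the cardinality constraint sum(x) <= B:
    flip each bit independently with probability 1/n, reject infeasible
    offspring, and accept offspring that are at least as fit."""
    x = [0] * n            # the all-zeros string is feasible
    fx = 0
    for _ in range(max_iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if sum(y) <= B:
            fy = leading_ones(y)
            if fy >= fx:
                x, fx = y, fy
        if fx == B:        # optimum: B leading ones
            break
    return x, fx

x, fx = one_plus_one_ea(n=20, B=5, rng=random.Random(1))
```

Once the budget B of ones is exhausted, improving steps must flip a trailing one off while flipping the next leading zero on, which is the mechanism behind the dependence on B in the bound above.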
|Rodolphe Le Riche
|Bayesian and evolutionary optimization: searching from both sides
Bayesian and evolutionary optimization are two fundamental approaches to the global optimization of black-box functions.
This talk begins with an introduction to Bayesian optimization, in particular the TREGO algorithm, which adds an important trust-region mechanism.
In addition, we discuss the remarkable complementarities between Bayesian and evolutionary optimization: how the problem is modeled, what evolutionary optimization can do for Bayesian optimization and vice versa.
We conclude with two principles for creating evolutionary variation operators from a Gaussian process: a crossover defined as a barycenter in feature space and a Bayesian variation operator.
|Solving Directed Feedback Vertex Set by Iterative Reduction to Vertex Cover
In the Directed Feedback Vertex Set (DFVS) problem, one is given a directed graph G = (V, E) and wants to find a minimum cardinality set S ⊆ V such that G − S is acyclic. DFVS is a fundamental problem in computer science and finds applications in areas such as deadlock detection. The problem was the subject of the 2022 PACE coding challenge. We develop a novel exact algorithm for the problem that is tailored to perform well on instances that are mostly bi-directed. For such instances, we adapt techniques from the well-researched vertex cover problem. Our core idea is an iterative reduction to vertex cover. To this end, we also develop a new reduction rule that reduces the number of not bi-directed edges.
With the resulting algorithm, we were able to win third place in the exact track of the PACE challenge. We perform computational experiments and compare the running time to other exact algorithms, in particular to the winning algorithm in PACE. Our experiments show that we outpace the other algorithms on instances that have a low density of uni-directed edges.
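The connection to vertex cover can be seen on a tiny example (our own illustration, not the PACE solver): every bi-directed edge forms a 2-cycle, so any feedback vertex set must contain one of its endpoints, exactly the vertex cover condition on the bi-directed subgraph.

```python
from itertools import combinations

def is_acyclic(vertices, edges):
    """Kahn's algorithm: a digraph is acyclic iff it can be fully peeled
    by repeatedly removing vertices of in-degree zero."""
    indeg = {v: 0 for v in vertices}
    out = {v: [] for v in vertices}
    for u, v in edges:
        out[u].append(v)
        indeg[v] += 1
    stack = [v for v in vertices if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == len(vertices)

def brute_force_dfvs(vertices, edges):
    """Smallest S with G - S acyclic, by exhaustive search (tiny graphs only)."""
    for k in range(len(vertices) + 1):
        for S in combinations(sorted(vertices), k):
            rest = set(vertices) - set(S)
            sub = [(u, v) for u, v in edges if u in rest and v in rest]
            if is_acyclic(rest, sub):
                return set(S)

# Two 2-cycles sharing vertex 1: {1} is both a vertex cover of the
# bi-directed pairs {1,2}, {1,3} and an optimal DFVS.
S = brute_force_dfvs({1, 2, 3}, [(1, 2), (2, 1), (1, 3), (3, 1)])
```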
|Presentation of their Bachelor Thesis
Jonathan Gadea Harder
|Practice Talks for IJCAI'23
These three talks will be practice talks for the speakers in preparation for IJCAI'23. Each speaker will rehearse the paper they are going to present there.
Michelle will present an extension of Schelling's model. This model explains residential segregation in an elegant agent-based way by giving agents a type and a suitable objective function that depends on their neighbours. In Michelle's paper, the agents are not partitioned into discrete types but are instead described by real numbers. This opens the door to many different applications and offers far more expressive power than discrete types. The authors study the existence and computation of equilibria and provide bounds on the Price of Anarchy and the Price of Stability. They also present simulation results that compare their models and shed light on the obtained equilibria for their variants.
Jonathan will present a work that tweaks the classical Resource Selection Game. In that setting, agents aim to maximise their utility by not sharing the resources they pick with too many others. In Jonathan's paper, the agents are heterogeneous and strive to share resources, but only with agents similar to themselves. The authors show the existence and quality of equilibria as well as some complexity bounds.
George will present a brand-new model called "Temporal Network Creation Game", which brings together temporal graphs and game-theoretic network formation. The authors introduce this model and prove results on the convergence to and existence of equilibrium networks, on the complexity of finding best agent strategies, and on the quality of the equilibria.
|Presentation of their Bachelor Thesis
Jan Fehse: Determining Web Platforms with High Shares in Search Results
Nicola Kössler: An empirical study about the network properties of the VGIRGs
Jakob Timm: Tolerant Closeness Testing on Partially Known Discrete Distributions
Jenny Sommerfeld: Simulating Detection for Brownian Motion via Reflective Walk on Spheres
|3-Objective Pareto Optimization for Problems with Chance Constraints
Evolutionary multi-objective algorithms have successfully been used in the context of Pareto optimization, where a given constraint is relaxed into an additional objective. In this paper, we explore the use of 3-objective formulations for problems with chance constraints. Our formulation trades off the expected cost and the variance of the stochastic component as well as the given deterministic constraint. We point out benefits that this 3-objective formulation has over a bi-objective one recently investigated for chance constraints with normally distributed stochastic components. Our analysis shows that the 3-objective formulation makes it possible to compute all required trade-offs using 1-bit flips only when dealing with a deterministic cardinality constraint. Furthermore, we carry out experimental investigations for the chance-constrained dominating set problem and show the benefit for this classical NP-hard problem.
Based on work that appeared at IJCAI 2022 and GECCO 2023. Joint work with Carsten Witt.
|Perfect Sampling for Hard Spheres from Strong Spatial Mixing
We provide a perfect sampling algorithm for the hard-sphere model on subsets of R^d with expected running time linear in the volume under the assumption of strong spatial mixing. A large number of perfect and approximate sampling algorithms have been devised to sample from the hard-sphere model, and our perfect sampling algorithm is efficient for a range of parameters for which only efficient approximate samplers were previously known and is faster than these known approximate approaches. Our methods also extend to the more general setting of Gibbs point processes interacting via finite-range, repulsive potentials.