In the Algorithm Engineering group I am one of the mentors for the new PhD students. I also maintain the group's news feed. Besides my studies, I try to improve article quality in the German Wikipedia, especially in the mathematics section.
In 2015 I was a tutor at the HPI Schülerkolleg teaching school children basic Computer Science.
Kötzing, Timo; Schirneck, Martin; Seidel, Karen: Normal Forms in Semantic Language Identification. Algorithmic Learning Theory (ALT) 2017
We consider language learning in the limit from text where all learning restrictions are semantic, that is, where any conjecture may be replaced by a semantically equivalent conjecture. For different such learning criteria, starting with the well-known TxtGBc-learning, we consider three different normal forms: strongly locking learning, consistent learning and (partially) set-driven learning. These normal forms support and simplify proofs and give insight into what behaviors are necessary for successful learning (for example when consistency in conservative learning implies cautiousness and strong decisiveness). We show that strongly locking learning can be assumed for partially set-driven learners, even when learning restrictions apply. We give a very general proof relying only on a natural property of the learning restriction, namely, allowing for simulation on equivalent text. Furthermore, when no restrictions apply, also the converse is true: every strongly locking learner can be made partially set-driven. For several semantic learning criteria we show that learning can be done consistently. Finally, we deduce for which learning restrictions partial set-drivenness and set-drivenness coincide, including a general statement about classes of infinite languages. The latter again relies on a simulation argument.
Shi, Feng; Schirneck, Martin; Friedrich, Tobias; Kötzing, Timo; Neumann, Frank: Reoptimization Times of Evolutionary Algorithms on Linear Functions Under Dynamic Uniform Constraints. Genetic and Evolutionary Computation Conference (GECCO) 2017: 1407-1414
The investigation of linear pseudo-Boolean functions plays a central role in the area of runtime analysis of evolutionary computing techniques. Having an additional linear constraint on a linear function is equivalent to the NP-hard knapsack problem, and special problem classes thereof have been investigated in recent works. In this paper, we extend these studies to problems with dynamic constraints and investigate the runtime of different evolutionary algorithms to recompute an optimal solution when the constraint bound changes by a certain amount. We study the classical \((1+1)\) EA and population-based algorithms and show that they recompute an optimal solution very efficiently. Furthermore, we show that a variant of the \((1+(\lambda, \lambda))\) GA can recompute the optimal solution more efficiently in some cases.
Island models in evolutionary computation solve problems by a careful interplay of independently running evolutionary algorithms on the islands and an exchange of good solutions between the islands. In this work, we conduct rigorous run time analyses for such island models, trying to simultaneously obtain good run times and low communication effort. We improve the existing upper bounds for the communication effort (i) by improving the run time bounds via a careful analysis, (ii) by setting the balance between individual computation and communication in a more appropriate manner, and (iii) by replacing the usual communicate-with-all-neighbors approach with randomized rumor spreading, where each island contacts a randomly chosen neighbor. This epidemic communication paradigm is known to lead to very fast and robust information dissemination in many applications. Our results concern islands running simple (1+1) evolutionary algorithms, we regard d-dimensional tori and complete graphs as communication topologies, and we optimize the classic test functions OneMax and LeadingOnes.
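The push-based rumor-spreading paradigm mentioned above can be illustrated with a minimal sketch. This is a generic illustration on a complete communication topology, not code from the paper; the function name and parameters are my own.

```python
import random

def rumor_spreading_rounds(num_islands, seed=0):
    """Push-based randomized rumor spreading on a complete graph:
    in each round, every informed island contacts one uniformly
    random other island and informs it. Returns the number of
    rounds until every island is informed."""
    rng = random.Random(seed)
    informed = {0}  # island 0 holds the good solution initially
    rounds = 0
    while len(informed) < num_islands:
        newly = set()
        for island in informed:
            target = rng.randrange(num_islands)
            if target != island:
                newly.add(target)
        informed |= newly
        rounds += 1
    return rounds

print(rumor_spreading_rounds(128))
```

Since the number of informed islands can at most double per round, at least \(\log_2 n\) rounds are needed; the well-known upper bound for this process is \(O(\log n)\) rounds with high probability, which is what makes it attractive as a low-effort communication scheme.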
Friedrich, Tobias; Kötzing, Timo; Lagodzinski, J. A. Gregor; Neumann, Frank; Schirneck, Martin: Analysis of the (1+1) EA on Subclasses of Linear Functions under Uniform and Linear Constraints. Foundations of Genetic Algorithms (FOGA) 2017: 45-54
Linear functions have gained a lot of attention in the area of run time analysis of evolutionary computation methods, and the corresponding analyses have provided many effective tools for analyzing more complex problems. In this paper, we consider the behavior of the classical (1+1) Evolutionary Algorithm for linear functions under linear constraint. We show tight bounds in the case where both the objective function and the constraint are given by the OneMax function and present upper bounds as well as lower bounds for the general case. Furthermore, we also consider the LeadingOnes fitness function.
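For readers unfamiliar with the algorithm analyzed in the two papers above, here is a minimal sketch of the classical (1+1) EA on the unconstrained OneMax function. It is a generic textbook illustration, not code from either paper, and the parameter names are my own.

```python
import random

def one_max(x):
    """OneMax fitness: the number of ones in the bit string."""
    return sum(x)

def one_plus_one_ea(n, fitness, max_iters=100000, seed=0):
    """Classical (1+1) EA: start from a random bit string, flip each
    bit independently with probability 1/n, and keep the offspring
    if it is at least as fit as the parent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if fitness(y) >= fitness(x):
            x = y
        if fitness(x) == n:  # OneMax optimum reached
            break
    return x

print(sum(one_plus_one_ea(32, one_max)))
```

On OneMax the expected optimization time of this algorithm is \(\Theta(n \log n)\); the constrained settings studied in the papers modify the fitness comparison to respect the constraint.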
Bläsius, Thomas; Friedrich, Tobias; Schirneck, Martin: The Parameterized Complexity of Dependency Detection in Relational Databases. International Symposium on Parameterized and Exact Computation (IPEC) 2016: 6:1-6:13
We study the parameterized complexity of classical problems that arise in the profiling of relational data. Namely, we characterize the complexity of detecting unique column combinations (candidate keys), functional dependencies, and inclusion dependencies with the solution size as parameter. While the discovery of uniques and functional dependencies, respectively, turns out to be W[2]-complete, the detection of inclusion dependencies is one of the first natural problems proven to be complete for the class W[3]. As a side effect, our reductions give insights into the complexity of enumerating all minimal unique column combinations or functional dependencies.
Friedrich, Tobias; Kötzing, Timo; Krejca, Martin S.; Nallaperuma, Samadhi; Neumann, Frank; Schirneck, Martin: Fast Building Block Assembly by Majority Vote Crossover. Genetic and Evolutionary Computation Conference (GECCO) 2016: 661-668
Different works have shown how crossover can help with building block assembly. Typically, crossover might get lucky to select good building blocks from each parent, but these lucky choices are usually rare. In this work we consider a crossover operator which works on three parent individuals. In each component, the offspring inherits the value present in the majority of the parents; thus, we call this crossover operator majority vote. We show that, if good components are sufficiently prevalent in the individuals, majority vote creates an optimal individual with high probability. Furthermore, we show that this process can be amplified: as long as components are good independently and with probability at least \(1/2+\delta\), we require only \(O(\log(1/\delta) + \log \log n)\) successive stages of majority vote to create an optimal individual with high probability! We show how this applies in two scenarios. The first scenario is the Jump test function. With sufficient diversity, we get an optimization time of \(O(n \log n)\) even for jump sizes as large as \(O(n^{1/2-\epsilon})\). Our second scenario is a family of vertex cover instances. Majority vote optimizes this family efficiently, while local searches fail and only highly specialized two-parent crossovers are successful.
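The three-parent operator described above can be sketched in a few lines. This is a generic illustration of the majority vote idea, not code from the paper; the toy experiment below it uses hypothetical parameters of my own choosing.

```python
import random

def majority_vote(parents):
    """Three-parent majority vote crossover on bit strings: each
    offspring bit takes the value held by at least two parents."""
    return [1 if sum(bits) >= 2 else 0 for bits in zip(*parents)]

# Toy amplification experiment (hypothetical parameters): if each bit
# is good (set to 1) independently with probability p > 1/2 in every
# parent, the offspring bit is good with the strictly larger
# probability 3p^2(1-p) + p^3.
rng = random.Random(1)
p = 0.7
parents = [[1 if rng.random() < p else 0 for _ in range(1000)]
           for _ in range(3)]
child = majority_vote(parents)
print(sum(child) / 1000)  # fraction of good bits, around 0.78 > p
```

Iterating this amplification is what yields the \(O(\log(1/\delta) + \log \log n)\) bound on the number of stages mentioned in the abstract.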
Kötzing, Timo; Schirneck, Martin: Towards an Atlas of Computational Learning Theory. Symposium on Theoretical Aspects of Computer Science (STACS) 2016: 47:1-47:13
A major part of our knowledge about Computational Learning stems from comparisons of the learning power of different learning criteria. These comparisons inform about trade-offs between learning restrictions and, more generally, learning settings; furthermore, they inform about what restrictions can be observed without losing learning power. With this paper we propose that one main focus of future research in Computational Learning should be on a structured approach to determine the relations of different learning criteria. In particular, we propose that, for small sets of learning criteria, all pairwise relations should be determined; these relations can then be easily depicted as a map, a diagram detailing the relations. Once we have maps for many relevant sets of learning criteria, the collection of these maps is an Atlas of Computational Learning Theory, informing at a glance about the landscape of computational learning just as a geographical atlas informs about the earth. In this paper we work toward this goal by providing three example maps, one pertaining to partially set-driven learning, and two pertaining to strongly monotone learning. These maps can serve as blueprints for future maps of similar base structure.
Schirneck, Martin: On Restrictions in Computational Language Learning. 2015
Dean's prize for best thesis ("Examenspreis des Dekans").
In 1990 Fulk proved that partial set-drivenness (rearrangement-independence) does not weaken the power of unrestricted computational language learning. The question arises whether this result still holds if paired with various learning restrictions. We investigate the influence of two main categories of such restrictions, namely content-based and delayable ones. An adaptation of Fulk's theorem is verified for content-based learning and some delayable restrictions regarding U-shaped learning. On the other hand, we give an example criterion of delayable learning — explanatory learning from text by a strongly monotone scientist — for which partial set-drivenness does reduce the learning power. Additionally, the interdependence of these restrictions with several other interaction operators and success criteria is explored.
Schirneck, Martin: Betrachtungen über ein distanzbasiertes Klassifikationsverfahren. 2012
In this thesis, a novel distance-based classification method for categorizing numerical data vectors under a binary class partition is designed. The metrics employed are based on the volume of finite unions of d-dimensional balls in Euclidean space. The computation rule given treats the two-dimensional case concretely. Furthermore, some topological properties of the new method are demonstrated. Of particular interest here is the behavior of the medial axis of the arising distance functions. The findings suggest that the developed method is capable of interpolating between the k-nearest-neighbors method and a linear classifier.
Our research focus is on theoretical computer science and algorithm engineering. We are equally interested in the mathematical foundations of algorithms and in developing efficient algorithms in practice. A special focus is on random structures and methods.