## Organization

Due to the COVID-19 pandemic, this course will be offered **online.** Thus, it is important that all participants enroll by **April 22** via our Moodle page.

## Motivation

**Mathematical optimization** describes the task of finding, among all solutions to a given problem, one that is best possible according to some measure. An easy method taught in school is the maximization or minimization of a given differentiable function \(f\) of \(x\) by setting the first derivative of \(f\) equal to \(0\) and then solving for \(x\). Unfortunately, problems of practical relevance are usually far harder to solve. For example, the optimization variants of many NP-complete decision problems are of great practical importance.
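As a small worked example of this school method (chosen here purely for illustration), consider maximizing \(f(x) = -(x - 2)^2 + 1\). Setting the first derivative to zero gives

\[
f'(x) = -2(x - 2) = 0 \iff x = 2,
\]

and since \(f''(x) = -2 < 0\), the point \(x = 2\) is indeed a maximizer, with maximum value \(f(2) = 1\).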

One way of approaching such hard problems is to only look for a sufficiently good (and not necessarily optimal) solution. Algorithms that guide the search process to such solutions are called **heuristics** and are commonly applied. For classical problems, such as the Traveling Salesman Problem, heuristics are based on years of experience and research in that area. However, if one is faced with a different problem, these heuristics may not apply, and one has to start from scratch.
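To make the notion of a heuristic concrete, here is a minimal sketch (not taken from the course material) of the classical nearest-neighbor heuristic for the Traveling Salesman Problem: starting from some city, always travel to the closest city not yet visited.

```python
import math

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: repeatedly move to the closest unvisited city.

    `points` is a list of (x, y) coordinates; the tour starts at city 0.
    """
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        # Pick the unvisited city with minimum Euclidean distance to the
        # current position.
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (0, 1), (2, 0), (2, 1)]
print(nearest_neighbor_tour(cities))
```

The heuristic runs quickly but offers no general optimality guarantee, which is exactly why problem-specific research into better TSP heuristics has been so extensive.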

To avoid devising a new optimization algorithm for each problem, **general-purpose solvers** exist, which try to solve a wide range of optimization problems. This is done by only making very general assumptions about the problem, for example, that the quality of a solution can be determined efficiently, and then exploiting these assumptions during the optimization process. Oftentimes, such solvers use randomness to make decisions about where to search next. The resulting algorithms are commonly referred to as **randomized search heuristics** (RSHs).

One important class of RSHs is that of **evolutionary algorithms** (EAs), which draw inspiration from natural evolution. EAs maintain and evolve a multiset of candidate solutions (the *population*). In each iteration, new solutions are randomly generated from the population by means of various operators, such as mutation or recombination. Afterward, a subset of the best solutions is selected for the next iteration. This simple concept has proven powerful, and EAs are routinely applied to real-world problems, often with great success.
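As a deliberately minimal sketch of this scheme (assuming Python and the classical OneMax benchmark, i.e., maximizing the number of ones in a bit string), the arguably simplest EA, the so-called (1+1) EA, can be written as:

```python
import random

def one_max(x):
    """Fitness function: number of ones in the bit string (to be maximized)."""
    return sum(x)

def one_plus_one_ea(n, max_iters=100_000, seed=0):
    """(1+1) EA: population of size 1, standard bit mutation, elitist selection."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        # Mutation: flip each bit independently with probability 1/n.
        child = [1 - b if rng.random() < 1 / n else b for b in parent]
        # Selection: keep the child if it is at least as good as the parent.
        if one_max(child) >= one_max(parent):
            parent = child
        if one_max(parent) == n:
            break
    return parent

best = one_plus_one_ea(20)
print(one_max(best))  # fitness of the returned bit string
```

Here the population has size one, mutation flips each bit independently with probability \(1/n\), and selection is elitist. Precisely this kind of algorithm is a standard object of the runtime analyses this lecture focuses on.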

## Content and Goals of This Lecture

We are interested in understanding how to analyze randomized search heuristics, especially evolutionary algorithms (EAs), in a **mathematically rigorous** manner. To this end, we look at different tools from probability theory that aid in the analysis, and we learn when to apply them. These tools include arithmetic inequalities (such as Bernoulli’s), stochastic inequalities (such as Markov’s and Chernoff’s), and drift theory (based on the theory of sub- and supermartingales). In the end, the participants should be capable of analyzing simple randomized processes similar to those seen in the lecture.
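For reference, two of the inequalities mentioned above can be stated as follows. Bernoulli’s inequality: for every real \(x \ge -1\) and every integer \(n \ge 0\),

\[
(1 + x)^n \ge 1 + nx.
\]

Markov’s inequality: for every nonnegative random variable \(X\) and every \(a > 0\),

\[
\Pr[X \ge a] \le \frac{\mathrm{E}[X]}{a}.
\]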

Please note that the introductory part of this course is shared with our course Probability and Computing. After this initial segment, the courses will separate, as this course is dedicated to introducing tools used in the analysis of EAs (mostly drift theory), whereas Andreas’s course goes into detail about Markov chains, mixing times, and couplings.

## Structure of This Course

Each week, the participants are given homework, for example reading material provided by the course or exercises to solve. The results of this homework are then discussed and expanded on during the lecture. Questions and discussions during the lecture are explicitly welcome.

## Examination

The grade for this course is fully determined by a written final exam. In the case of a small number of participants, an oral exam may be held instead. Everyone who enrolls in this course is eligible to take the final exam. There are no other requirements (such as completing the homework).