Hasso-Plattner-Institut
Prof. Dr. Tobias Friedrich
 

Introduction

When analyzing a problem’s complexity, one usually looks at the total time it takes an algorithm to fully process the given input data and deliver the corresponding output. From an algorithmic perspective we are interested in fast algorithms that minimize this total time, especially for large inputs. Such an algorithm gives an upper bound on the problem’s time complexity. However, it does not reveal whether the same problem admits an even faster solution. For that, one has to apply tools from complexity theory to prove lower bounds on the time complexity.

Complexity Theory

The most prominent lower bound in theoretical computer science is the \(\Omega(n \log(n))\) bound on the time complexity of comparison-based sorting. This bound is neatly matched by the \(O(n\log(n))\) upper bound achieved by sorting algorithms such as MergeSort or HeapSort.

For many problems, however, we do not know any non-trivial lower bound, that is, a lower bound beyond the time needed to read the input and write the output. Instead, we base conditional complexity lower bounds for those problems on the conjectured complexity of other problems, most prominently on the \(P \neq NP\) conjecture [1], which aims to separate the problems that can be solved “efficiently” by a deterministic polynomial-time algorithm from those that cannot. In recent years, researchers have come up with several new conjectures to establish more fine-grained complexity classes within \(P\), e.g. the Orthogonal Vectors Hypothesis and the All-Pairs Shortest Paths (APSP) Hypothesis; the latter claims that the well-known Floyd-Warshall algorithm for APSP is essentially optimal, as there is no \(\varepsilon > 0\) such that APSP can be solved in \(O(n^{3-\varepsilon})\) time. See [2] for a brief introduction to the topic.
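
To make concrete what the APSP Hypothesis claims to be essentially optimal, the following minimal Python sketch of the Floyd-Warshall algorithm (the function name and the toy graph are our own illustration) shows the three nested loops responsible for the \(\Theta(n^3)\) runtime:

```python
import math

def floyd_warshall(dist):
    """All-pairs shortest paths on an n x n distance matrix.

    dist[i][j] is the edge weight from i to j (math.inf if there is no edge);
    the matrix is updated in place to hold shortest-path distances.
    The three nested loops give Theta(n^3) time; the APSP Hypothesis
    conjectures that no O(n^(3-eps)) algorithm exists for any eps > 0.
    """
    n = len(dist)
    for k in range(n):            # intermediate vertex
        for i in range(n):        # source
            for j in range(n):    # target
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# toy usage on a directed graph with 4 vertices
INF = math.inf
d = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
floyd_warshall(d)
# d[0][2] is now 5 (path 0 -> 1 -> 2)
```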

Enumeration Problems and Algorithms

As stated before, most traditional complexity theory investigates total time: given an input of size \(n\), how long does it take an algorithm to produce the output? Usually, one was content with a polynomial runtime, that is, a runtime of \(O(n^c)\) for some constant \(c > 0\). Such a runtime was considered “efficient”.

[Figure: processing...]

However, in today’s systems the input size \(n\) is so large that even small polynomials can lead to infeasible runtimes. In addition, systems rarely consist of a single data processing step; they are made up of multiple algorithms run in series, potentially on many machines. If one of these algorithms takes a long time to produce its output, the rest of the pipeline is stalled and computing power sits idle.

To address this issue, we try to solve problems by enumerating small parts of the solution quickly. Even though the total runtime of such an algorithm cannot be better (and might even be worse) than when producing the output in one piece, subsequent steps in the data processing pipeline can already work on partial outputs long before the whole solution is available, thereby cutting down the overall runtime of the pipeline. Such a setup has, for example, been implemented very successfully in the area of DNA sequence alignment [3].
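
Schematically, this can be pictured as a Python generator: the upstream step emits partial results one by one, and the downstream step consumes each of them immediately. The function names and the toy workload below are hypothetical placeholders, not taken from any concrete system:

```python
import time
from typing import Iterable, Iterator

def produce_partial_results(data: list[int]) -> Iterator[int]:
    """Hypothetical upstream step: emits one partial result at a time
    instead of returning the full output only at the very end."""
    for x in data:
        time.sleep(0.01)      # stand-in for the real per-item work
        yield x * x           # stand-in for a partial solution

def consume(partials: Iterable[int]) -> int:
    """Hypothetical downstream step: starts working on each partial
    result as soon as it arrives, so the pipeline is never stalled."""
    total = 0
    for p in partials:
        total += p            # stand-in for further processing
    return total

print(consume(produce_partial_results([1, 2, 3, 4])))   # prints 30
```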

[Figure: preprocessing... enumerating...]

From a theory perspective, the main question we want to answer for this kind of enumeration problem is: how long must a subsequent pipeline step wait in the worst case until the next partial solution is provided? We call this time the worst-case delay. As algorithms often need to prepare data structures or at least read the full input before starting the enumeration, we separately analyze the preprocessing time the algorithm may spend before working on the first partial solution.
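
As a purely illustrative aid (the helper below is our own and not part of any library), the two quantities can be observed empirically for a Python generator: the time until the first partial solution arrives and the largest gap between consecutive partial solutions. Keep in mind that the theoretical notions are worst-case guarantees over all inputs, not measurements on a single run:

```python
import time
from typing import Iterator, Optional, Tuple

def measure_enumeration(enumerator: Iterator) -> Tuple[Optional[float], float]:
    """Observe, for one run of an enumerator, the time until the first
    partial solution (preprocessing plus the first delay) and the
    largest gap between consecutive partial solutions (observed delay)."""
    start = time.perf_counter()
    last = start
    time_to_first: Optional[float] = None
    max_delay = 0.0
    for _ in enumerator:
        now = time.perf_counter()
        if time_to_first is None:
            time_to_first = now - start       # time before the first output
        else:
            max_delay = max(max_delay, now - last)  # gap between outputs
        last = now
    return time_to_first, max_delay

# toy usage: observe an enumerator that yields squares one by one
print(measure_enumeration(x * x for x in range(1000)))
```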

Our Research

In our work we focus on analyzing well-known problems in P from the new perspective of enumerating partial solutions with little delay. On the one hand, this speeds up pipelines as described above. On the other hand, our approach provides new insights into problems without a known non-trivial lower bound: by trying to distribute an algorithm’s computing time homogeneously over the produced partial solutions, the analysis might reveal which parts of the final output are the hardest to compute, leading to new lower bounds or to faster algorithms that optimize the solution of this specific sub-problem.

Example

As a toy example, we showcase this method on the comparison-based sorting problem:

Given an unsorted array of \(n\) objects, sort them according to the total order \(\leq\).

It is well known that this problem cannot be solved with \(o(n \log(n))\) comparisons and thus needs at least \(\Omega(n \log(n))\) total time. As the runtime of MergeSort matches this lower bound, the time complexity of the problem is \(\Theta(n \log(n))\). But can this runtime be distributed homogeneously over partial solutions? To answer this question, we first have to define what the algorithm should produce as partial solutions.

Given an unsorted array of \(n\) objects, enumerate them from smallest to largest according to the total order \(\leq\).

As it takes \(\Omega(n)\) time to find the smallest object, every algorithm needs at least linear preprocessing time or linear delay to solve this enumeration problem. As we know an unconditional lower bound of \(\Omega(n \log(n))\) on the total time and there are \(n\) partial solutions, the preprocessing time plus \(n\) times the delay must be in \(\Omega(n \log(n))\); in particular, with linear preprocessing the delay must be in \(\Omega(\log(n))\). HeapSort matches these lower bounds: it initializes a MinHeap in \(\Theta(n)\) time and afterwards extracts the next smallest item from it in \(O(\log(n))\) time each. Thus, comparison-based sorting can be solved through homogeneous enumeration with HeapSort. Note that MergeSort, for example, does not sort homogeneously, as it takes \(\Omega(n \log(n))\) time until the smallest object is known.
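
This enumeration can be sketched in Python with the standard heapq module, which has the same cost structure as HeapSort: heapify builds a MinHeap in \(\Theta(n)\) preprocessing time, and each heappop delivers the next smallest element within \(O(\log(n))\) delay:

```python
import heapq
from typing import Iterator, List

def sorted_enumeration(items: List[int]) -> Iterator[int]:
    """Enumerate the items from smallest to largest.

    Preprocessing: heapq.heapify builds a binary MinHeap in Theta(n).
    Delay: each heapq.heappop takes O(log n), so consecutive partial
    solutions are at most O(log n) time apart, matching the
    Omega(log n) lower bound on the delay.
    """
    heap = list(items)              # copy, so the input stays untouched
    heapq.heapify(heap)             # Theta(n) preprocessing
    while heap:
        yield heapq.heappop(heap)   # O(log n) per partial solution

print(list(sorted_enumeration([5, 1, 4, 2, 3])))   # [1, 2, 3, 4, 5]
```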

References

  1. Cook, Stephen A. “The Complexity of Theorem-Proving Procedures”. In Proceedings of the Third Annual ACM Symposium on Theory of Computing (STOC ’71), 151–158. Shaker Heights, Ohio, USA: Association for Computing Machinery, 1971. https://doi.org/10.1145/800157.805047.
  2. Bringmann, Karl. “Fine-Grained Complexity Theory (Tutorial)”, 2019. https://doi.org/10.4230/lipics.stacs.2019.4.
  3. Lindner, Martin S., Benjamin Strauch, Jakob M. Schulze, Simon H. Tausch, Piotr W. Dabrowski, Andreas Nitsche, and Bernhard Y. Renard. “HiLive: Real-Time Mapping of Illumina Reads While Sequencing”. Bioinformatics 33, no. 6 (March 15, 2017): 917–919. https://doi.org/10.1093/bioinformatics/btw659.