Imagine \(m\) machines (e.g., a compute cluster or robots in a production line) on which you want to process \(n\) jobs (think datasets, workpieces, etc.). Which job should go to which machine in which order to minimize makespan? How would you maximize throughput? Can you ensure that you meet all the deadlines for the individual jobs?
The underlying problem is that of scheduling jobs on machines. With each job come processing-time requirements on the different machines and possibly release times, deadlines, precedence constraints, or other dependencies. The machines, in turn, might all be identical or differ in speed or purpose, and there might be constraints on the order in which jobs pass through the machines. Finally, one might want to optimize one of many possible objective functions, such as the makespan, the maximum lateness of any job, or the number of late jobs. Computing an (approximately) optimal schedule that assigns jobs to machines for specific time intervals is a core problem of algorithm engineering. The "scheduling zoo" [2] collects many results on scheduling problems.
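For concreteness, the following minimal sketch evaluates the three objectives above for a given schedule. The `Entry` representation and function names are our own (hypothetical, not taken from [2]), and we assume each job has a deadline \(d_j\) and occupies exactly one schedule entry.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    job: int      # index j of job J_j
    machine: int  # index i of machine M_i
    start: float  # start time s
    finish: float # finish time f

def makespan(schedule):
    # Latest finish time over all entries.
    return max(e.finish for e in schedule)

def max_lateness(schedule, deadline):
    # L_max = max_j (C_j - d_j); with one entry per job,
    # the completion time C_j is that entry's finish time.
    return max(e.finish - deadline[e.job] for e in schedule)

def num_late_jobs(schedule, deadline):
    # Number of jobs finishing strictly after their deadline.
    return sum(1 for e in schedule if e.finish > deadline[e.job])
```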
We analyze scheduling problems with an additional twist: Instead of running an algorithm to compute a complete schedule and only then using it, we expect a solution algorithm to enumerate schedule entries \((j, i, s, f)\) with the meaning: "Process job \(J_j\) on machine \(M_i\) from start time \(s\) to finish time \(f\)." We aim for an algorithm with little preprocessing time and a small delay between consecutive enumerated entries. Then we can begin executing the schedule as soon as the first entries are fixed, which may be well before the algorithm has finalized the whole solution.
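To illustrate the enumeration style (as a sketch only, not the algorithms analyzed here), consider greedy list scheduling on \(m\) identical machines written as a generator: after \(O(m)\) preprocessing it yields each entry \((j, i, s, f)\) with \(O(\log m)\) delay, so execution can start as soon as the first entry appears. The function name and instance format are hypothetical.

```python
import heapq

def enumerate_schedule(processing_times, m):
    """Yield fixed entries (j, i, s, f) one at a time, in schedule order."""
    # Min-heap of (time the machine becomes free, machine index).
    free_at = [(0.0, i) for i in range(m)]
    heapq.heapify(free_at)
    for j, p in enumerate(processing_times):
        s, i = heapq.heappop(free_at)   # earliest-available machine
        f = s + p
        yield (j, i, s, f)              # entry is final; execution may begin
        heapq.heappush(free_at, (f, i))

# Usage: start executing as soon as the first entries arrive.
for entry in enumerate_schedule([3.0, 2.0, 4.0, 1.0], m=2):
    print(entry)
```

Here each yielded tuple is final: the greedy rule never revises an earlier decision, so the first job can already be running while later entries are still being produced.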