A modern approach to medical image analysis involves deep learning algorithms that build hierarchical feature representations of the input image. These high-dimensional feature maps must be kept in GPU memory during training, which leads to excessive memory usage on graphics processing units (GPUs). For that reason, today's deep learning algorithms focus primarily on low-resolution images; for instance, images used to train models on the ImageNet dataset are typically resized to 224x224 pixels. Applying deep learning to large medical images therefore requires novel methods. In this project, we strive to develop methods that fulfill the following requirements (illustrated by the sketches after the list):
- Process only the relevant parts of an image in detail
- Scale to arbitrarily many relevant image parts
- Scale to arbitrarily large input images, such as gigapixel microscope slides
- Train on a single GPU
- Model dependencies between object parts
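To make the memory problem concrete, the following back-of-envelope calculation estimates the activation memory of a single 64-channel convolutional feature map at both resolutions; the channel count and the slide size (100,000 x 100,000 pixels) are illustrative assumptions:

```python
def activation_bytes(height, width, channels, bytes_per_value=4):
    """Memory needed to store one float32 feature map of shape (channels, H, W)."""
    return height * width * channels * bytes_per_value

# One 64-channel feature map at ImageNet resolution (224x224): about 12 MiB.
print(activation_bytes(224, 224, 64) / 2**20, "MiB")

# The same layer applied densely to a 100,000 x 100,000 gigapixel slide needs
# roughly 2.3 TiB, far beyond any single GPU, and this is just one layer.
print(activation_bytes(100_000, 100_000, 64) / 2**40, "TiB")
```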
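The toy sketch below shows one way the requirements could fit together: tile the image into patches, score every patch cheaply, process only the top-k patches in detail, and model dependencies between the selected parts with self-attention. All module names, layer sizes, and the hard top-k heuristic are illustrative assumptions, not this project's actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchSelectionSketch(nn.Module):  # hypothetical module
    def __init__(self, patch=224, k=16, dim=128):
        super().__init__()
        self.patch, self.k = patch, k
        # Cheap relevance scorer, run on 32x32 thumbnails of every patch.
        self.scorer = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
        # Expensive feature extractor, applied only to the selected patches.
        self.extract = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        # Self-attention over patch embeddings models part dependencies.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, 2)  # e.g., benign vs. malignant logits

    def forward(self, image):
        # image: (3, H, W), with H and W multiples of self.patch; detailed
        # processing is bounded by k patches rather than the full image.
        p = self.patch
        patches = image.unfold(1, p, p).unfold(2, p, p)      # tile the image
        patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, 3, p, p)
        thumbs = F.interpolate(patches, size=32)             # cheap thumbnails
        idx = self.scorer(thumbs).flatten().topk(self.k).indices
        feats = self.extract(patches[idx]).unsqueeze(0)      # (1, k, dim)
        feats, _ = self.attn(feats, feats, feats)            # part dependencies
        return self.head(feats.mean(dim=1))                  # (1, 2)

model = PatchSelectionSketch()
logits = model(torch.rand(3, 2240, 2240))  # 100 candidate patches, 16 kept
```

Note that hard top-k selection blocks gradients to the scorer; a practical method would need a differentiable selection mechanism or a separately supervised scorer.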