My research revolves around machine learning in medical imaging, with a focus on semantic segmentation. Semantic segmentation has many interesting applications, e.g. computer-aided detection and diagnosis of diseases, volume calculations to assess the suitability of organs such as the liver for transplantation, delineation of organs at risk in radiotherapy, or correlating image features with clinical/molecular features in radiomics.
State-of-the-art approaches to semantic segmentation use deep convolutional neural networks, which require many labeled images. These labels are difficult to acquire because only medical experts such as radiologists can create them. Annotating at this scale is infeasible given the number of use cases, imaging modalities, and acquisition protocols, and, most importantly, the experts' limited time.
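To make the labeling burden concrete, here is a minimal sketch of such a segmentation network. It assumes PyTorch; the model name (`TinySegNet`), layer sizes, and toy data are illustrative choices of mine, not a production architecture. The key point is in the last lines: the loss is computed against a dense label map, so every single pixel needs an expert-provided annotation.

```python
# Minimal encoder-decoder for semantic segmentation (illustrative only).
# Assumes PyTorch; layer sizes and names are hypothetical, chosen for brevity.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        # Encoder: downsample to learn coarse, semantic features.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to input resolution for per-pixel predictions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# One supervised training step on random stand-in data.
model = TinySegNet()
images = torch.randn(4, 1, 64, 64)         # e.g. a batch of CT slices
labels = torch.randint(0, 2, (4, 64, 64))  # dense expert annotations, one label per pixel
logits = model(images)                     # shape: (4, num_classes, 64, 64)
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()
```

The `labels` tensor is exactly what is so expensive here: for real data it must be drawn slice by slice by a radiologist, for every new use case and modality.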
Consider how fast we humans learn new things. We do not need a teacher to tell us multiple times what a previously unknown object represents. A child seeing a dog for the first time will be able to identify different dogs as the same category, even if they differ tremendously and even though the child may not yet know the class label.
I’m particularly interested in how we can reproduce these capabilities in computers and leverage them in medical imaging. My vision is that every acquired medical image can be analyzed by a machine, much as a radiologist can analyze images across a broad range of use cases.
Have a look at the following high-level picture, which shows how such a system may be realized and used in a clinical context.