Hasso-Plattner-Institut
HPI Digital Health Center

Benjamin Bergner

Research Assistant, PhD Candidate

  Phone: +49 (331) 5509-4823
  Fax: +49 (331) 5509-163
  Email: benjamin.bergner(at)hpi.de
  Room: G-2.2.16 (Campus III)

Research Topics

My research revolves around machine learning in medical imaging with a focus on semantic segmentation. Semantic segmentation has many interesting applications, e.g., computer-aided detection and diagnosis of diseases, volume calculations to assess the suitability of organs such as the liver for transplantation, delineation of organs at risk in radiotherapy, or correlating image features with clinical/molecular features in radiomics.

State-of-the-art approaches for semantic segmentation use deep convolutional neural networks, which require many labeled images. These labels are difficult to acquire because only medical experts such as radiologists can create them. Labeling at scale is infeasible given the number of use cases, imaging modalities, and acquisition protocols, and most importantly the experts' limited time.

Consider how fast we humans learn new things. We do not need a teacher to tell us repeatedly what a previously unknown object represents. A child seeing a dog for the first time will be able to identify different dogs as the same category, even if they differ tremendously and even though the child may not yet know the class label.

I'm particularly interested in how we can reproduce these capabilities in computers and leverage them in medical imaging. My vision is that every acquired medical image can be analyzed by a machine, similar to how a radiologist is able to analyze images across a broad range of use cases.

Have a look at the following high-level picture, which shows how such a system may be realized and used in a clinical context.

Images, which may be extracted from a Picture Archiving and Communication System (PACS), are input to a model selector. The model selector can be seen as the brain of the system, which, similar to a radiologist, decides based on the image content which task to solve. All available knowledge is located in a model store. After a model has been selected, it is executed, and the output is displayed in a graphical user interface that the corresponding medical expert can review.
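To make the selector/store idea concrete, here is a minimal sketch of that pipeline. All names (the toy "models", `MODEL_STORE`, `select_model`) are hypothetical, and dispatching on a metadata tag stands in for a real content-based selector.

```python
# Hypothetical sketch of the model selector / model store pipeline.
# An "image" is simplified to a list of probabilities for illustration.

def brain_model(image):
    # Placeholder for a brain segmentation model.
    return {"task": "brain_segmentation", "mask": [p > 0.5 for p in image]}

def liver_model(image):
    # Placeholder for a liver segmentation model.
    return {"task": "liver_segmentation", "mask": [p > 0.7 for p in image]}

# The model store holds all available knowledge of the system.
MODEL_STORE = {"brain": brain_model, "liver": liver_model}

def select_model(image_metadata):
    # A real selector would decide from the image content itself;
    # here we dispatch on a metadata tag to keep the sketch short.
    return MODEL_STORE[image_metadata["body_part"]]

image = [0.2, 0.6, 0.9]
model = select_model({"body_part": "liver"})
output = model(image)  # this result would be shown to the expert for review
```

A real system would of course replace the dictionary lookup with a learned classifier and the toy models with trained segmentation networks, but the control flow (select, execute, display) stays the same.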

Now, the question remains how this knowledge accumulates given only sparsely labeled image datasets. The following figure depicts several approaches I focus on in my research.


  1. One way to solve this problem is to transfer the parameters of one or more of the currently available models and fine-tune them on a target task.

  2. A second way is to pre-train a model on an unlabeled dataset in an unsupervised fashion and subsequently fine-tune it as in step 1.

  3. A third way is to use labeled and unlabeled images at the same time to train a semi-supervised learner.
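The first approach above can be sketched in a few lines of NumPy. The pretrained weights, the toy dataset, and all variable names are made up for illustration: a frozen "feature extractor" is transferred from a source task, and only a small new head is fine-tuned on the target task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights come from a model pretrained on a source task.
W_pretrained = rng.normal(size=(4, 3))  # frozen feature extractor

def features(x):
    # Transferred representation; kept fixed during fine-tuning.
    return np.tanh(x @ W_pretrained)

# Toy target task: tiny binary classification dataset.
X = rng.normal(size=(32, 4))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(w, b):
    p = sigmoid(features(X) @ w + b)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Only the new head is trained on the target task.
w_head, b_head = np.zeros(3), 0.0
initial_loss = log_loss(w_head, b_head)

for _ in range(500):  # plain gradient descent on the logistic loss
    p = sigmoid(features(X) @ w_head + b_head)
    w_head -= 0.5 * (features(X).T @ (p - y) / len(y))
    b_head -= 0.5 * np.mean(p - y)

final_loss = log_loss(w_head, b_head)
```

In a real setting the extractor would be a deep convolutional network and one would typically unfreeze some of its layers as well; the sketch only shows the division into transferred and newly trained parameters.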

If you are interested in my research, don't hesitate to contact me via email or visit my office. If you are looking for a project or thesis in machine learning, software engineering, or design thinking, we can find a suitable topic for you together.