Disentangled Representation Learning of Medical Brain Images using Flow-based Models
Supervision: Prof. Dr. Nassir Navab,
Dr. Seong Tae Kim,
Matthias Keicher
Abstract
Generative models such as GANs and VAEs do not learn the data distribution directly, since the exact likelihood tends to be intractable. Instead, these models either maximize a lower bound on the log-likelihood of the data (VAEs) or train the generator against an adversarial network (GANs). Invertible flow-based models, in contrast, directly optimize the exact log-likelihood of the data using normalizing flows. In this project, we study the use of flow-based models for learning meaningful, disentangled representations of medical brain images in both supervised and unsupervised settings. Flow-based models also learn latent representations that can be used for downstream tasks such as controlled image manipulation. We expect disentangled representations to allow control over the generative factors of the images, which could be used to produce highly controlled synthetic images for training other models that require large amounts of labeled or unlabeled data.
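As a rough illustration of the direct likelihood training that distinguishes flow-based models, the sketch below implements a RealNVP-style affine coupling flow in PyTorch and computes the exact log-likelihood via the change-of-variables formula, log p(x) = log p(z) + log |det dz/dx|. The layer sizes, number of coupling layers, and the flattened 64-dimensional inputs are illustrative assumptions, not part of the project specification.

    # Minimal sketch (assumptions noted above): a RealNVP-style coupling flow
    # trained by directly maximizing the exact log-likelihood of the data.
    import torch
    import torch.nn as nn

    class AffineCoupling(nn.Module):
        """Splits the input in half; one half parameterizes an affine map of the other."""
        def __init__(self, dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim // 2, hidden), nn.ReLU(),
                nn.Linear(hidden, dim),          # outputs log-scale and shift
            )

        def forward(self, x):
            x1, x2 = x.chunk(2, dim=1)
            log_s, t = self.net(x1).chunk(2, dim=1)
            log_s = torch.tanh(log_s)            # keep scales bounded for stability
            y2 = x2 * torch.exp(log_s) + t       # invertible affine transform
            log_det = log_s.sum(dim=1)           # log |det Jacobian| of this layer
            return torch.cat([x1, y2], dim=1), log_det

        def inverse(self, y):
            # Exact inverse: this is what allows mapping latents back to images.
            y1, y2 = y.chunk(2, dim=1)
            log_s, t = self.net(y1).chunk(2, dim=1)
            log_s = torch.tanh(log_s)
            x2 = (y2 - t) * torch.exp(-log_s)
            return torch.cat([y1, x2], dim=1)

    class Flow(nn.Module):
        def __init__(self, dim, n_layers=4):
            super().__init__()
            self.layers = nn.ModuleList([AffineCoupling(dim) for _ in range(n_layers)])

        def forward(self, x):
            # Change of variables: log p(x) = log p(z) + sum of log |det J| terms.
            log_det_total = x.new_zeros(x.shape[0])
            for layer in self.layers:
                x, log_det = layer(x)
                x = x.flip(dims=[1])             # permute features so both halves get transformed
                log_det_total = log_det_total + log_det
            prior = torch.distributions.Normal(0.0, 1.0)
            log_pz = prior.log_prob(x).sum(dim=1)
            return log_pz + log_det_total        # exact log-likelihood per sample

    # Training step: minimize the negative log-likelihood directly.
    flow = Flow(dim=64)
    opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
    x = torch.randn(32, 64)                      # stand-in for flattened image patches
    loss = -flow(x).mean()
    opt.zero_grad(); loss.backward(); opt.step()

Because each coupling layer has an analytic inverse, a trained flow maps images to latent codes and back exactly, which is what enables the latent-space image manipulation mentioned in the abstract.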
Requirements:
- Good understanding of statistics and machine learning methods.
- Very good programming skills in Python and TensorFlow or PyTorch.
Location: