
Chair for Computer Aided Medical Procedures & Augmented Reality


G. Carneiro, T. Peng, C. Bayer, N. Navab
Automatic Quantification of Tumour Hypoxia from Multi-modal Microscopy Images using Weakly-Supervised Learning Methods
IEEE Transactions on Medical Imaging PP(99):1-1

In recently published clinical trial results, hypoxia-modified therapies have been shown to provide more positive outcomes to cancer patients compared with standard cancer treatments. The development and validation of these hypoxia-modified therapies depend on an effective way of measuring tumour hypoxia, but a standardised measurement is currently unavailable in clinical practice. Different types of manual measurements have been proposed in clinical research, but in this paper we focus on a recently published approach that quantifies the number and proportion of hypoxic regions using high-resolution (immuno-)fluorescence (IF) and hematoxylin and eosin (HE) stained images of a histological specimen of a tumour. We introduce new machine learning-based methodologies to automate this measurement, where the main challenge is that the expert annotations available for training consist of the total number of normoxic, chronically hypoxic and acutely hypoxic regions, without any indication of their location in the image. This therefore represents a weakly-supervised structured output classification problem, where training is based on a high-order loss function formed by the norm of the difference between the manual and estimated annotations mentioned above. We propose four methodologies to solve this problem: 1) a naive method that uses a majority classifier applied on the nodes of a fixed grid placed over the input images; 2) a baseline method based on a structured output learning formulation that relies on a fixed grid placed over the input images; 3) an extension to this baseline based on a latent structured output learning formulation that uses a graph that is flexible in terms of the number and positions of nodes; and 4) a pixel-wise labelling based on a fully convolutional neural network. Using a dataset of 89 weakly annotated pairs of IF and HE images from eight tumours, we show that the quantitative results of methods (3) and (4) above are equally competitive and superior to the naive (1) and baseline (2) methods. All proposed methodologies show high correlation values with respect to the expert annotations.
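The count-based weak supervision described in the abstract can be illustrated with a small sketch. The following is a rough, hypothetical PyTorch example, not the authors' implementation: a toy classifier assigns soft class labels to the nodes of a fixed grid placed over an image (in the spirit of methods 1 and 2), and the training loss is the norm of the difference between the estimated per-class region counts and the weakly annotated counts. The model, feature dimension, node count and annotated counts are placeholders chosen only for illustration.

    # Minimal sketch (assumed setup, not the paper's code) of count-based weak supervision:
    # the only training signal is the per-class region count, so the loss is the norm of
    # the difference between predicted and annotated counts.
    import torch
    import torch.nn as nn

    NUM_CLASSES = 3  # normoxic, chronically hypoxic, acutely hypoxic (as in the abstract)

    class GridCountModel(nn.Module):
        """Toy classifier over the nodes of a fixed grid; each node gets a soft label."""
        def __init__(self, feat_dim: int, num_classes: int = NUM_CLASSES):
            super().__init__()
            self.node_classifier = nn.Linear(feat_dim, num_classes)

        def forward(self, node_features: torch.Tensor) -> torch.Tensor:
            # node_features: (num_nodes, feat_dim) -> soft labels: (num_nodes, num_classes)
            return torch.softmax(self.node_classifier(node_features), dim=-1)

    def count_loss(soft_labels: torch.Tensor, annotated_counts: torch.Tensor) -> torch.Tensor:
        """Norm of the difference between estimated and manually annotated per-class counts."""
        estimated_counts = soft_labels.sum(dim=0)  # soft (differentiable) count per class
        return torch.norm(estimated_counts - annotated_counts, p=2)

    if __name__ == "__main__":
        # Hypothetical data: 64 grid nodes with 128-d features, and a weak annotation
        # giving only the (normoxic, chronic, acute) region counts for the whole image.
        model = GridCountModel(feat_dim=128)
        optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
        node_features = torch.randn(64, 128)
        annotated_counts = torch.tensor([40.0, 15.0, 9.0])
        for _ in range(100):
            optimiser.zero_grad()
            loss = count_loss(model(node_features), annotated_counts)
            loss.backward()
            optimiser.step()

Using soft (summed probability) counts keeps the objective differentiable; the structured and latent-structured formulations in the paper go further by optimising over the grid or graph labelling itself, which this sketch does not attempt.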
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.



