About this Publication
Title
Cancer Survival Prediction From Whole Slide Images With Self-Supervised Learning and Slide Consistency.
Pubmed ID
37015696
Digital Object Identifier
Publication
IEEE Trans Med Imaging. 2023 May; Volume 42 (Issue 5): Pages 1401-1412
Authors
Fan L, Sowmya A, Meijering E, Song Y
Abstract

Histopathological Whole Slide Images (WSIs) at gigapixel resolution are the gold standard for cancer analysis and prognosis. Due to the scarcity of pixel- or patch-level annotations of WSIs, many existing methods attempt to predict survival outcomes based on a three-stage strategy that includes patch selection, patch-level feature extraction, and aggregation. However, the patch features are usually extracted by using truncated models (e.g., ResNet) pretrained on ImageNet without fine-tuning on WSI tasks, and the aggregation stage does not consider the many-to-one relationship between multiple WSIs and the patient. In this paper, we propose a novel survival prediction framework that consists of patch sampling, feature extraction, and patient-level survival prediction. Specifically, we employ two kinds of self-supervised learning methods, i.e., colorization and cross-channel prediction, as pretext tasks to train ConvNet-based models that are tailored for extracting features from WSIs. Then, in the patient-level survival prediction stage, we explicitly aggregate features from multiple WSIs, using consistency and contrastive losses to normalize slide-level features at the patient level. We conduct extensive experiments on three large-scale datasets: TCGA-GBM, TCGA-LUSC, and NLST. Experimental results demonstrate the effectiveness of the proposed framework, which achieves state-of-the-art performance compared with previous studies, with concordance indices of 0.670, 0.679, and 0.711 on TCGA-GBM, TCGA-LUSC, and NLST, respectively.
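The abstract does not give the exact form of the slide-consistency objective, so the Python sketch below is only one plausible reading, not the authors' implementation: pull each patient's slide-level feature vectors toward their common centroid so that multiple WSIs of the same patient yield a coherent patient-level representation. The function slide_consistency_loss and the feature dimensions are hypothetical.

import torch
import torch.nn.functional as F

def slide_consistency_loss(slide_feats: torch.Tensor) -> torch.Tensor:
    # Hypothetical consistency loss (assumed form, not the paper's exact one):
    # pull one patient's slide-level features toward their common centroid
    # via cosine distance, so that multiple WSIs of the same patient agree.
    # slide_feats: (num_slides, feat_dim), one row per WSI of a patient.
    centroid = slide_feats.mean(dim=0, keepdim=True)          # (1, feat_dim)
    sims = F.cosine_similarity(slide_feats, centroid, dim=1)  # (num_slides,)
    return (1.0 - sims).mean()                                # 0 when all slides agree

# Toy usage: a patient with 3 WSIs and 128-dimensional slide features.
feats = torch.randn(3, 128)
print(slide_consistency_loss(feats).item())

In a full pipeline, a term like this would be added to the survival loss so that slide-level features are normalized at the patient level before aggregation.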
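The reported evaluation metric, the concordance index (C-index), is standard and can be stated precisely: among all comparable patient pairs (the patient with the shorter time must have an observed event, not a censored one), it is the fraction whose predicted risks are ordered consistently with their survival times, with tied risks counted as 0.5. A minimal reference implementation, independent of the paper's code:

import numpy as np

def concordance_index(times, events, risks):
    # Harrell's C-index: fraction of comparable patient pairs whose
    # predicted risks are ordered consistently with survival times.
    # times:  observed survival or censoring times
    # events: 1 if the event (death) was observed, 0 if censored
    # risks:  predicted risk scores (higher risk = shorter expected survival)
    concordant, comparable = 0.0, 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if the earlier time is an observed event.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: risks perfectly ordered against survival times -> C-index = 1.0
times  = np.array([5.0, 8.0, 3.0, 10.0])
events = np.array([1, 0, 1, 1])
risks  = np.array([0.7, 0.2, 0.9, 0.1])
print(concordance_index(times, events, risks))

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect ranking, which puts the reported values of 0.670-0.711 in context.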

Related CDAS Studies