Pan-cancer analyses of imaging-clinical phenotype associations
Aim 1: To connect radiology image patterns with clinical phenotypes. We will develop deep convolutional neural networks to identify regions of interest in radiology images, and train phenotype-classification models to identify differences in radiology image patterns among patients with different cancer stages, survival outcomes, and demographic profiles. This approach is expected to reveal novel radiology image features correlated with important clinical characteristics.
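As a minimal illustrative sketch of the Aim 1 analysis, the snippet below trains an image classifier that maps radiology image patches to a clinical phenotype such as cancer stage. The transfer-learning backbone (ResNet-18), the number of stage classes, and the training loop are assumptions for illustration, not the proposal's finalized architecture or data pipeline.

```python
# Hypothetical sketch: phenotype classification from radiology images.
import torch
import torch.nn as nn
from torchvision import models

NUM_STAGES = 4  # assumed number of stage classes, for illustration only

# Start from an ImageNet-pretrained backbone and replace the final layer
# with a phenotype (stage) classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_STAGES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images, stage_labels):
    """One optimization step on a mini-batch of (image, stage-label) pairs."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)                 # shape: (batch, NUM_STAGES)
    loss = criterion(logits, stage_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Similar classifiers could be trained against survival outcomes or demographic variables by swapping the label definition and classification head.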
Aim 2: To associate pathology image features with clinical phenotypes. We will extract quantitative image features from whole-slide pathology images and use supervised machine learning methods to associate them with patients' clinical phenotypes, such as cancer stage, demographics, and overall survival. We will also employ convolutional neural network methods to identify novel features from the histopathology images. We will visualize the image features associated with these clinical variables and repeat these analyses for all four cancer types in PLCO to establish useful prediction models for each.
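A minimal sketch of the supervised association step in Aim 2 is shown below, assuming quantitative per-slide features (e.g., nuclear size, shape, and texture summaries) have already been extracted. The random-forest learner, the feature matrix, and the binary phenotype labels are placeholders for illustration.

```python
# Hypothetical sketch: associating quantitative pathology features with a clinical phenotype.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_slides, n_features = 200, 50
X = rng.normal(size=(n_slides, n_features))   # placeholder feature matrix (one row per slide)
y = rng.integers(0, 2, size=n_slides)         # placeholder labels (e.g., early vs. late stage)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")

# Feature importances highlight the image features most associated with the phenotype,
# which can then be visualized on the original whole-slide images.
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("Top-ranked feature indices:", top)
```

For survival outcomes, the same feature matrix could instead feed a time-to-event model (e.g., a Cox regression) rather than a classifier.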
Aim 3: To correlate pathology and radiology image features. We will extract computational image features from pathology slides and radiology images (plain X-ray films, computed tomography, and sonography) and employ supervised machine learning techniques to identify correlations between these image features. These analyses are expected to reveal associations between the microscopic morphology of cancers and their radiologic findings.
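One way to quantify cross-modal associations in Aim 3 is canonical correlation analysis between patient-matched pathology and radiology feature matrices; this specific method and the placeholder feature matrices below are assumptions for illustration, not the proposal's specified analysis.

```python
# Hypothetical sketch: correlating pathology-derived and radiology-derived features
# from the same patients via canonical correlation analysis.
import numpy as np
from sklearn.cross_decomposition import CCA
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_patients = 150
X_path = rng.normal(size=(n_patients, 40))    # placeholder pathology feature matrix
X_radio = rng.normal(size=(n_patients, 30))   # placeholder radiology feature matrix

cca = CCA(n_components=2)
U, V = cca.fit_transform(X_path, X_radio)     # paired canonical variates

# The correlation of each canonical variate pair quantifies the strength of the
# association between microscopic morphology and radiologic appearance.
for k in range(U.shape[1]):
    r, p = pearsonr(U[:, k], V[:, k])
    print(f"Canonical pair {k + 1}: r = {r:.2f}, p = {p:.3g}")
```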
Kun-Hsing Yu, MD, PhD, Harvard Medical School.