Principal Investigator
Name
William Hsu
Degrees
PhD
Institution
David Geffen School of Medicine at UCLA
Position Title
Professor
About this CDAS Project
Study
NLST
Project ID
NLST-785
Initial CDAS Request Approval
Apr 19, 2021
Title
Multimodal Data Fusion of Radiomic, Pathomic, and Semantic Features to Classify Pulmonary Nodules
Summary
Early detection of lung cancer is crucial for improving the survival of the more than 220,000 individuals diagnosed with this disease each year. Although the landmark National Lung Screening Trial (NLST) demonstrated that computed tomography (CT) screening reduces lung cancer mortality, wider use of CT screening is projected to identify a growing number of pulmonary nodules annually. While most nodules are benign, their discovery induces anxiety among patients, creates a burden on the healthcare system, and, for some patients, represents early-stage cancer. A growing number of machine learning models have emerged to predict whether a nodule is malignant, but their accuracy varies markedly across specific subpopulations. The objective of this project is to establish a computational framework for multimodal data fusion of clinical, radiological imaging, digital pathology, and semantic (radiologist-interpreted) features to create a more accurate and generalizable prediction model for identifying aggressive lung cancer among CT-detected nodules. Building upon our existing work using machine learning techniques to detect and classify pulmonary nodules on CT, we will refine and validate algorithms for segmenting and extracting quantitative features from 293 cases that have matching CT scans and digital slides. We will examine correspondences between quantifiable cellular structures extracted from digital pathology images and shape, texture, and intensity features computed from CT scans to distinguish adenocarcinoma subtypes, including acinar, lepidic, papillary, solid, and mucinous. The project will deliver scalable machine learning tools that enable the joint analysis of quantitative features extracted from radiology and pathology images, and will evaluate their utility in predicting which nodules represent aggressive cancers.
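
To make the fusion step concrete, the sketch below shows one minimal way such a model could be assembled: per-case radiomic, pathomic, and semantic feature tables are aligned on a shared case identifier, concatenated into a single feature vector, and used to train a baseline classifier. The file names, column names, and choice of classifier are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal feature-level fusion sketch (illustrative only; file layouts,
# column names, and the classifier are assumptions, not the study pipeline).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-case feature tables keyed by a shared case identifier.
radiomic = pd.read_csv("radiomic_features.csv", index_col="case_id")   # CT shape/texture/intensity
pathomic = pd.read_csv("pathomic_features.csv", index_col="case_id")   # e.g., nuclear morphometry
semantic = pd.read_csv("semantic_features.csv", index_col="case_id")   # radiologist-coded descriptors
labels = pd.read_csv("labels.csv", index_col="case_id")["aggressive"]  # 1 = aggressive cancer

# Feature-level fusion: align cases and concatenate feature columns.
fused = (radiomic.join(pathomic, lsuffix="_rad", rsuffix="_path")
                 .join(semantic, rsuffix="_sem"))
cases = fused.index.intersection(labels.index)
X = fused.loc[cases].to_numpy(dtype=float)
y = labels.loc[cases].to_numpy()

# Baseline classifier on standardized, concatenated features.
model = make_pipeline(StandardScaler(), GradientBoostingClassifier(random_state=0))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold ROC AUC: {auc.mean():.3f} ± {auc.std():.3f}")
```

Feature-level concatenation is only one fusion strategy; intermediate or decision-level fusion could be substituted without changing the overall workflow.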
Aims

1. Develop and validate machine learning techniques to perform image normalization, detection, segmentation, and feature extraction tasks on low-dose computed tomography scans and digital whole slide images (an illustrative feature-extraction sketch follows this list).
2. Train and validate a prediction model that incorporates clinical, radiologic, and pathologic features to identify nodules that represent aggressive cancers.
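
As a rough illustration of the feature-extraction step in Aim 1, the sketch below computes a handful of first-order intensity and shape descriptors from a CT volume and a binary nodule segmentation mask. The function name, array layout, and feature choices are assumptions for illustration; the project's actual algorithms, and dedicated radiomics toolkits, compute far richer feature sets.

```python
# Illustrative radiomic feature extraction from a segmented nodule.
# Inputs: a CT volume in Hounsfield units and a same-shaped binary mask.
import numpy as np

def nodule_features(ct_hu: np.ndarray, mask: np.ndarray,
                    spacing_mm=(1.0, 1.0, 1.0)) -> dict:
    """Compute a few simple intensity and shape features inside a nodule mask."""
    voxels = ct_hu[mask > 0].astype(float)
    voxel_volume = float(np.prod(spacing_mm))        # mm^3 per voxel
    volume_mm3 = voxels.size * voxel_volume

    # Discretize intensities to estimate a histogram-based entropy.
    counts, _ = np.histogram(voxels, bins=64)
    probs = counts[counts > 0] / counts.sum()

    return {
        # First-order intensity statistics (Hounsfield units).
        "mean_hu": float(voxels.mean()),
        "std_hu": float(voxels.std()),
        "p10_hu": float(np.percentile(voxels, 10)),
        "p90_hu": float(np.percentile(voxels, 90)),
        "entropy_bits": float(-(probs * np.log2(probs)).sum()),
        # Simple shape descriptors from the segmentation mask.
        "volume_mm3": volume_mm3,
        "equiv_sphere_diameter_mm": 2.0 * (3.0 * volume_mm3 / (4.0 * np.pi)) ** (1.0 / 3.0),
    }
```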

Collaborators

Denise Aberle, MD
Ashley Prosper, MD
Alex Bui, PhD