Principal Investigator
Name
Javier Alvarez Valle
Degrees
MS
Institution
Microsoft
Position Title
Principal Research Manager
Email
About this CDAS Project
Study
NLST
Project ID
NLST-776
Initial CDAS Request Approval
Mar 30, 2021
Title
Multimodal patient predictions
Summary
No prior work has attempted to leverage deep learning model representations from the fusion of medical imaging data with clinical (EMR) data over time to compare trajectory and change for a given patient. Most deep learning methods for radiology rely on collecting large numbers of labeled radiology images for supervised training, which introduces several constraints. First, collecting large labeled training sets is expensive and can only be accomplished by well-funded research organizations. Second, it can be difficult to assign labels for many radiology tasks, particularly in multi-modal data contexts. Recently, new self-supervised methods based on contrastive learning have been shown to generate representations that are as good for classification as those produced by purely supervised methods.

We will explore a convolution-free approach to image classification built exclusively on self-attention over space and time, adapting the standard Transformer architecture to multi-slice (3D) chest CT imaging by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Leveraging multi-modal representations (the fusion of CT and clinical data) in patient timelines will allow us to visualize the model's representation of each patient and plot patient trajectories to illustrate the likelihood of a variety of endpoints (mortality, response to therapy, etc.). This approach could serve as a way to explore the learned representations of a given patient in a precision-medicine setting. Clinicians could leverage such a model to track patients' responses to various drugs and treatments as an objective tool for comparing responses under different management algorithms and inspiring better management strategies.
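
As a rough illustration of the convolution-free direction described above, the sketch below (PyTorch; class names, parameters, and dimensions are hypothetical toy choices, not the project's actual model) splits each CT slice into flattened patches, embeds them linearly, applies a standard Transformer encoder jointly over the slice and patch dimensions, and concatenates the resulting volume summary with an embedded clinical-feature vector as a simple late-fusion step. A practical version for full-resolution multi-slice CT would need factorized (divided space/time) attention, and the contrastive self-supervised pretraining mentioned in the summary is not shown here.

```python
# Minimal sketch only: names and sizes are illustrative, not from the project.
# Toy dimensions keep full self-attention cheap; real chest CT volumes would
# require factorized attention over space and the slice axis.
import torch
import torch.nn as nn


class CTPatchTransformer(nn.Module):
    """Convolution-free encoder over a 3D CT volume plus clinical features."""

    def __init__(self, img_size=64, patch_size=16, num_slices=8,
                 embed_dim=128, depth=2, num_heads=4, clinical_dim=16):
        super().__init__()
        self.patch_size = patch_size
        patches_per_slice = (img_size // patch_size) ** 2
        seq_len = num_slices * patches_per_slice
        # Linear embedding of flattened pixel patches (no convolutions)
        self.patch_embed = nn.Linear(patch_size * patch_size, embed_dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # One learned position per (slice, patch) location in the joint sequence
        self.pos_embed = nn.Parameter(torch.zeros(1, seq_len + 1, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        # Self-attention runs over space and the slice ("time") axis jointly
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Simple late fusion: concatenate CT summary with embedded EMR features
        self.clinical_embed = nn.Linear(clinical_dim, embed_dim)
        self.fusion = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, volume, clinical):
        # volume: (B, num_slices, H, W); clinical: (B, clinical_dim)
        b, d, h, w = volume.shape
        p = self.patch_size
        # Carve every slice into non-overlapping p x p patches, then flatten
        patches = volume.unfold(2, p, p).unfold(3, p, p)   # (B, D, H/p, W/p, p, p)
        patches = patches.reshape(b, -1, p * p)            # (B, D*(H/p)*(W/p), p*p)
        tokens = self.patch_embed(patches)
        cls = self.cls_token.expand(b, -1, -1)
        x = torch.cat([cls, tokens], dim=1) + self.pos_embed
        x = self.encoder(x)
        ct_repr = x[:, 0]                                   # CLS token summarizes the volume
        fused = torch.cat([ct_repr, self.clinical_embed(clinical)], dim=-1)
        return self.fusion(fused)                           # joint patient representation


# Example: one synthetic 8-slice, 64x64 volume with 16 clinical variables
model = CTPatchTransformer()
ct = torch.randn(1, 8, 64, 64)
emr = torch.randn(1, 16)
print(model(ct, emr).shape)  # torch.Size([1, 128])
```

The fused per-patient embedding produced this way is the kind of representation that could be tracked across timepoints to plot patient trajectories, as described in the summary.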
Aims

- Explore multi-modal predictions about patient trajectory
- Explore self-supervised learning
- Explore transformers with CT

Collaborators

Matthew Lungren (Stanford)