Principal Investigator
Name
Björn Ommer
Degrees
Ph.D.
Institution
Ludwig Maximilian University of Munich (Machine Vision & Learning Group)
Position Title
Full Professor
About this CDAS Project
Study
PLCO
Project ID
PLCOI-970
Initial CDAS Request Approval
Apr 25, 2022
Title
Interpretable AI prediction of prostate tumor staging
Summary
We want to develop an interpretable prediction model for prostate tumor staging. Tumor staging is an important step in the patient workflow, as it defines the space of treatment options for the patient. Clinical staging is usually performed by physicians based on various parameters and scores, making it a rather subjective category, whereas ideally it should be objective. To achieve this, we need a variety of clinical and screening data, which can only be provided by the PLCO prostate cancer dataset.
For all AI applications, there is usually a trade-off between accuracy and interpretability. This results in either very accurate but non-understandable models (black-box models) or in interpretable (glass-box) models, which usually do not reach the accuracy of the black-box models. To overcome this and provide intuitive decision support for clinicians, we plan to implement three ideas:
1. Ensemble learning: We want to develop black-box and glass-box models and combine them with ensemble techniques, where the glass-box models contribute interpretability and the black-box models accuracy (see the sketch after this list). Additionally, we want to propose a general method to validate single models based on feature importance and feature uncertainty.
2. Finding interpretable representations of the latent variables, starting from simple models and adding complexity and features over time.
3. Cooperating with the Machine Vision & Learning Group at Ludwig Maximilian University of Munich, we also want to apply models from the field of computer vision to our task and check whether these methods help support the final decision.
Application scenarios of our models will be developed in cooperation with clinicians from the Department of Urology at Heidelberg University Hospital.
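
As a rough illustration of idea 1, a minimal sketch of how a glass-box and a black-box model could be combined by soft voting, with the glass-box coefficients serving as a first feature-importance check. This assumes scikit-learn; the placeholder data and the particular estimator choices (logistic regression, gradient boosting) are illustrative assumptions, not the project's actual PLCO pipeline:

```python
# Sketch: soft-voting ensemble of a glass-box model (logistic regression)
# and a black-box model (gradient boosting). Placeholder random data stands
# in for the PLCO feature matrix X and stage labels y.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(500, 8)            # placeholder clinical/screening features
y = np.random.randint(0, 2, 500)      # placeholder binary stage labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
black_box = GradientBoostingClassifier(random_state=0)

ensemble = VotingClassifier(
    estimators=[("glass", glass_box), ("black", black_box)],
    voting="soft",                    # average predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))

# Interpretability hook: inspect the fitted glass-box member's coefficients
# as a simple, model-based feature-importance signal.
coefs = ensemble.named_estimators_["glass"][-1].coef_.ravel()
print("glass-box feature weights:", coefs)
```

A validation method based on feature uncertainty, as proposed above, could then compare such per-model importance estimates across ensemble members.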
Aims

Provide intuitive decision support for clinicians by:
- Ensemble learning to boost interpretability by quantifying feature importance and characteristics
- Proof of concept: Use interpretable representations of the latent variables in generative models for explainable staging prediction (see the sketch below)
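
As a proof-of-concept sketch of the latent-representation aim: here PCA stands in as the simplest possible latent-variable model, with a linear probe checking which latent directions carry stage-predictive signal. This is an assumption-laden illustration with synthetic data; the project itself would start from such simple models and move to richer generative models over time:

```python
# Sketch: fit a simple latent-variable model (PCA), then probe the latents
# with a linear classifier to flag interpretable, stage-predictive directions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                   # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic stage label

latents = PCA(n_components=3).fit_transform(X)  # simple latent variables
probe = LogisticRegression().fit(latents, y)

# Large-magnitude probe weights mark latent components worth interpreting.
print("probe weight per latent component:", probe.coef_.ravel())
```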

Collaborators

Björn Ommer, Ludwig Maximilian University of Munich
Pingchuan Ma, Ludwig Maximilian University of Munich
Carlos Brandl, Heidelberg University