
Interpretable AI prediction of prostate tumor staging

Principal Investigator

Name
Matthias Weidemüller

Degrees
Ph.D.

Institution
Universität Heidelberg

Position Title
Professor

Email
weidemueller@uni-heidelberg.de

About this CDAS Project

Study
PLCO

Project ID
PLCO-897

Initial CDAS Request Approval
Jan 18, 2022

Title
Interpretable AI prediction of prostate tumor staging

Summary
We aim to develop an interpretable prediction model for prostate tumor staging. Tumor staging is an important step in the patient workflow, as it determines the space of treatment options available to the patient. Clinical staging is usually performed by physicians based on various parameters and scores and is therefore a rather subjective assessment, whereas ideally it should be objective. To achieve this, we need a variety of clinical and screening data, which can only be provided by the PLCO prostate cancer dataset.

For all AI applications, there is usually a trade-off between accuracy and interpretability. This results in either very accurate but hard-to-understand models (black-box models) or interpretable (glass-box) models, which usually do not reach the accuracy of black-box models. To overcome this trade-off and provide intuitive decision support for clinicians, we plan to pursue two ideas:

1. Ensemble learning: We want to develop both black-box and glass-box models and combine them via ensemble techniques, with the glass-box models focusing on interpretability and the black-box models on accuracy (a minimal sketch of this idea follows the list below). Additionally, we want to propose a general method to validate individual models based on feature importance and feature uncertainty.
2. Finding interpretable representations of the latent variables, starting from simple models and adding complexity and features over time (a simple illustration also follows below).
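
A minimal sketch of the first idea, assuming scikit-learn and synthetic stand-in data; the feature names are hypothetical illustrations, not actual PLCO variables, and this is not the project's actual pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data; the real project would use PLCO clinical/screening features.
X, y = make_classification(n_samples=500, n_features=6, n_informative=4, random_state=0)
feature_names = ["psa", "gleason_score", "age", "dre_result",
                 "biopsy_cores_pos", "family_history"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
black_box = GradientBoostingClassifier(random_state=0)

# Soft voting averages the predicted class probabilities of both members,
# so the accurate black-box model and the interpretable glass-box model
# contribute jointly to the final prediction.
ensemble = VotingClassifier(
    estimators=[("glass_box", glass_box), ("black_box", black_box)],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(f"ensemble accuracy: {ensemble.score(X_test, y_test):.3f}")

# Per-member feature importance: coefficients for the glass-box model,
# impurity-based importances for the black-box model.
coefs = ensemble.named_estimators_["glass_box"][-1].coef_.ravel()
imps = ensemble.named_estimators_["black_box"].feature_importances_
for name, c, i in zip(feature_names, coefs, imps):
    print(f"{name:>18s}  coef={c:+.2f}  importance={i:.2f}")
```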
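As a simple illustration of the "start simple" end of the second idea: even plain PCA yields linear latent variables whose loadings can be read off directly as feature contributions, before adding the complexity of a full generative model. Again synthetic data and hypothetical feature names:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # stand-in for clinical features
feature_names = ["psa", "gleason_score", "age", "biopsy_cores_pos"]

pca = PCA(n_components=2)
Z = pca.fit_transform(StandardScaler().fit_transform(X))

# Each latent variable is a weighted sum of the input features;
# the weights (loadings) are its interpretation.
for k, component in enumerate(pca.components_):
    terms = " + ".join(f"{w:+.2f}*{n}" for w, n in zip(component, feature_names))
    print(f"latent_{k} = {terms}")
```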

Application scenarios for our models will be developed in cooperation with clinicians from the Department of Urology at Heidelberg University Hospital.

Aims

Provide intuitive decision support for clinicians by:
- Ensemble learning to boost interpretability by quantifying feature importance and characteristics
- Proof of concept: Use interpretable representations of the latent variables in generative models for explainable staging prediction

Collaborators

Anna Nitschke, Universität Heidelberg
Pingchuan Ma, Universität Heidelberg
Prof. Dr. Matthias Weidemüller, Universität Heidelberg
Prof. Dr. Björn Ommer, Universität Heidelberg