Interpretable AI prediction of prostate tumor staging
In most AI applications there is a trade-off between accuracy and interpretability: the result is either a highly accurate but opaque (black-box) model or an interpretable (glass-box) model that usually does not reach the accuracy of its black-box counterpart. To overcome this trade-off and provide intuitive decision support for clinicians, we plan to pursue two ideas:
1. Ensemble learning: We will develop black-box and glass-box models and combine them with ensemble techniques, where the glass-box models contribute interpretability and the black-box models accuracy. In addition, we will propose a general method to validate the individual models based on feature importance and feature uncertainty (see the sketch after this list).
2. Interpretable latent representations: We will search for interpretable representations of the latent variables, starting from simple models and adding complexity and features over time.
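A minimal sketch of the ensemble idea, assuming tabular clinical features and binary staging labels. The concrete model choices (logistic regression as the glass-box, gradient boosting as the black-box), the voting weights, and the synthetic data are illustrative assumptions, not the project's final design; permutation importance stands in for the planned feature-importance and feature-uncertainty validation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for tabular clinical features and binary staging labels.
X, y = make_classification(n_samples=500, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glass-box model: coefficients are directly interpretable.
glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# Black-box model: higher capacity, less transparent.
black_box = GradientBoostingClassifier(random_state=0)

# Soft-voting ensemble: the black-box carries more weight for accuracy,
# the glass-box anchors the interpretation (weights are illustrative).
ensemble = VotingClassifier(
    estimators=[("glass", glass_box), ("black", black_box)],
    voting="soft",
    weights=[1.0, 2.0],
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))

# Validate the single models by comparing permutation feature importances:
# strong disagreement on important features flags the pipeline for review.
for name, model in [("glass", glass_box), ("black", black_box)]:
    model.fit(X_train, y_train)
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
    print(name, "importance mean:", np.round(result.importances_mean, 3),
          "std:", np.round(result.importances_std, 3))
```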
Application scenarios of our models will be developed in cooperation with clinicians from the Department of Urology at Heidelberg University Hospital.
Provide intuitive decision support for clinicians by:
- Ensemble learning to boost interpretability by quantifying feature importance and feature uncertainty
- Proof of concept: Use interpretable representations of the latent variables in generative models for explainable staging prediction
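A possible starting point for the proof of concept, sketched below under strong simplifying assumptions: PCA serves as the simplest latent model (to be replaced by richer generative models later), and a sparse linear classifier on the latent components keeps the staging prediction readable. The data, the number of components, and the regularization strength are hypothetical placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for imaging/clinical features and binary staging labels.
X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple latent model (PCA) followed by a sparse linear classifier:
# each staging prediction is a weighted sum of a few latent components,
# which can then be inspected and, ideally, mapped to clinical concepts.
model = Pipeline([
    ("latent", PCA(n_components=5, random_state=0)),
    ("clf", LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
])
model.fit(X_train, y_train)
print("staging accuracy:", model.score(X_test, y_test))

# Which latent components drive the staging decision, and which input
# features load on them? Both are directly readable from the linear maps.
latent_weights = model.named_steps["clf"].coef_.ravel()
loadings = model.named_steps["latent"].components_
for k, w in enumerate(latent_weights):
    if abs(w) > 1e-6:
        top = np.argsort(np.abs(loadings[k]))[::-1][:3]
        print(f"component {k}: weight {w:+.2f}, top input features {top.tolist()}")
```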
Anna Nitschke, Universität Heidelberg
Pingchuan Ma, Universität Heidelberg
Prof. Dr. Matthias Weidemüller, Universität Heidelberg
Prof. Dr. Björn Ommer, Universität Heidelberg