Interpretable AI prediction of prostate tumor staging
In AI applications there is usually a trade-off between accuracy and interpretability: on one side very accurate but opaque (black-box) models, on the other interpretable (glass-box) models that usually do not reach black-box accuracy. To overcome this trade-off and provide intuitive decision support for clinicians, we plan to pursue three ideas:
1. Ensemble learning: We want to develop black-box and glass-box models and combine them via ensemble techniques, with the glass-box models focusing on interpretability and the black-box models on accuracy. In addition, we want to propose a general method to validate individual models based on feature importance and feature uncertainty; a minimal sketch of this idea follows the list below.
2. Finding interpretable representations of the latent variables, starting from simple models and adding complexity and features over time.
3. Computer vision methods: In cooperation with the Machine Vision & Learning Group at Ludwig Maximilian University of Munich, we want to evaluate whether models from the field of computer vision can support the final staging decision.
Application scenarios for our models will be developed in cooperation with clinicians from the Department of Urology at Heidelberg University Hospital.
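As a first, hedged illustration of the ensemble idea, the sketch below combines an interpretable logistic-regression model with a gradient-boosting black-box model via soft voting and reports permutation-based feature importances together with their uncertainty. The synthetic data, the specific model choices, and the use of scikit-learn are illustrative assumptions, not the project's actual data or pipeline.

```python
# Minimal glass-box / black-box ensemble sketch (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for tabular staging features (not real clinical data).
X, y = make_classification(n_samples=500, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))  # interpretable
black_box = GradientBoostingClassifier(random_state=0)                          # accurate

# Soft-voting ensemble: the glass-box model explains, the black-box model refines.
ensemble = VotingClassifier(
    estimators=[("glass", glass_box), ("black", black_box)],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))

# Model-agnostic feature importance with uncertainty (std over repeats),
# one possible building block for validating individual models.
imp = permutation_importance(ensemble, X_test, y_test, n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"feature {i}: {imp.importances_mean[i]:.3f} ± {imp.importances_std[i]:.3f}")
```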
Provide intuitive decision support for clinicians by:
- Ensemble learning to boost interpretability by quantifying feature importance and feature uncertainty
- Proof of concept: Use interpretable representations of the latent variables in generative models for explainable staging prediction (sketched below)
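The following sketch shows, under similarly hedged assumptions (synthetic data, scikit-learn, and PCA as the simplest latent-variable model in the "start simple, add complexity" spirit above), what an interpretable latent representation for staging prediction could look like: each latent variable is a linear combination of input features whose loadings can be inspected, and an interpretable classifier operates on the latent codes.

```python
# Minimal latent-representation sketch: linear latent variables (PCA) plus an
# interpretable classifier on the latent codes (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=2).fit(scaler.transform(X_train))        # latent variables z1, z2
clf = LogisticRegression().fit(pca.transform(scaler.transform(X_train)), y_train)

print("accuracy on latent codes:", clf.score(pca.transform(scaler.transform(X_test)), y_test))

# Each latent variable is a linear combination of input features, so its
# meaning can be read off from the loadings and discussed with clinicians.
for k, loadings in enumerate(pca.components_):
    top = np.argsort(np.abs(loadings))[::-1][:3]
    print(f"z{k + 1} driven mostly by features {top.tolist()}")
```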
Björn Ommer, Ludwig Maximilian University of Munich
Pingchuan Ma, Ludwig Maximilian University of Munich
Carlos Brandl, Heidelberg University